Many of the poems in National Monuments explore bodies, particularly the bodies of indigenous women worldwide, as monuments--in life, in photos, in graves, in traveling exhibitions, and in plastic representations at the airport. Erdrich sometimes imagines what ancient bones would say if they could speak. Her poems remind us that we make monuments out of what remains--monuments are actually our own imaginings of the meaning or significance of things that are, in themselves, silent. As Erdrich moves from the expectedly poetic to the voice of a newspaper headline or popular culture, we are jarred into wondering how we make our own meanings when the present is so immediately confronted by the past (or vice versa). The language of the scientists that Erdrich sometimes quotes in epigraphs seems reductive in comparison to the richness of tone and meaning that these poems--filled with puns, allusions, and wordplay--provide. Erdrich's poetry is literary in the best sense of the word, infused with an awareness of the poetic canon. Her revisions of and replies to poems by William Carlos Williams, Robert Frost, and others offer an indigenous perspective quite different from the monuments of American literature they address. About the Author: Collaborative artist, filmmaker, and independent curator Heid E. Erdrich teaches in the low-residency MFA Creative Writing program of Augsburg College. She is the author of five collections of poetry, including National Monuments, which won the 2009 Minnesota Book Award. Erdrich grew up in Wahpeton, North Dakota, and is Ojibwe enrolled at Turtle Mountain.
https://bookshop.org/books/national-monuments/9780870138485?aid=2386&listref=native-american-indigenous-poets
Students are feeling the squeeze from Capitol Hill once again, with Congress voting in December to tighten Pell Grant restrictions, which will go into effect July 1, 2012. After a narrow miss from significant cuts during the debt ceiling debacle on Aug. 2, 2011, students may face greater restrictions on the amount of money they may qualify for from the federal program, as well as the number of semesters they may receive it. The number of semesters that students can receive Pell Grants has been reduced from 18 to 12. While the total amount per year a student may receive has stayed the same – $5,500 – the yearly income cut-off for receiving the full grant has been dropped from $30,000 to just $23,000. This may have a significant impact on students, particularly those nearing graduation or who have been in school for several years. The changes are also retroactive, making it all the more important for students to plan their academic careers wisely. “We need to be armed with this information so we can start making some good choices,” said Sarah Volstad, Director of Legislative Affairs for Student Senate. These changes come on the heels of the elimination of the double Pell Grant in 2011, which cut aid for students wishing to work through the summer semester to fast-track to graduation. That change, and the new restrictions ratified last month, will both go into effect this summer. “It’s definitely not like it was,” said Volstad. “Ten years ago, the state grants were plentiful … tuition was much lower.” With 65% of MCTC students relying on Pell Grants to attend school, according to the American College Review, all of these changes could make for a harsh 2012 for students who struggle financially or who are approaching the new semester limits. MCTC student Andy Freeman was unaware of the changes to the Pell Grant and is concerned about the effect they may have on students’ ability to go to college. 
“I feel it’s very important for folks to go to school and one of the best things out there is financial aid,” he said. The government is feeling the pinch as well, with the number of Pell Grant recipients having increased by more than 50% since 2008, according to the Department of Education. With tuition also on the rise as government funding to public colleges is slashed, high unemployment, and stagnant wages, paying for an education is becoming a daunting task, and one with increasingly strict time limits.
https://citycollegenews.com/2523/news/pell-grant-cuts-to-be-enacted-fall-2012/
All major vegetation cutting (i.e., tree removal, tree pruning, etc.) should be completed well before June 1, the beginning of hurricane season. We encourage residents to perform all major pruning and tree removal from December 1 through April 30. Once a storm has been named:
- Do not cut down trees or perform major yard work.
- Do not begin construction projects that generate debris.
- Once a watch or warning has been issued, do not trim or prune vegetation of any kind.
- Do not take materials to the curb or the landfill during a watch or warning period.
After the storm:
- Please be patient.
- Keep household garbage, recycling, yard debris, and construction storm debris in separate piles at least 3 feet apart.
- Securely containerize all household garbage in garbage bags to be placed in the correct garbage containers curbside on your scheduled day.
The City of Winter Haven will provide regular updates on the progress of debris collection. We ask all residents within Winter Haven’s City limits to help us restore the City to its pre-storm state. Every resident’s cooperation and support enables us to complete the entire process in the quickest, safest, and most efficient manner possible.

| Task | Days to Complete |
| --- | --- |
| Assess all areas within the City to determine the amount of damage, debris, and hardest-hit areas | 2-3 |
| Set up temporary debris sites | 3-4 |
| Deploy specialized storm debris collection equipment | 4-5 |
| Complete collection of storm debris | 30-160 |

Important Information:
- Do not call and ask that trucks be pulled from scheduled routes to pick up your own debris first. Deviation from carefully planned routes causes delays in the entire system.
- There is no reimbursement provided to any resident or homeowner association (HOA) who hires a private contractor to remove and dispose of storm-related debris.
- For additional information, contact the Solid Waste Division at 863-291-5756.
https://www.mywinterhaven.com/government/winter-haven-emergency-information-and-resources/solid-waste-after-the-storm-q-a/
Research and write your family history. Tips gathered here:
- Researching and writing a family history can be one of the most rewarding challenges of a genealogy journey.
- The book How to Write a Family History, by T. V. H. FitzHugh, is particularly recommended for the advice it gives on writing a real family history.
- Gather the information: if you want to write a thorough and interesting family history, you must collect thorough and interesting information.
- There is a specific style of narrative family history; guides on writing genealogical reports cover it.
- You and your family members can preserve unwritten family history using oral history techniques.
- Many people shy away from including too much writing in their family history books, assuming it takes some special talent; often this results in a quick rendition of facts rather than a story.
- Creating a family history book, from planning to printing, is far easier with the help of a computer program.
- A step-by-step tutorial can show you how to compose an informative, readable family history.
Writing about family history: getting started. Duane Roen, Arizona State University, [email protected]. Many people have been meaning to start writing their family history stories for years, but the sheer magnitude of the project stops them in their tracks.
http://bvpaperugis.blinkingti.me/how-to-write-a-family-history.html
THE ENGLISH VERSION OF THE TERMS AND CONDITIONS IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY. SHOULD ANY CONTRADICTION ARISE BETWEEN THE ENGLISH VERSION AND THE SPANISH VERSION, THE LATTER SHALL ALWAYS PREVAIL. TALLERES MECANICOS E INDUSTRIALES SA, responsible for the website, hereinafter the RESPONSIBLE, makes this document available to users in order to comply with the obligations set forth in Act 34/2002, of July 11, on Information Society Services and Electronic Commerce (LSSICE), BOE No. 166, and to inform all users of the website of its conditions of use. TALLERES MECANICOS E INDUSTRIALES S.A. reserves the right to modify any information that may appear on the website without any obligation to give prior notice to users; publication on the website of TALLERES MECANICOS E INDUSTRIALES S.A. shall be understood as sufficient notification. Company name: TALLERES MECÁNICOS E INDUSTRIALES S.A. The website, including but not limited to its programming, editing, compilation and other elements necessary for its operation, designs, logos, text and/or graphics, is the property of the RESPONSIBLE or, where applicable, is used under license or express authorization from the authors. All the contents of the website are duly protected by intellectual and industrial property law, as well as registered in the corresponding public registers. Regardless of the purpose for which they are intended, any total or partial reproduction, use, exploitation, distribution or marketing requires in all cases prior written authorization from the RESPONSIBLE. Any use not previously authorized is considered a serious breach of the author's intellectual or industrial property rights. Designs, logos, text and/or graphics not belonging to the RESPONSIBLE that may appear on the website belong to their respective owners, who are themselves responsible for any controversy that may arise with respect to them. 
The RESPONSIBLE expressly authorizes third parties to link directly to specific contents of the website, provided they in any case link to the main website, www.tameinsa.com. The RESPONSIBLE acknowledges the corresponding intellectual and industrial property rights in favor of their owners; their mere mention or appearance on the website does not imply the existence of any rights or responsibility over them, nor endorsement, sponsorship or recommendation by the RESPONSIBLE. To make any kind of observation regarding possible breaches of intellectual or industrial property rights, as well as regarding any of the contents of the website, you may do so through the email [email protected]. This website may use technical cookies (small information files that the server sends to the computer of the person accessing the page) to carry out certain functions that are considered essential for the proper functioning and display of the site. The cookies used are, in any case, temporary in nature, serve the sole purpose of making navigation more efficient, and disappear when the user's session ends. In no case do these cookies by themselves provide personal data, and they will not be used to collect such data. Users may configure their browser to be alerted to the receipt of cookies and to prevent their installation on their computer; please consult your browser's instructions for further information. From the website, it is possible to be redirected to content on third-party websites. Since the RESPONSIBLE cannot always control the content introduced by third parties on their respective websites, it does not assume any responsibility for such content. In any case, the RESPONSIBLE will proceed to the immediate withdrawal of any content that may contravene national or international legislation, morality or public order, immediately removing the redirection to that website and bringing the content in question to the attention of the competent authorities. 
The RESPONSIBLE is not responsible for the information and content stored, by way of example but not limitation, in forums, chats, blog generators, comments, social networks or any other means that allows third parties to publish content on the RESPONSIBLE's website independently. However, in compliance with the provisions of articles 11 and 16 of the LSSICE, the RESPONSIBLE makes itself available to all users, authorities and security forces, actively collaborating in the withdrawal or, where appropriate, blocking of any content that may affect or contravene national or international legislation, the rights of third parties, or morality and public order. If a user considers that any content on the website could be susceptible to this classification, please notify the website administrator immediately. This website has been reviewed and tested to work properly. In principle, correct operation can be guaranteed 365 days a year, 24 hours a day. However, the RESPONSIBLE does not rule out the possibility of certain programming errors, or events of force majeure, natural catastrophes, strikes or similar circumstances that make it impossible to access the website. The website's servers can automatically detect the IP address and domain name used by the user. An IP address is a number automatically assigned to a computer when it connects to the Internet. All this information is recorded in a duly registered server activity file that allows subsequent processing of the data solely to obtain statistical measurements: the number of page impressions, the number of visits made to the web servers, the order of visits, the access point, and so on. For the resolution of all disputes or issues related to this website or the activities carried out therein, Spanish legislation will apply, to which the parties expressly submit; the Courts and Tribunals of A Coruña shall be competent to resolve all disputes arising from or related to its use.
http://www.tameinsa.com/en/legal-notice/
Shipping Weight: 0.12 kg. The Shipping Weight includes the product, protective packaging material and the actual shipping box. In addition, the Shipping Weight may be adjusted for the Dimensional Weight (e.g. length, width & height) of a package. It is important to note that certain types of products (e.g. glass containers, liquids, fragile, refrigerated or ice packed) will often require protective packaging material. As such, these products will reflect a higher Shipping Weight compared to the unprotected product. - Product code: SOR-01378 - UPC Code: 076280013788 - Package Quantity: 100 Count - Dimensions: 10.2 x 5.1 x 5.1 cm, 0.1 kg Product overview Description - Dietary Supplement - Mushroom - Lab Verified Discussion: Maitake, also known as Hen of the Woods, is a popular mushroom native to Asia, where it has been used as both a food and a medicine for thousands of years. This formula combines Maitake with two other traditional Asian mushrooms, Reishi and Shiitake. These mushrooms have adaptogenic properties, meaning they support the body’s ability to adapt to stress. Studies suggest that the polysaccharides found in mushrooms, known as Beta Glucans, may provide nutritive support for healthy immune system function. Suggested use: Use only as directed. Take 1 VegCap two times daily with a meal or glass of water. Other ingredients: Vegetable cellulose capsule, silica and magnesium stearate. Warnings: Do not use if safety seal is broken or missing. Keep out of reach of children. Keep your licensed health care practitioner informed when using this product. Store in a cool, dry place. Disclaimer: While iHerb strives to ensure the accuracy of its product images and information, some manufacturing changes to packaging and/or ingredients may be pending update on our site. 
Although items may occasionally ship with alternate packaging, freshness is always guaranteed. We recommend that you read labels, warnings and directions of all products before use and not rely solely on the information provided by iHerb.
https://md.iherb.com/pr/solaray-mushroom-immune-complex-with-maitake-reishi-shiitake-100-vegcaps/70029
Citation: Sandvik, H. (2013) Methods and set of criteria. Pages 57–64 in Alien species in Norway – with the Norwegian Black List 2012 (edited by L. Gederaas, T. L. Moen, S. Skjelseth, and L.-K. Larsen), Norwegian Biodiversity Information Centre, Trondheim, Norway [English parallel edition of a report that was originally published in Norwegian]. Summary: The Norwegian risk classification of alien species is an assessment of their impact on the native biota. Impact is the product of a species’ local ecological effects and the area it has colonised, and so the risk assessment uses a two-dimensional classification scheme, in which the x axis measures the species’ invasion potential, while the y axis expresses its ecological effects. The scheme applies three criteria to quantify invasion potential (incl. rates of establishment and spread) and six criteria to capture effects on native species and landscapes. Full text: © 2012 Norwegian Biodiversity Information Centre. If you accept (i) that further reproduction, and all further use other than for personal research, is subject to permission from the publisher (Norwegian Biodiversity Information Centre), and (ii) that printouts have to be made on recycled paper, you may download the article here (pdf, 0.7 MB). The entire report can be ordered or downloaded here. Related publications: The set of criteria underlying the Norwegian risk classification has now been published in Biodiversity and Conservation.
http://www.evol.no/hanno/12/AS2.htm
Available under License Creative Commons Attribution. Abstract: When parasitic plants and aphid herbivores share a host, both direct and indirect ecological effects (IEEs) can influence evolutionary processes. We used a hemiparasitic plant (Rhinanthus minor), a grass host (Hordeum vulgare) and a cereal aphid (Sitobion avenae) to investigate the genetics of IEEs between the aphid and the parasitic plant, and looked to see how these might affect or be influenced by the genetic diversity of the host plants. Survival of R. minor depended on the parasite's population of origin, the genotypes of the aphids sharing the host and the genetic diversity in the host plant community. Hence the indirect effects of the aphids on the parasitic plants depended on the genetic environment of the system. Here, we show that genetic variation can be important in determining the outcome of IEEs. Therefore, IEEs have the potential to influence evolutionary processes and the continuity of species interactions over time.
https://e-space.mmu.ac.uk/617301/
Knowing how to assess the types of damages that occur in pipelines is often challenging, especially considering the potential for failures. Additionally, operators are often hesitant to shut down operation or remove lines from service unless absolutely necessary. For this reason, Stress Engineering Services is frequently called upon to work with operators to assess the extent of pipeline damage. Our damage-assessment approach is built on our experience from prior evaluations and draws heavily from resources involving finite element methods as well as a database integrating years of full-scale pipeline testing. Our goal is to help pipeline operators better position themselves to appropriately respond to pipeline damage using a methodology that permits the continued safe operation of their pipeline systems. Anomaly classification is one of the most critical elements for assessing pipeline damage. It is the starting point that leads to a better understanding of the damage, characterization of the behavior, and predictability of the response. Although a wealth of information exists for a wide range of anomalies, it is often difficult to organize that information so that it is useful. The pipeline engineers at Stress Engineering Services possess the knowledge and expertise to review existing documentation to determine exactly what information is required to conduct an informed assessment.
https://www.stress.com/capabilities/pipelines/damage-assessment/
Iloilo City has overtaken the province of Aklan in terms of tourist arrivals. The number of tourists who visited the city, based on data released by the Department of Tourism (DOT-6), reached 1.24 million last year, higher by 15.33 percent compared to 2017’s figure. This year, the city’s tourist arrivals are projected to increase further to 1.4 million. Iloilo City dethroned Aklan. Tourist arrivals in the province, which is known for its Boracay Island, decreased by 50.32 percent to 1.1 million in 2018. DOT-6 Director Atty. Helen J. Catalbas attributed the decline to the six-month closure of the world-famous island. Negros Occidental came in third with 920,242 tourist arrivals, followed by Bacolod City (835,453), Iloilo province (347,354), Capiz (265,662), Guimaras (133,525), and Antique (108,220). Apart from Aklan, only Antique among the cities and provinces in Western Visayas posted a decrease in tourist arrivals, by some 33.6 percent. All in all, the region welcomed 4.96 million local and foreign tourists last year, down by 15.33 percent.
https://www.iloilometropolitantimes.com/iloilo-city-records-1-24m-tourist-arrivals-dethrones-aklan/
Job Description: Using PMO best practices and standards, partner with product/program managers to facilitate a project management process that fits their needs, and oversee and execute on that process/plan. Create, maintain and control the project schedule and dashboard, facilitate meetings, and proactively identify risks to the project. Shepherd the project team through the release process, being mindful of all requirements. Facilitate program communications, identify and implement continuous improvement practices, and provide regular status reports as required. 2-4 years of HW/FW project management experience; Bachelor's degree required; PMI certification desired. Technical PM experience in HW and/or FW is required. Must be able to show product ship history. Knowledge/background in software industry products/services/applications, with in-depth knowledge of Microsoft's products/services/applications preferred. Must possess strong cross-team/group/org collaboration skills and the ability to foresee and analyze project risks, develop a risk management plan, and mitigate subsequent issues. The ideal candidate will have strong analytical skills and the ability to understand concepts and situations that pass by many others. Specialized knowledge as defined by the project is required. Must have excellent communication skills, experience working with external vendors, strong project management skills, and demonstrated success in adult learning and training principles/skills. Proficiency in Microsoft Office required.
https://jobs.protingent.com/jb/Technical-PM-2-Jobs-in-Redmond-Washington/4482195
Things to Consider in a Priming Game The priming game is a powerful strategy in backgammon, and when both players execute one, the showdown becomes very interesting. Let's take a look at the different things we need to consider when executing a priming game in backgammon. When a backgammon game turns into a blocking contest (meaning that both players have set up primes and have enemy checkers behind each of their primes), it becomes one of the most interesting priming situations in backgammon. Both players are trying to free their trapped checkers, and they also try to build their positions to make sure that any checker sent to the bar during this contest stays there for quite a while. Here are some things you might want to consider when this situation arises during a game of backgammon. First is the length of your prime on the backgammon board. Remember that the longer the prime, the harder it is for your opponent to escape the trapped checkers. Another thing to consider during a priming game is the position of your opponent's back checkers. This is an important factor to pay attention to. Ideally, back checkers should be positioned right next to the prime, since that position gives them better chances to jump over it. Back checkers that are able to occupy your five-point can be bad news. If your opponent has made an anchor at the five-point, that adds to the items you should keep an eye on when executing a priming game. But if there is a gap between your opponent's anchor and your prime, the situation is better for you (especially if you have a six-point prime), because it is harder for your opponent to jump over your prime. Another important thing to consider during a priming game in backgammon is how you can advance your checkers. 
It is going to be a huge waste if you have to break up your prime because you can't smoothly move your checkers forward. This also means you have to check out if you have any spares you can use to cover points further ahead in your own home board. If ever you have to move your checkers forward, use your spares first. Then, move the checkers from the back end of your prime. The priming game is truly a powerful strategy in backgammon. The player who knows how to maintain the prime has the edge in a backgammon priming game.
http://www.bgplatform.com/things-to-consider-in.html
We present the second part of the Hamburg/SAO Survey for Emission-Line Galaxies (hereafter HSS; SAO – Special Astrophysical Observatory, Russia), which is based on the digitized objective-prism photoplate database of the Hamburg Quasar Survey (HQS). The main goal of the project is the search for emission-line galaxies (ELG) in order to create a new deep sample of blue compact/H II galaxies (BCG) over a large sky area. Another important goal of this work is the search for new extremely low-metallicity galaxies. In this paper we present new results of spectroscopy obtained with the 6 m Russian telescope. The main ELG candidate selection criteria applied are a sufficiently blue or flat continuum (near λ4000 Å) and the presence of strong or moderate [O III] λλ4959, 5007 Å emission lines recognized on digitized prism spectra of galaxies with survey-estimated B-magnitudes in the range . No other criteria were applied. The spectroscopy resulted in the detection and quantitative spectral classification of 134 emission-line objects. For 121 of them the redshifts are determined for the first time. For 13 previously known ELGs, emission-line ratios are presented for the first time. Of the 134 emission-line objects, 108 are classified as BCG/H II galaxies and probable BCGs, 6 as QSOs, 1 as a Seyfert galaxy, 1 as a super-association in a dwarf spiral galaxy, 2 as probable LINERs, 14 as low-excitation objects – either of starburst nuclei (SBN) or dwarf amorphous nuclei starburst galaxy (DANS) type – and 2 as unclassified. 23 galaxies did not show significant emission lines. The five most metal-deficient BCGs discovered have oxygen abundances log(O/H)+12 in the range 7.4 to 7.7, similar to the most metal-deficient BCGs known before. Tables 2 to 6 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html. Figures A1 to A13 will be made available only in the electronic version of the journal.
https://aas.aanda.org/articles/aas/abs/1999/11/ds1671/ds1671.html
- What is machine learning?
- Key phases of machine learning
- Prediction API model of machine learning

What is Machine Learning? Simply speaking, Machine Learning is a set of artificial intelligence techniques which are used to solve one of the following problems based on the examples at hand:
- Classification problems: Problems having “which type” as a question. For example, “which type” of email is this? (Spam or ham.) In a classification problem, one out of a fixed number of answers is chosen.
- Regression problems: Problems having “how much” as a question. For example, “how much” should be the price of a house in a given locality? Simply speaking, regression problems involve a numeric answer.

Machine learning is a key aspect of data science. It allows data scientists to apply existing data sets to a machine learning algorithm and make predictions based on them. In other words, a person wanting to become a data scientist must learn machine learning algorithms to be able to predict/recommend. Key Phases of Machine Learning Following are the key steps in machine learning:
- Training: Train the model
- Prediction: Predict using the model, given an input data set

Prediction API Model of Machine Learning The above steps of machine learning could be represented as follows, from an API perspective. Thus, whether using R or Python APIs, the API structure would look like this:
# Model created based on a given data set
model = createModelAPI(existingDataSet)
# Model is fed with a new data set, newDataSet, which gives predicted output, predictedOutput
predictedOutput = createPredictionAPI(model, newDataSet)
Let's take an example of linear regression using the R programming console. Look at the code below:
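The R example itself is not preserved in this excerpt. As a stand-in, here is a minimal sketch of the train/predict API pattern described above, written in plain Python. The names `createModelAPI` and `createPredictionAPI` follow the hypothetical API in the text, and the tiny hand-rolled least-squares fit merely stands in for a real library routine (such as R's `lm()` or scikit-learn's `LinearRegression`):

```python
# Minimal sketch of the train/predict API pattern described above.
# createModelAPI and createPredictionAPI are the hypothetical names
# from the text; the "model" here is a one-variable least-squares
# linear regression fitted without any external libraries.

def createModelAPI(existing_data_set):
    """Training phase: fit y = a*x + b to (x, y) pairs by least squares."""
    xs = [x for x, _ in existing_data_set]
    ys = [y for _, y in existing_data_set]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in existing_data_set) / \
        sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return {"slope": slope, "intercept": intercept}

def createPredictionAPI(model, new_data_set):
    """Prediction phase: apply the fitted model to new x values."""
    return [model["slope"] * x + model["intercept"] for x in new_data_set]

# Model created based on a given data set (here y = 2x + 1 exactly)
model = createModelAPI([(1, 3), (2, 5), (3, 7)])
# Model is fed with a new data set, giving the predicted output
predicted = createPredictionAPI(model, [4, 5])
print(predicted)  # → [9.0, 11.0]
```

The point of the sketch is the two-phase shape, not the regression itself: training consumes the existing data set and returns a model object, and prediction consumes that model plus a new data set, exactly as the two API calls in the text describe.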
https://vitalflux.com/data-science-quick-start-guide-machine-learning/
The latest ‘Open Budget Index‘ (2008), produced by the Open Budget Initiative, ranks governments according to the information they make available to the public throughout the budget process. The main findings are: Only five countries of the 85 surveyed—France, New Zealand, South Africa, the United Kingdom, and the United States—make extensive information publicly available as required by generally accepted good public financial management practices. On average, countries surveyed provide minimal information on their central government’s budget and financial activities. Twenty-five countries surveyed provide scant or no budget information. These include low-income countries like Cambodia, the Democratic Republic of Congo, Nicaragua, and the Kyrgyz Republic, as well as several middle- and high-income countries, such as China, Nigeria, and Saudi Arabia. The least transparent countries are mostly located in the Middle East and North Africa and in sub-Saharan Africa. The worst performers tend to be low-income countries and often depend heavily on revenues from foreign aid or oil and gas exports. Many poor performers have weak democratic institutions or are governed by autocratic regimes. In Croatia, Kenya, Nepal, and Sri Lanka, significant improvements either were influenced by the activities of civil society groups or have created opportunities for greater civil society interventions. Important improvements in budget transparency were also documented in Bulgaria, Egypt, Georgia, and Papua New Guinea. There is also evidence that good performance can occur in challenging contexts: Jordan and South Africa stand out among their regional counterparts. Among lower-income countries, Peru and Sri Lanka both provide their citizens with a significant amount of budget information. Here’s a mini league table from this week’s Economist. For the full rankings see here.
https://oxfamblogs.org/fp2p/how-open-is-your-government-find-out-here/
Vatican City, Mar 23, 2017 / 13:22 pm On Thursday a Vatican event on the prevention of child abuse focused on the importance of education in schools and parishes in the safeguarding of children – not only for teachers, but for parents and children – and on the Church's role. The event was led by Cardinal Sean P. O'Malley of Boston, head of the Pontifical Commission for the Protection of Minors, who told CNA at the March 23 event that Catholic schools are, of course, a very important part of the Church's and the Commission's ministry. There are "60 million children in our care in Catholic schools and so this kind of a conference is extremely important for the ministry of the Church," O'Malley said. "And we were very gratified that so many cardinals made time to be a part of this." The seminar was attended by five different cardinals in addition to O'Malley, including Cardinal João Braz de Aviz, head of the Congregation for Institutes of Consecrated Life and Societies of Apostolic Life, and Cardinal Marc Ouellet, head of the Congregation for Bishops.
https://www.catholicnewsagency.com/news/35686/vatican-abuse-prevention-event-extremely-important-for-church
# The Rumba Kings

The Rumba Kings is an original American world music band co-founded in 2015 in Seattle, Washington, by producer/songwriter/bassist and former Capitol Records recording artist Johnny Bacolas (best known for being a member of the band Second Coming), and guitarist/songwriter George Stevens. The songwriting is strongly influenced by Latin and Mediterranean music, and is mostly nylon-guitar driven, influenced by such artists as Yanni and The Gipsy Kings. Regarding the band's songs, Stevens states, "If it isn't beautiful, it doesn't make the cut," while Bacolas describes the band's music as "passionate and beautiful." In 2018, the band released their debut double-album, The Instrumental and Vocal Sessions, Vol. I, and in 2019, released their sophomore full-length album, The Instrumental Sessions, Vol. II. Subsequently, the band released three singles in 2020.

## History (2015–present)

In June 2015, Bacolas and Stevens formed The Rumba Kings and spent the next two years collaborating in the studio, writing their debut album. Soon after, they formed a live band from musicians they both knew. In April 2016, the group began performing live at small local bars and bistros. In 2018, the band released their double-disc debut album The Instrumental and Vocal Sessions, Vol. I. The album was produced by Johnny Bacolas and mixed and mastered by Martin Feveyear. In 2019, the group released their sophomore full-length album, The Instrumental Sessions, Vol. II, also produced by Johnny Bacolas and mixed and mastered by Martin Feveyear. During the production of both records, Bacolas traveled to Greece on two occasions to record several musicians on songs for The Rumba Kings' albums, and to film and direct music videos for the band's new albums.
Filming for the music videos took place on location on Mykonos and Santorini, as well as throughout the mainland of Greece. In a November 1998 interview in The National Herald, Bacolas stated, "My ultimate dream is to build a small villa in Santorini with a recording studio." The group released their Latin single, "Mirame", written by Horacio Alcantar (lyrics) and George Stevens (music), and produced by Johnny Bacolas, in January 2020. Natalis is the featured vocalist. The Rumba Kings released their Greek single "Den tha se ksehaso", written by Sofi Alexandrou (lyrics), George Stevens (music), and John Paul Adams (lyrics/music), and produced by Johnny Bacolas, in July 2020. Also in July 2020, the band released their Latin single "Dance with me", written by Johnny Bacolas (lyrics) and George Stevens (music), and produced by Johnny Bacolas. Natalis is the featured vocalist on "Dance with me." During the pandemic of 2020, when live music was paused, the group released several quarantine videos. During this time, Bacolas and Stevens redesigned their show, growing it from 90 minutes to almost three hours in length and adding several new songs as well as instrumentalists. "We're really progressing with the show, revamping and adding a lot of new elements," says lead guitarist George Stevens. "This downtime that we've had really put everything in perspective as to what needed to be done and gave us the time to do that." Bassist and producer Johnny Bacolas adds, "That's been our attitude from the outset of this, to turn lemons into lemonade and to utilize this time, since everybody was going to be at home. George and I put the entire show under a microscope. We added songs, auditioned several additional musicians, brought in new percussionists and three new vocalists, and spent a lot of time training other instrumentalists. It was a really good time for us, since everybody has a home studio and we were able to collaborate electronically.
It ended up being a really positive thing."

## Musicians that perform and record with The Rumba Kings

- George Stevens – guitar
- Johnny Bacolas – producer, bass guitar, guitar, bouzouki
- Teddy Adams – guitar
- Vinnie Uanno – guitar
- Mohamed Hussein – guitar
- Andrey Zasypkin / Mike Fernandez / Josh Kossak / Christos Manolopoulos – drums
- Sofi Alexandrou, Natalis, Rustam Shtar – vocals
- Rustam Shtar – soprano saxophone
- Bahaa Sadak / Achilleas Dantilis – keyboards
- Tor Dietrichson / Rachel Nesvig – violin
- Arcobaleno Strings – string quartet
- Panagiotis Kotsianos – violin
- Geoffrey Castle – violin
- Brian Gunter – cello
- Enrique Haneo – guitar
- Eric Snyder – composer, guitar
- Seth May-Patterson – viola/arranger
- Loren Tempkin – piano
- Martin Ross – piano
- Charly Hernandez – Deseo Carmin –
https://en.wikipedia.org/wiki/The_Rumba_Kings
When designing interior spaces, in most cases it is not just one issue we have to tackle but a whole set of problems. The end result is a true reflection of the cooperation within the creative team, that is, the designer, the constructor, and the client. We place great emphasis on finding harmony between the form and function of the furniture and on selecting the appropriate materials. After a careful on-site survey, we prepare the custom unit designs, to be finalized after further consultations. Professional visualization plans illustrate not only the furniture but the entire setting (walls, floor, ceiling, etc.). This is how we achieve harmony of colors and visualize the entire scene to be created. Detailed price calculations and price variants are produced as a result of consultations on form and materials, which help the client make a decision. As each interior space is unique, any challenge posed in interior design represents a chance for exclusivity. We keep this in mind as we seek to satisfy our clients' needs.
https://www.lukasart.ro/complete-indoor-structures
A genetic basis for asthma- and atopy-related quantitative traits, such as allergen-specific immunoglobulin E (IgE) levels, has been suggested by the observed familial aggregation of these traits in temperate climates. Less information is available for tropical climates, where different allergens may predominate. Sensitivity to the mite Blomia tropicalis is related to asthma in tropical climates, but heritability of B. tropicalis sensitivity and the impact of age, sex, and other environmental covariates on heritability have not been widely explored. Total and specific IgE levels were measured by immunochemiluminescent assay in 481 members of 29 Barbadian families (comprising 340 parent-offspring trios or pairs) ascertained through two asthmatic siblings. Trait heritability was estimated using regression of offspring on mid-parent (ROMP) and pairwise correlation analysis of unadjusted IgE levels and of residual values after adjustment for covariates. Heritability of IgE levels to the major antigen of B. tropicalis (Blo t M) estimated by ROMP in 180 complete parent-offspring trios was 0.56. Heritability was consistently greater for male offspring than for female offspring. Similar sex-specific patterns were observed for specific IgE to Dermatophagoides pteronyssinus and total IgE levels and were relatively unaffected by adjustment for covariates. Pairwise correlational analyses of specific and total IgE levels showed similar results. Moderate heritability of Blo t M IgE levels was detected in these Barbadian families and was greater for sons than daughters. Adjustment for covariates had minimal impact. This suggests that future investigations of genetic determinants of IgE levels should include approaches that allow for potential sex differences in their expression.
https://jhu.pure.elsevier.com/en/publications/sex-differences-in-heritability-of-sensitization-to-blomia-tropic-3
Many startup founders have wasted considerable time and energy trying to work with large corporate partners, so there are some very good arguments in favor of avoiding these conversations early on. But there are two advantages that a well-aligned corporate partner might bring to the table: infrastructure and distribution. Early-stage corporate VC has grown, meaning there are more people who understand early-stage startup needs as well as large operating groups within corporates. More companies now have programs that are explicitly designed to make it easier to work with large firms, much like APIs have made it easier to quickly leverage an array of services. Corporate VC objectives generally include access to solutions, technology partnerships, and acquisitions. They are also increasingly prioritizing returns. It’s important to understand the motivation of the corporate partners with whom you are engaging. It’s very rare that they are motivated by a desire to hijack your intellectual property, as some startups fear, but conflicts may arise and should be discussed early. There are many risks in working with a much larger partner, but the biggest risk is wasting time and money. Make sure you understand how much risk you are exposed to, and that you aren’t risking the death of your company.
https://urban.us/urbantech-startup-playbook/corporate-partners/
Often you have to rely on intuition. —BILL GATES Whether you call it a hunch, a gut reaction, or just a feeling, intuition is real and can be harnessed to increase your ability to influence and transmit charisma. Intuition helps you read and understand people in an instant. Intuition is a combination of your feelings, your wisdom, and your experience. People who are able to distinguish between random thoughts and intuition are more successful in life and in business. CEOs of large corporations, for example, have access to all the research they need to make sound, educated decisions. Yet the successful ones will admit that ultimately they have to follow their heart and use personal intuition. When we ...
https://www.oreilly.com/library/view/the-laws-of/9780814415917/xhtml/part02chapter10.html
GENERATORprojects proudly presents
A Hidden Record
A project by Peter Amoore with Valerie Norris and Lauren Printy Currie; Alex Millar and Viki Mladenovski as part of the current archival residency programme.
Preview: Friday the 5th of May @ 7pm
Continues: 6th–7th May, 12–5pm

Invited to respond to GENERATORprojects' archive, which documents twenty years of artist-led activity, Peter, Alex and Viki present three projects of new work focusing on themes of language, abstraction and anonymity found in the documents of past exhibitions and events. In the duo show, I take into my arms more than I can bear to hold, by Lauren Printy Currie and Valerie Norris, the artists' practices sat in parallel; their respective works coexisted in the gallery alongside a collaborative installation of drawings and shared reading. Responding to the exhibition, Peter invited the artists to do a series of drawing exercises together. Valerie, Lauren and Peter made abstract paper collages as a means of seeing the differences within and between each other's approaches to making. The exercises reflect on Valerie and Lauren's continued practices, placing the artists' exhibition and respective sensibilities in dialogue with Peter's ongoing collaborative drawing practice. After thoroughly looking through the GENERATORprojects archive, Viki Mladenovski decided to focus on her perceived distance from the artworks, events, and exhibitions recorded in it. She contemplates the way those documents of past events and exhibitions are seen by somebody who was not present at the real event or exhibition. There is an abstraction in the archive which can never be fully removed; an anonymity of the documents and a detachment from the real events and exhibitions. Do words and images within the archive, which were made for the purpose of an event, become self-contained? How does the purpose of these documents change when they become part of the archive?
In her work, Viki focuses on the construction of written and visual language around the artworks by rearranging past posters, leaflets, and written documents into newly created posters. The chaos and distance she experienced while looking at the archive are conveyed through her collages, which act as a replica of her encounter with the archive. Alex's sculpture, performance and film work seeks to find new relationships between mythology and nature, the body and the machine. For this exhibition, he has responded to a document that was published to accompany the 2010 exhibition DROMOS, a show that brought together artists Derek Sutherland and Bedwyr Williams and architect James Alexander-Craig. This show took the theme of 'Dromology', the science and logic of speed, to ask questions of time, space and place in art, and in the context of our fast-paced cultural landscape. Alex hopes to reflect on these questions still further in relation to new 'ready-made' sculptural works. The varying projects focus on the gaps in understanding found in all forms of documentation. Through presenting new works created in response to this lack of understanding, the exhibition intends to open up a discussion about the artists' varying interpretations of an exhibition or event through the limited information provided in an artist-led archive.
http://generatorprojects.co.uk/project/a-hidden-record/
The Memorial Sloan Kettering David H. Koch Center for Cancer Care. Image © ATCHAIN for Perkins Eastman and Ennead architects

Planning for hospital renovations or new construction projects used to focus on concerns such as having enough space for the intended program, how workflow traffic maximizes the patient experience, and how the space can be designed to allow for future flexibility. But with changing weather patterns and traumatic weather events occurring at an alarming rate, it also is critical to plan for how a facility that provides essential patient care services will remain operational in the event of a natural disaster. Memorial Sloan Kettering Cancer Center (MSK) sought to address that reality when planning its 760,000-gross-square-foot David H. Koch Center for Cancer Care ambulatory outpatient facility in New York City. Building in the aftermath of Superstorm Sandy, MSK wanted to ensure that its new asset, planned for 50 or more years, would stand the test of time and continue to provide high-quality cancer care to the neighboring community while minimizing the risk of downtime.

Lines of defense

Planning for the David H. Koch Center for Cancer Care consisted of both site conditions that were fixed (site or project elements that could not be changed) and variables (site and design considerations that could help offset the fixed conditions). The major fixed condition was the project site location, a New York City plot of land spanning 73rd and 74th streets adjacent to the East River and running parallel with FDR Drive — all located within the 100-year flood zone. Since the location of the project could not be changed, the project from its inception focused on integrating resiliency measures for variables within MSK's control to ensure that the facility's continuity of service would be maintained at the highest level possible to serve the community. These measures consisted of a multiple-stage approach.
The first line of defense was protecting the site from water infiltration. The project's design utilized numerous types of dry floodproofing construction methods to create a continuous flood barrier around the building. This included a foundation system that was designed to withstand the hydrostatic uplift of water in a flood condition, exterior walls that were reinforced and waterproofed to withstand storm surges, and a series of flood barriers that were integrated into the project site to protect building street entrances and drive aisles from floodwaters. Careful consideration was given to sleeves or penetrations through the basement-level mat slab. The structural mat slab construction was a 6-foot-thick slab designed to withstand hydrostatic uplift of groundwater once the building was fully constructed. The mat slab did not contain any piping, grounding or service connections that went fully through the slab, as this could have been a failure point within the structural system where groundwater could have wicked up through the slab. There also was a series of perforated piping system networks that was installed to act as an intermediate slab drainage layer. This also served as a redundant measure that would drain any water that may penetrate the slab, and the water would then drain into a building sump pit and be pumped out into the combined storm and sewer system. The second line of defense was elevating all critical infrastructure components above the project's design flood elevation (DFE) to mitigate damage to key infrastructure elements required for the operation of the building should there be a breach in the first line of defense.
This included planning for the building’s main electrical service and utility transfers to be located within an interior electrical vault located on the second floor of the building, planning for the building’s fire pump to be located on the first floor of the building, and elevating utility point-of-entry handoffs, where physically possible, to the second and third floors of the building. The handoffs included the incoming water service mains installed to the second floor, where the utility meters were located, instead of only within the building’s foundation wall. Additionally, the building information technology point-of-entry rooms, where carrier circuits entered the building, were extended to the third floor, where the transition from company cable to customer cable occurred. In addition, each point-of-entry penetration through the building’s foundation wall bathtub construction was sealed using mechanical link-seals between conduit and piping and the foundation walls. Within conduits, inflatable airbag-type seals were utilized to provide a water stop between the cables and conduit interior in the system raceway. The third line of defense was the redundant and robust measures that were integrated into the design of critical infrastructure systems essential to continuous operation of the facility. When planning these systems, there was a multi-failure scenario design approach to determine the resiliency of a particular system. The planning for each of these systems was not limited to how to protect the system in the event of a flood. 
The resiliency criteria for the mechanical-electrical-plumbing (MEP), fire protection and information technology systems supporting the project were expanded to include how the system can remain operational upon loss of utilities; how to protect against a single equipment component failure as well as against internal piping failures when utility equipment is flooded on the exterior of the building; and how to maintain systems operation on a 24/7 basis while providing facilities staff the flexibility to perform preventive maintenance. Storm points of entry were designed with backflow prevention systems that can withstand the water pressure associated with the project's DFE. In addition, should the backflow prevention equipment fail, all interior stormwater piping up to the second floor of the building was designed using stainless steel piping with welded joints. This ensures that the interior stormwater piping system can withstand a system pressure rating during a flood event should the backflow prevention system fail. All pumps associated with stormwater were designed as N+1 to protect against an equipment failure. Additional stormwater sump pumps were integrated into the design with higher gallon-per-minute ratings to help fight against water collecting in the lower levels of the building should there be a rain event outside, a flood condition on the exterior of the building, or water seepage at flood barrier joints where flood barrier assemblies marry to the building structure. In addition, all system components were connected to the building's diesel emergency generator system, which was also designed as an N+1 system. Other critical mechanical and electrical redundancy measures included the electrical system being designed so that the electrical distribution serving areas of the building below the DFE (the first and second subcellars) could be fully disconnected from the remaining electrical distribution serving the remaining floors above the DFE.
The building’s emergency and standby systems were designed to provide power to code-required life safety loads within the building as well as key clinical functions to ensure that the facility can continue to operate through the end of a business day should a utility loss of power occur. These systems consisted of two 2,500-kilowatt diesel emergency generators with 96 hours of on-site fuel oil storage and a 1-megawatt natural gas-fired microturbine that helps reduce the building’s overall peak electrical demand load on the utility service during normal operating conditions. This system can operate in island mode to generate 1 megawatt of electricity and recover the waste heat from the microturbine’s exhaust flue to provide heating hot water and system reheat hot water if there is an electrical utility failure but the natural gas service remains operational. The electrical system also was designed within a buildingwide uninterruptible power supply system that provides power to major information technology equipment located within building intermediate distribution frame rooms, main distribution frame rooms and point-of-presence rooms, as well as data collection devices and cabinets associated within medical imaging modalities throughout the building to eliminate a loss of power to these systems as building infrastructure systems transition from operating on the utility grid to operating on the building emergency generators upon loss of power. The building’s mechanical systems design aimed to keep the building occupiable with patient comfort in mind. The air distribution systems use fan-array technology to provide heating, ventilation and cooling throughout the building. The air handling systems were designed with a series of small fans within each unit to protect against the possibility of a single motor failure preventing an air handling unit from operating. 
In addition, multiple air handling units were headered together to allow for overall air distribution system redundancy. The building heating system uses natural gas-fired hot water boilers to provide heating through the facility. In addition to a boiler plant containing N+1 equipment throughout (e.g., boilers and pumps), the plant was designed as a dual-fuel plant so that heating can be provided if a failure of natural gas occurs by utilizing the building's on-site fuel oil storage system.

Flood mitigation strategy

The flood mitigation strategy for the project consisted of a multi-angle approach to verify that the requirements of the institution and stakeholders (e.g., facilities and clinical staff), as well as expectations that the facility will have a high level of uptime to serve the community, were achieved. The first angle was to elevate all critical infrastructure above the DFE. This eliminated the need for special system operations and major readiness efforts that would have had to occur prior to each storm predicted to hit the area. This consisted of a major space-planning effort that needed to occur early on in the project. There was a fundamental shift in planning because infrastructure systems that typically reside within below-grade basement levels of a building were now competing for space on the first and upper floors of the building. A space-planning balance had to occur during the schematic design phase of the project to correctly balance the space required for elevated infrastructure without compromising the functional flow of visitors and staff within the building so as to maintain an optimal patient and visitor experience. The second angle was to design systems in such a way that their normal operating functions were the same as those during a storm event to minimize the amount of unique storm preparedness measures required to get the facility ready for a flood event.
These design criteria helped reduce the storm preparation checklist that would be required within the facility. This also simplified MEP system operations from a building operator perspective, since there are not two modes of operation; that is, a normal operation mode and a storm operation mode. The operation of critical infrastructure systems is the same regardless of the exterior environmental condition. The third angle was making sure all systems requiring setup prior to a flooding event (e.g., flood barriers) were able to be put in place in a systematic way, thus eliminating setup errors as well as enabling deployment within a 24- to 48-hour window in situations with a short time frame of preparation before a major storm impacts the area. All major flood barriers within the facility were designed to operate using hydraulics to raise and lower the flood barrier system to simplify how these flood barriers are deployed and to minimize any field-erected deployable systems to select doorframes at building entrances.

Scenario evaluation

Given the wide variety of resilience design and system design configurations, it's important that resiliency criteria for planned infrastructure systems be evaluated against the specific project type, the owner's requirements and the failure scenarios the facility is expected to withstand. These scenarios and associated system uptime expectations are unique by project and by location, and resiliency measures need to consider these factors during the design process to verify that the system design meets project resiliency goals and has been coordinated with the project's various stakeholders, including the final end users who are going to utilize, operate and maintain the facility. Steven Friedman, PE, HFDP, LEED AP, is director of facilities engineering, design and construction in the facilities management division of Memorial Sloan Kettering Cancer Center, New York City; and John P. Koch, PE, is associate partner at JB&B, New York City.
They can be reached at [email protected] and [email protected].
https://www.hfmmagazine.com/articles/3737-resilient-design-strategies-to-withstand-extreme-weather-events
Weather, climate, seasons, crops

Hungary is in the temperate zone and has a continental climate. This means that the weather is not very changeable but there are big differences between the four seasons.

December, January and February are the winter months. In winter the weather is usually cold, the temperature is below zero and it often snows. The mornings are foggy and frosty. You should wear warm clothes if you don't want to shiver with cold and catch a cold. If it's cold and snowing, children can make a snowman or have snowball fights. A lot of people go skiing in winter. In this kind of weather I like to stay at home, read an interesting book or watch a film on DVD. My mother always bakes delicious cakes or doughnuts.

March, April and May are the spring months. In spring the weather is usually changeable. It can be quite warm and sunny or cold, rainy and windy. The snow melts and flowers begin to bloom. Snowdrops, violets, tulips and daisies are the nicest spring flowers. Many people take long walks in nature in the fresh air after the cold winter months.

June, July and August are the summer months. It is usually very hot and dry in Hungary. It is especially dangerous to lie in the sun around noon because the ultraviolet radiation is very high at that time. Summer is the favorite season for most people. Students like it best because they are on their summer holiday, so they don't have to go to school. Cherries, raspberries, gooseberries, currants and sour cherries are the most typical fruits of summer. My favorite season is summer because the sun shines, there is no school, and my family and I usually go on holiday. In summer I sometimes go horse riding. There are also a lot of concerts, more than in winter, and I meet my friends very often. I can go out and sunbathe. People are happy because the weather is good.

September, October and November are the autumn months. In autumn the weather is rainy and foggy.
Nature has beautiful colours in this season as the leaves of trees and bushes start to change colour and fall to the ground. Most students don't like autumn because it means that they have to go to school again.

England has an oceanic climate. This means that the weather is very wet and there are no big differences between the seasons of the year. Winters are mild and it rarely snows, while summers are cool and damp. The weather is very changeable. As it rains very often, the English regularly carry their mackintoshes and umbrellas with them, even in summer.
https://erettsegi.com/tetelek/angol/weather-climate-seasons-crops/
This article is part of a series of articles aimed at getting you, the complete beginner at using the command line, from zero to hero. At the end of this article you'll be able to:

- define what the command line is
- explain the advantages of using the command line
- describe various uses of the command line for both basic and advanced users
- get started with basic commands

This article is based on Meta's Back-end Developer Certificate Course 3: Version Control. You can find a link to the course here and a link to my article on Course 1: Introduction to Back-end Development here.

We interact with computers on a daily basis to perform various tasks, e.g. editing word documents, listening to music, watching videos, creating files, etc. At the heart of all these interactions is a process of the computer accepting input and giving some output. There are various ways we interact with the computer, with the most popular being through graphical user interfaces (GUIs). These are very popular since they are easy to use with little or no training, but they have their own limitations, including being slow and limiting full interaction with the computer. Another way to interact with the computer is the command line. Using the command line is faster, less prone to errors and offers a lot of flexibility.
With the command line, we can perform some basic tasks like:

- creating directories
- creating files
- combining directories
- copying and moving files
- performing advanced searches

Some of the more advanced uses of the command line include, but are not limited to:

- tracking software changes
- accessing remote servers
- unzipping archives
- accessing software manuals
- installing and uninstalling software
- checking, mounting and unmounting disks
- automating tasks with scripts
- controlling access to files and directories
- containerization (running and controlling self-contained virtual software)

Basic Commands

Let's introduce some of the most common and basic commands:

- `cd` – This command is used to change into a directory. For example, if you wanted to change into the desktop directory you would type `cd ~/Desktop`. Similarly, to move back up out of that directory you type `cd ..`
- `touch` – The `touch` command is used to create files. Say you want to create a file named example.html; the correct command syntax for that is `touch example.html`
- `mkdir` – Another basic command is `mkdir`, which is used to create a directory. If, for instance, you wanted to create a directory named project, you could type `mkdir project` into the command line.

So how do we put this all together to effectively communicate with the computer? Let's see this in a sample workflow. Say you want to create a folder for your new text project, create an empty text document, add some content to it and open it in a code editor so as to work on it:

- `cd ~/Desktop` – Change into the Desktop directory
- `mkdir myproject` – Create a folder called myproject on the desktop
- `cd myproject` – Change into the myproject folder
- `touch example.txt` – Create a file called example.txt
- `cat > example.txt` – This will let us add some text to the file. After typing whichever text you want to go in the file, press CTRL+D to terminate the command
- `code example.txt` – Open example.txt in VS Code to edit

And there you go.
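The same workflow can also be collected into a single script. The sketch below uses a temporary directory as a stand-in for `~/Desktop` and prints the file instead of opening an editor, so it runs on any machine; otherwise the commands match the steps above.

```shell
# Sketch of the sample workflow above. A temporary directory stands in
# for ~/Desktop, and a non-interactive echo replaces `cat >` (which
# would wait for keyboard input in a script).

workdir=$(mktemp -d)                        # stand-in for ~/Desktop
cd "$workdir"
mkdir myproject                             # create the project folder
cd myproject                                # change into it
touch example.txt                           # create an empty file
echo "Hello, command line" > example.txt    # add some text to the file
cat example.txt                             # prints: Hello, command line
```

Saving steps like these in a script is itself one of the advanced uses listed earlier: automating tasks so you never have to retype the same sequence of commands.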
We have defined what the command line is and its various uses, and we've gone through a sample workflow using the command line to create a folder, create a file, add content to the file and finally open a code editor to continue working. In the next article in this series we will delve further into understanding the command line. We will look at how to:

- create, rename and delete files and folders on your hard drive using Unix commands
- use pipes and redirection.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/danielstai/getting-started-with-the-command-line-part-1-22bi
A child’s experience and environment – both in the womb and in early life – lay the foundation for life. Mothers and fathers are the most important influences on a child’s well-being and development. In recent years, advances in neuroscience have increased our understanding of the links between early brain development and later life outcomes, and have shown the importance of providing very young children with consistent, positive and loving care. From pregnancy onwards, the relationship between a baby and his or her primary caregiver has a lasting impact on that child’s future, including on his or her health as an adult.

What is included in the course:

- Session 1: baby's physical, cognitive, social and emotional development; changes for parents; keeping healthy through nutrition, exercise and mental wellbeing; your blossoming body; hormonal and physiological changes; pelvic floor; perineal care; common pregnancy challenges; baby's wellbeing and monitoring
- Session 2: giving birth and meeting your baby; psychological, social and physical aspects of labour and birth; physiology of labour; preparing for labour; signs of labour; stages of labour; empowering strategies; medical interventions and emergencies; what happens straight after birth
- Session 3: caring for your newborn baby; adaptation to parenthood; postnatal care; physical and emotional health after giving birth; looking after your baby; bathing, cord care and safe sleeping
- Session 4: feeding your baby; breastfeeding, positioning and attachment; skin-to-skin contact; baby's stools; common problems and resolutions; weight loss; bottle feeding and expressing.
https://www.urbanfamilies.uk/antenatal-education
RELATED APPLICATIONS

This application is related to U.S. patent application Ser. No. 10/______ filed ______ entitled, Plastic Expandable Utility Shed.

FIELD OF THE INVENTION

This invention relates generally to a large enclosure constructed of plastic structural panels. More specifically, the present invention relates to a modular construction system utilizing shelves having integrated connectors to cooperate with integrated connectors in the structural panels for stability and support.

DESCRIPTION OF THE PRIOR ART

Utility sheds are a necessity for lawn and garden care, as well as general all-around home storage space. Typically, items such as garden tractors, snow blowers, tillers, ATVs, motorcycles and the like consume a great deal of the garage floor space available, forcing the homeowner to park his automobile outside. Large items such as those mentioned above require accessories and supplies that must also be stored, as well as other small tools. To avoid using more floor space for these supplies, a system of shelving is usually constructed as free-standing units or attached to the walls of the sheds. Free-standing units are unstable, particularly when carrying a top-heavy load. And in the modular plastic sheds now available, it is difficult to attach shelves to the plastic panels without damaging the integrity of the panels. Modular shelving systems are well known, as illustrated by U.S. Pat. No. 6,178,896 to Houk, Jr., U.S. Pat. No. 5,709,158 to Wareheim and U.S. Pat. No. 5,588,541 to Goetz. These are stand-alone modular units with multiple horizontal shelves supported by sectional legs or, in the case of the Goetz patent, a back panel.

SUMMARY OF THE INVENTION

Accordingly, it is a primary objective of the instant invention to provide a hanging shelving system for cooperating with structural elements in a plastic utility shed for stability and support.

It is another objective of the instant invention to provide a modular shelving system with flexibility in assembly to support articles of different sizes and weights.

It is a still further objective of the instant invention to provide manual assembly of the shelving system.

Other objects and advantages of this invention will become apparent from the following description taken in conjunction with any accompanying drawings wherein are set forth, by way of illustration and example, certain embodiments of this invention. Any drawings contained herein constitute a part of this specification and include exemplary embodiments of the present invention and illustrate various objects and features thereof.

BRIEF DESCRIPTION OF THE FIGURE

The FIGURE is a perspective view of a utility shed with a hanging shelving system.

DETAILED DESCRIPTION OF THE INVENTION

As shown in the FIGURE, the utility shed is a structure assembled from molded panels, with the roof panels removed. The shed may have floor panels as well. The panels, including the roof panels, are reinforced with metal strips for safety, structural rigidity and strength. The excess structural strength afforded by the strips may be utilized to support interior shelving provided as an accessory or as an aftermarket item.

As shown, a sidewall panel is joined to an end panel. Each of the end panels has a peak to support a pitched roof. The reinforcing strip, in the form of a ridge pole, joins each end panel at the center of the peak. Two roof reinforcing strips are disposed on opposite sides of the ridge pole, at the same level as each other. The ends of the ridge pole and the roof strips abut the interior of the peak, allowing a continuous straight upper edge for covering the ends of the roof panels. The roof panels (not shown) are in contact with the ridge pole and reinforcing strips and have spaced clips that secure the roof panels to the reinforcing strips. Because of the pitch of the roof, there is a vertical space between the roof panels and the reinforcing strips on each side. By hanging shelves from the reinforcing strips, this space can be used for storage.

The shelving, shown in the drawing, is assembled from several long wire shelves suspended from the reinforcing strips by hangers located at each corner of each shelf. Each shelf has a storage surface formed of metal or plastic ribs extending across the width of the shelf. Each end of the ribs terminates with a down-turned portion. The ribs are supported by longitudinal bars extending the length of the shelf. The down-turned ends are fixed to the bars. The shelves are of a length to span the distance between the reinforcing strips.

Each hanger is made of metal or plastic of requisite strength. The upper end of the hanger is a C-shaped double hook with the upper portion shaped to mate with the exterior surface of the reinforcing strip. The lower portion of the C-shaped hook is attached to one end of a spacer which extends the width of the shelf. The other end of the spacer is attached to another hanger at the adjacent corner of the shelf.

The lower end of the hanger is a loop which extends around two adjacent ribs and the bar near the down-turned portions of the shelf. The loop is closed about the spacer and the lower portion of the C-shaped hook.

The shelves may be included with the molded utility shed or offered as a separate accessory or aftermarket item. The hangers are normally attached to the corners of the shelves at the factory. Assembly merely requires placing the upper C-shaped hooks over the roof reinforcing strips.

It is to be understood that while a certain form of the invention is illustrated, it is not to be limited to the specific form or arrangement herein described and shown. It will be apparent to those skilled in the art that various changes may be made without departing from the scope of the invention and the invention is not to be considered limited to what is shown and described in the specification and any drawings/FIGURES included herein.
The male partners of adolescent girls and young women in eSwatini (Swaziland) and South Africa report substantial HIV risk behaviours, but the data also challenge the stereotypical image of a ‘sugar daddy’. The men were only a few years older than their partners and many described challenging life circumstances such as unemployment, homelessness and violence. A parallel, qualitative study from Uganda describes a somewhat different situation, with men often acting as economic providers in relationships of a relatively long duration. These three studies have recently been published in PLOS ONE and presented at this summer’s International AIDS Conference (AIDS 2018) in Amsterdam. In many African countries, rates of new HIV infections among females aged 15 to 24 are much higher than among their male peers. Unequal power dynamics in sexual relationships with male partners, especially older partners, contribute to this vulnerability. However, older male partners have often been seen as a ‘hidden’ or ‘hard to reach’ population. Until now, most researchers have asked women about their partners, rather than conducting research directly with men. The US President’s Emergency Plan for AIDS Relief (PEPFAR) therefore created the DREAMS partnership. It aims to reduce HIV infections among adolescent girls and young women in ten sub-Saharan African countries with interventions that go beyond the health sector, addressing structural factors such as poverty, education and gender inequality. The partnership includes research to better understand the characteristics of older male partners and identify settings in which they can be reached. DREAMS will try to improve these men’s engagement with HIV services. A higher uptake of HIV testing in men, and antiretroviral therapy for the HIV positive, may lower their risk of passing HIV on to young women. 
eSwatini

eSwatini is a landlocked country in Southern Africa, formerly known as Swaziland, in which 15% of adolescent girls and young women are living with HIV. DREAMS’ study in eSwatini took a quantitative approach. In 19 rural, peri-urban and urban districts across the country, the researchers worked with community informants to identify 182 venues where men meet and socialise with adolescent girls and young women. These were predominantly drinking spots, kiosks, stores, bars and clubs, although other sites such as parks and churches were also included. Only men aged 20 to 34 were surveyed, on the basis of other work that suggests they are the age group most likely to have younger female partners and to be involved in HIV transmission. The researchers interviewed 843 men in order to identify 568 men who reported having at least one female partner aged 15-24 in the past year. Over half of this group reported having two or more sexual partners in the past year. Relatively few men told interviewers that they were living with HIV (6%), but men who had at least three partners aged 15 to 24 in the past year were more likely to be HIV positive (adjusted odds ratio 3.2). The vast majority (88%) of the men described their marital status as single and their average age was 25.7 years. The majority of men (71%) had female partners no more than four years younger than them. For 14% of men, their partners were on average 5 to 9 years younger, but only 1% reported having a partner ten or more years younger than them. Even for men aged 30 to 34, only 3% reported an age gap of ten years or more. Only 36% had consistently used a condom with their current partner and 57% had talked about HIV status with this partner. Men whose current partner was under the age of 20 were less likely to talk about HIV status than men who had older partners (adjusted odds ratio 0.6). Only half of the men were currently employed.
They were engaged in a wide range of occupations, including industry, transport and government. A substantial minority reported precarious living conditions – 15% had slept outside due to homelessness in the past year, 13% had been jailed in the past year and 8% had experienced a lack of food in the last month. “Challenging life circumstances suggest structural factors may underlie some risk behaviours,” comment the authors.

South Africa

A similar survey was done in two informal settlements in Durban, South Africa. Here, recruitment took place both in venues where men meet younger women (such as drinking establishments and taxi ranks) and in HIV services. The demographic profile of the 962 respondents was comparable to that in Swaziland, in terms of age, marital status and employment. Seventy-one per cent reported having multiple partners in the past year (of any age); 24% had five or more partners; 75% had at least one female partner aged 15 to 24 in the past year; and 54% had both younger and older partners. Just under half the relationships could be described as ‘transactional’ – the men had given money, goods or services mainly to start or stay in the relationship. Such gifts could include drinks, meals and make-up, but for 12% of the men it was something much more substantial, such as paying their partner’s rent or school fees. Men who were employed (p < 0.01), small business owners (p < 0.01) or educated to technical college or university level (p < 0.05) were more likely to have multiple partners, in accordance with the idea that more economically powerful men have more partners. Men's vulnerabilities, as well as their power, also emerged as a theme, however. There was a link between traumatic events in childhood or adulthood and sexual risk behaviour.

- In childhood, 77% had been beaten and 21% had seen or heard their mother being beaten. These experiences were associated with less consistent condom use (p < 0.05) and having multiple partners (p < 0.05).
- 42% were not raised by a biological parent and 37% had experienced a parent’s death during childhood. These experiences were associated with less consistent condom use (p < 0.01) and more age-disparate relationships (p < 0.05).
- As adults, 59% had witnessed an armed attack, 39% had been robbed at gunpoint or knifepoint and 41% had felt close to death. These experiences were associated with multiple sexual partnerships (p = 0.001).

Uganda

The third DREAMS study took a qualitative approach, using in-depth interviews with 94 men in Uganda. The study was done in a mix of rural and urban sites in three geographically and culturally diverse districts. As well as recruiting at community venues where men and younger women met, adolescent girls and young women were asked to refer their male partners to the study. The latter approach was expected to identify men in more stable partnerships. The respondents had a different profile to the other two studies. While the mean age was 28, interviewees were up to 45 years of age, 80% of respondents were married or cohabiting, and 94% were employed. Multiple sexual partnerships were seen as a very common practice, as one man described: “These days most men have more than one woman, most men say that you can’t keep eating one type of food all the time. In fact you can’t find a man with only one woman, it isn’t there.” Men often sought to develop ‘side’ relationships. They were seen as additional long-term partners whom the man would provide for economically. Many men referred to them as ‘wives’. “She is not at my home she is in a rental but it’s me who pays her rent, it’s me who is taking care of her in everything even though she is not at my place she still is like my wife now.” When men did not have an additional partner, it was often because of the cost of doing so. They might however have short-term casual partners who they would meet at drinking establishments.
These relationships were nearly always transactional in nature, usually starting with the man buying the girl or woman drinks, a meal or low-cost items like mobile phone credit. Some interviewees explicitly described a preference for younger partners. As this 31-year-old described, they could be more easily controlled: “If you get a woman who is your age mate, somehow these women tend not to be submissive… yet for a young girl, because of the age difference, she will find it very easy to listen to you, she will treat you with respect because she knows you are older than her and perhaps more experienced.” Men described a pattern of establishing relationships with younger females. Most men had married in their early twenties, usually to a woman three to five years younger than themselves, and generally saw marriage as an important life event. Within a few years after marrying, however, many men described taking on one or more side partners. This might occur during a period of separation from the first wife due to a conflict or residential relocation. As men got older, they might also have a series of short-term casual partnerships, almost always with adolescent girls and young women. A few of these might develop into longer term ‘side’ relationships. The researchers comment that these complex and fluid patterns – including temporary separations of long-term partnerships, ‘side’ partnerships being initiated and casual partnerships sometimes becoming more formalised – will complicate public health strategies to reach these men.
https://www.aidsmap.com/news/oct-2018/adolescent-girls-male-partners-are-not-all-much-older-much-wealthier-sugar-daddies
I'm looking for recent dive/fishing reports of the Radford. If you've been there in the last year or two, I'd like to hear what you found. In particular, where is the stern now? I can find no reports since 2012.

New Jersey Scuba Diving

Topside Pix

USCG Gallatin. Built 1967, 378', 3,250 tons, 29 knots. The Coasties have a great color scheme, don't you think? In 2014, the 45-year-old Gallatin was retired from the USCG and donated to Nigeria, becoming the NNS Okpabana.

Chichen Itza

The famous Mayan ruins - a small ceremonial platform.

A small pyramid. While the ancient Egyptians built each pyramid in a single shot, the Mayans' construction technique was generational. Starting with a small platform like the one above, every fifty years or so they would build a new structure over it, completely encasing the old one, usually in a completely different style. Thus, each pyramid has a smaller one inside, which has a smaller one inside, which has a smaller one inside ... The inner pyramids are often preserved in near-perfect condition - an archaeologist's dream. All of the buildings were brightly painted, and traces of the paint remain in spots. Many seem to have been blood red.

The "Observatory". The Indians made many accurate celestial observations, although they had no instruments other than their own eyes.

A building decorated with the hook-nosed mask of the rain god Chaac.

The big pyramid, El Castillo - "The Castle", from the restored good side. Mayan pyramids were not tombs, but ceremonial and religious centers. The local limestone is quite soft and chalky, which would make it easy to shape and carve, but from a structural point of view this material is nearly worthless. I am amazed that the Indians were able to build anything at all out of it, let alone what you see here. The Mayans never learned to build arched roofs, and because of this most of their buildings, even the largest ones, contain only one or two small rooms.
Another stone platform, with snake-head decorations.

Our tour guide ( in the blue hat ) knew the locations of all the ancient Mayan souvenir shops, and made sure we didn't miss any.

The wall of skulls. Nice. Death figured largely in the natives' religion and society.

The Temple of Chaac Mool. Look at that funky sky. ( Digital camera is dying from the heat. )

The ball court. The game was something like a cross between soccer and basketball, except that the winning team was executed after the game. On a good day, so was the losing team. I suspect that no one ever got very good at this game.

A cenote, where you could go swimming ( and buy souvenirs ). Actually, it was pretty neat. You couldn't swim in the other cenotes.

Looking out from the inside.

For the Culturally-Minded: We took a direct flight from Cozumel to Chichen Itza, which makes it a half-day trip, and the plane broke down for several hours, just as the concierge at the hotel warned us it would. If you are lucky you might catch a glimpse of things from the air - don't bet on it. The airport tax is a hefty $44 per head on top of the cost of the flight. They don't tell you about that! There are also all-day bus tours from Playa del Carmen. The ruins are pretty interesting, but don't go on the spring equinox ( the first day of spring ) +/- 1 day - big crowds and everything is roped off. Otherwise, you can normally climb up a lot of the monuments at Chichen Itza. The ruins at Tulum are closer, and more picturesque, atop a white cliff overlooking the sea, but Tulum is disappointingly small compared to Chichen Itza. The ruins on the island itself are extremely ruined, from what I have seen of them. If I were to do it again, I would take the ferry to the mainland, rent a car or jeep in Playa del Carmen, and drive to Chichen Itza and spend two days there. There are nice hotels within walking distance, and with a little guide book and the extra time you could do a lot better than with a hurried tour guide.
( You could tag along with a guided group for a while to get the lay of the land, and then strike out on your own. ) The drive to and from Chichen Itza is long, but it runs up the coast to Cancun along the way, so you could stop there too. The main roads are narrow, but in good repair. Watch out for the local Policias. Playa del Carmen is a nice place to visit in itself - very pretty and much less touristy than Cozumel or Cancun.

Last day diving.

The Regal Empress - what a ship is supposed to look like. Built 1953, 612', 21,909 tons, 17 knots. She was once a trans-Atlantic liner, and was one of the few remaining. Sadly, scrapped in 2009.

Old-time dive gear at the museum of the island, downtown. The Museum of the Island has a big map of the island with little lights on it, one for each souvenir shop. Ha - just kidding! That many lights wouldn't fit. Seriously, the museum is not very big, but it has lots of interesting displays, and the locals are justifiably proud of it, as they point you from one room to the next. It's a good way to spend a few hours on your last day.

Replica of a native hut. I thought it was interesting the way the palm fronds were woven into the roof. Alas, once an engineer, always an engineer - it's a lifelong curse.

Disclaimer: I make no claim as to the accuracy, validity, or appropriateness of any information found in this website. I will not be responsible for the consequences of any action that is based upon information found here. Scuba diving is an adventure sport, and as always, you alone are responsible for your own safety and well being.
http://njscuba.net/cozumel/topside.php
Woodwork club is a small club of 3 - 4 children (small numbers due to health and safety) who will be learning basic carpentry skills in a safe environment. The children will be taught how to use and store tools in a safe way, to understand the dangers and also the importance of being sensible when using them. Some of the tools and equipment they will learn to use are: saws, hammers, rulers, screwdrivers, drills, clamps, paint, workbenches, screws and nails, sandpaper and wood glue. The children have made things for the school grounds. So far they have made: a bird feeder and a nesting box, which have been placed for the birds to enjoy; a selection of different-sized planters to grow herbs and plants in, as well as some planters in the shape of a train, which you can see in the quiet area. They are also given the opportunity to make something to take home, such as nesting boxes and smaller planters in the shape of ducks, dogs and flowerpot men. Currently they are making Christmas trees to take home, including one to take to All Saints Parish Church for their Christmas Tree Festival.
https://www.loughborough-primary.co.uk/woodwork-club/
Early news coverage in the US about the COVID-19 pandemic focused on information released from local, state and federal government officials. With an emphasis on US government at these levels, this study examined whether the public’s credibility perceptions and trust in government, along with message exposure, influenced their adherence to information from the government about (a) stay-at-home orders, (b) social distancing and (c) COVID-19 testing. Source credibility theory and situational crisis communication theory provided the theoretical framework for this study. Through the survey data analysis, we investigated communication preferences in the wake of the pandemic and whether credibility perceptions differed according to the level of government. Survey findings revealed that message exposure influenced respondents’ perceived credibility of and trust in government officials during and after the stay-at-home order. Finally, practical implications regarding recommended communication strategies based on the findings were discussed.

Document Type: Article

Publication Date: 4-25-2021

Notes/Citation Information: Published in Journal of Creative Communications. © 2021 MICA-The School of Ideas. This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Digital Object Identifier (DOI): https://doi.org/10.1177/09732586211003856

Repository Citation: Bickham, Shaniece B. and Francis, Diane B., "The Public’s Perceptions of Government Officials’ Communication in the Wake of the COVID-19 Pandemic" (2021). Communication Faculty Publications. 22.
https://uknowledge.uky.edu/comm_facpub/22/
CROSS-REFERENCE TO RELATED APPLICATIONS

1. "A FUNCTIONALLY COMPLETE FAMILY OF SELF-TIMED LOGIC CIRCUITS," by Jeffry Yetter, filed Apr. 12, 1991, having application Ser. No. 07/684,720, now U.S. Pat. No. 5,208,490; and

2. "UNIVERSAL PIPELINE LATCH FOR MOUSETRAP LOGIC CIRCUITS," by Jeffry Yetter, filed Apr. 12, 1991, having application Ser. No. 07/684,637, abandoned.

BACKGROUND OF THE INVENTION

I. Field of the Invention

The present invention relates generally to logic in computers and, more particularly, to a system and method for clocking pipelined stages of self-timed dynamic logic gates, also known as "mousetrap" logic gates.

II. Related Art

Pipelining in computer logic generally refers to the concept of configuring various stages of logic in sequence, whereby data is initially introduced into the sequence of logic stages and then more data is introduced before the operation on the first data has completed its passage through the sequence. Pipelining enhances the performance of high "latency" logic networks. High latency logic networks are logic circuits which perform long sequences of logic operations requiring a relatively large amount of time. Pipelining improves performance because it permits the overlapping of operation execution. At present, pipelining is considered a requirement for high latency logic networks in the high performance arena. For instance, instruction execution logic in the central processing unit (CPU) of a computer invariably employs pipelining. As a further example of where pipelining is considered a necessity, consider multiplication. To perform multiplication, a "carry save adder" pipeline of logic stages is usually employed. Specifically, each pipeline stage is essentially several rows of conventional full adder logic stages. Moreover, each full adder compresses three partial products into two partial products.
Thus, each full adder adds in another partial product as data flows through the chain of full adder logic stages in each pipeline stage. In order to perform a single multiplication operation, more than one clock cycle is usually required, but as a result of pipelining, a new multiplication operation may be commenced in substantially less than, perhaps half of, the total number of clock cycles.

Traditionally, "static" logic gates have been utilized in computers to perform logic functions, for example, mathematical operations. Static logic gates are those which can continuously perform logic operations so long as electrical power is available. In other words, static logic gates need no electrical precharge, or refresh, in order to properly perform logic operations. Static logic gates can be easily connected together in sequence to collectively perform logic functions in an efficient manner. However, static logic gates are slow individually. In addition, when static logic gates are pipelined, the resulting logic operation is performed in an even slower manner.

"Dynamic" logic gates are also known in the art. Dynamic logic gates are used in the conventional design of logic circuits which require high performance and modest size. Dynamic logic gates are much faster than static logic gates. However, dynamic logic gates require a periodic electrical precharge, or refresh, such as with a dynamic random access memory (DRAM), in order to maintain and properly perform their intended logic function. Once an electrical precharge supplied to a dynamic logic gate has been discharged by the dynamic logic gate, the dynamic logic gate can no longer perform another logic function until subsequently precharged. However, the use of conventional dynamic logic circuits in combinational logic or pipelining is problematic. First, dynamic logic circuits require a precharge cycle in order to render them operative.
Effectively, a precharge cycle periodically interrupts the useful work cycle for the necessary purpose of maintenance. Precharge cycles significantly and undesirably increase the execution time of a sequence of logic stages. Second, dynamic logic circuits must maintain a minimum clock frequency in order to ensure proper functioning. Proper operation of dynamic logic circuits requires that an electrical charge be deposited and maintained in the circuits. In reality, the charge deposited in the logic circuits eventually will decay to an unknown logic level and thereby corrupt the state of the pipeline. The decay results from uncontrollable design and manufacturing characteristics. In most practical situations, the preceding problem may be overcome via a periodic refresh cycle, similar to the refresh cycle in conventional dynamic random access memory (DRAM). Hence, a minimum clock rate, analogous to refresh cycles, must be maintained. However, the minimum clock rate poses an additional problem. Many times, logic circuits are required to operate arbitrarily slowly, "at DC." For instance, logic circuits may be required to operate slowly during IC testing. Conventionally, dynamic logic circuits can be modified to exhibit slow operation by including "trickle charge" devices or "cross-coupled negative feedback" devices. However, these devices consume valuable computer real estate and further decrease the speed of the logic circuits. Thus, a need exists in the industry for teachings that will permit the high performance pipelining of dynamic logic circuitry which adequately preserves data without the need for a minimum (refresh) clock rate.

SUMMARY OF THE INVENTION

The present invention optimizes the flow of self-timed logic evaluations through a plurality of pipeline stages comprised of blocks of self-timed dynamic logic gates. The present invention has particular application to, for example, "mousetrap" logic gates.
In accordance with a first preferred embodiment of the present invention, a first clock signal and a second clock signal both have an evaluation state and a precharge state of shorter time duration, which states are staggered in time. In other words, the second clock precharge state exists during the first clock evaluation state, and vice versa. A first stage of self-timed logic gates receives data and also the first clock signal. The first clock precharge state precharges the self-timed logic gates of the first stage. The first clock evaluation state permits self-timed logic evaluation of the data travelling through the first stage after precharge. A latch receives the data from the first stage and receives the second clock signal. A second stage of self-timed logic gates receives the data from the latch and also receives the second clock signal. The second clock precharge state precharges the self-timed logic gates of the second stage. The second clock evaluation state permits self-timed evaluation of the data travelling through the second stage after precharge.

A second preferred embodiment of the present invention is directed to a pipeline stage having self-timed dynamic logic gates for optimizing the flow of logic evaluations through a series of pipeline stages. In accordance with the second embodiment, a clock signal has a first clock evaluation state and a first clock precharge state. A delayed clock signal has a second clock evaluation state which overlaps with the first clock evaluation state and a second clock precharge state. A stage of self-timed dynamic logic gates receives data. The stage has a first group of cascaded gates connected to the clock signal and a successive second group of cascaded gates connected to the delayed clock signal. The clock signal and the delayed clock signal are configured to permit parallel precharge of the first and second groups of gates.
Moreover, the clock signal is configured to permit self-timed logic evaluation in the first group directly after precharge, and the delayed clock signal is configured to permit self-timed logic evaluation in the second group at a predetermined period after precharge.

A third preferred embodiment of the present invention is directed to eliminating the latch of the first preferred embodiment. In accordance with a third embodiment of the present invention, a first clock signal and a second clock signal both have an evaluation state and a precharge state of shorter time duration, which states are staggered in time. In other words, the second clock precharge state exists during the first clock evaluation state, and vice versa. A first stage of self-timed logic gates receives data and also the first clock signal. The first clock precharge state precharges the self-timed logic gates of the first stage. The first clock evaluation state permits self-timed logic evaluation of the data travelling through the first stage after precharge. A second stage of self-timed logic gates receives data and also receives the second clock signal. The second clock precharge state precharges the self-timed logic gates of the second stage. The second clock evaluation state permits self-timed evaluation of the data travelling through the second stage after precharge.

The present invention overcomes the deficiencies of the prior art, as noted above, and further provides for the following additional features and advantages. Generally, the present invention teaches a system and method for optimizing the pipelining of blocks of self-timed dynamic logic gates, including but not limited to, mousetrap logic gates. Pipeline stages having varying numbers of cascaded gates, and therefore, requiring different time periods for performing logic evaluations, can be linked together as a result of the clocking system and method in accordance with the present invention.
The present invention permits the pipelining of mousetrap logic gates with broad insensitivity to clock asymmetry, or clock skew, resulting from the use of both clock edges. Specifically, mousetrap logic stages operating in a "disadvantaged" clock phase can steal large time periods from mousetrap logic stages operating in an "advantaged" clock phase. The preceding terms and associated concepts are discussed in specific detail in the Detailed Description section of this document. The present invention can be used to pipeline vector logic having a monotonic progression, thereby eliminating any static hazard problems. Further advantages of the present invention will become apparent to one skilled in the art upon examination of the following drawings and the detailed description. It is intended that any additional advantages be incorporated herein.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention, as defined in the claims, can be better understood with reference to the text and to the following drawings. FIG. 1 illustrates a high level block diagram of a family of dynamic logic gates, called "mousetrap" logic gates, which can be pipelined in accordance with the present invention; FIG. 2 illustrates a low level block diagram of a two-input inclusive OR mousetrap logic gate in accordance with FIG. 1; FIG. 3 illustrates a low level block diagram of a two-input add predecoder mousetrap logic gate in accordance with FIG. 1; FIG. 4 illustrates a low level block diagram of a carry propagate mousetrap logic gate in accordance with FIG. 1 and for use in series with the add predecoder mousetrap logic gate of FIG. 3; FIG. 5A illustrates a high level block diagram of a mousetrap logic gate having shared ladder logics; FIG. 5B illustrates a low level block diagram of a three-input exclusive OR mousetrap logic gate in accordance with FIG. 5A; FIG.
5C illustrates a low level block diagram of a combined two- input/three-input exclusive OR mousetrap logic gate in accordance with FIG. 5A; FIG. 6 illustrates a high level block diagram of a sequence of pipeline stages forming a pipeline; FIG. 7 illustrates graphically the relationship and the inherent clock asymmetry between the two clocks in the preferred embodiments wherein advantaged and disadvantaged clock phases arise; FIG. 8 illustrates a high level block diagram of a pipeline of mousetrap pipeline stages in accordance with the present invention; FIGS. 9A and 9B collectively illustrate at a high level the envisioned operation and response of the various pipeline latches of FIG. 8, in accordance with the present invention; FIG. 9A shows a high level block diagram of a pipeline latch having a vector input and a vector output for the discussion of FIG. 9B; FIG. 9B shows a state diagram for the pipeline latch of FIG. 9A; FIG. 10 illustrates a low level block diagram of a first embodiment of the mousetrap pipeline latch in FIG. 8; FIG. 11 illustrates a low level block diagram of the mousetrap latch in accordance with a second embodiment wherein the pipeline of FIG. 8 processes a vector input and a vector output, each having only two vector components; FIG. 12 shows a timing diagram corresponding to the first embodiment of the present invention; FIG. 13 illustrates a high level block diagram of the second preferred embodiment of the present invention wherein the self-timed dynamic logic gates in a pipeline stage are divided into two groups which are clocked separately; FIG. 14 shows a timing diagram corresponding to the clocks of the second preferred embodiment; FIG. 15 illustrates a low level block diagram of the architecture for the second preferred embodiment; FIG. 16 illustrates a high level block diagram of the division of several pipeline stages in accordance with the second preferred embodiment; FIG. 17 shows a timing diagram corresponding to FIG. 16; FIG. 
18 illustrates a low level block diagram of the architecture for implementing the timing diagram of FIG. 17; and FIG. 19 illustrates a high level block diagram of the third preferred embodiment of the present invention wherein the self-timed pipeline stages are cascaded without interposing latches between stages.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Table of Contents

I. Logic System
   A. Vector Logic
   B. Mousetrap Logic Gates
      1. Architecture
      2. Operation
      3. Inclusive OR Gate
      4. Add Predecoder Gate
      5. Carry Propagate Gate
      6. Shared Ladder Logic
      7. Exclusive OR Gates
II. Pipelining
   A. Overview of Pipelines
   B. Pipelining Mousetrap Logic Stages
      1. Architecture
      2. Operation
   C. Latch State Machine
   D. First Embodiment Of Latch
      1. Architecture
      2. Operation
   E. Second Embodiment Of Latch
III. Clocking System of the Present Invention
   A. First Preferred Embodiment
   B. Second Preferred Embodiment
   C. Third Preferred Embodiment

I. Logic System

Mousetrap logic gates are the subject matter focused upon in copending application Ser. No. 07/684,720 entitled "A FUNCTIONALLY COMPLETE FAMILY OF SELF-TIMED LOGIC CIRCUITS," by Jeffry Yetter, filed Apr. 12, 1991, now U.S. Pat. No. 5,208,490. The present invention is essentially directed to, among other things, the pipelining of logic stages comprised of cascaded self-timed mousetrap logic gates, as presented in detail below. However, before pipelining is discussed, a description of mousetrap gates is warranted.

A. Vector Logic

Typically, logic in a computer is encoded in binary fashion on a single logic path, which is oftentimes merely an electrical wire or semiconductor throughway. By definition, a high signal level, usually a voltage or current, indicates a high logic state (in programmer's language, a "1"). Moreover, a low signal level indicates a low logic state (in programmer's language, a "0"). The present invention envisions implementing "vector logic" by pipelining mousetrap gates.
Vector logic is a logic configuration where more than two valid logic states may be propagated through the logic gates in a computer. Unlike conventional binary logic having two valid logic states (high, low) defined by one logic path, the vector logic of the present invention dedicates more than one logic path for each valid logic state and permits an invalid logic state. For example, in accordance with one embodiment, in a vector logic system requiring two valid logic states, two logic paths are necessary. When both logic paths are at a logic low, i.e., "0,0", an invalid logic state exists by definition. Moreover, a logic high existing exclusively on either of the two logic paths, i.e., "1,0" or "0,1", corresponds with the two valid logic states of the vector logic system. Finally, the scenario when both logic paths are high, i.e., "1,1", is an undefined logic state in the vector logic system. In a vector logic system requiring three logic states in accordance with another embodiment, three logic paths would be needed, and so on. In conclusion, in accordance with the foregoing embodiment, a vector logic system having n valid logic states and one invalid state comprises n logic paths.

Furthermore, encoding of vector logic states could be handled by defining a valid vector logic state by a logic high on more than one logic path, while still defining an invalid state when all paths exhibit a low logic signal. In other words, the vector logic states are not mutually exclusive. For example, in a vector logic system using a pair of logic highs to define each valid vector logic state, the following logic scheme could be implemented. With three logic paths, "0,1,1" could designate a vector logic state 1, "1,0,1" a vector logic state 2, and "1,1,0" a vector logic state 3. With four logic paths, six valid vector logic states could be specified.
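The state counts quoted above are binomial coefficients: with n logic paths and exactly m of them driven high per valid state, the count is "n choose m." As a quick illustrative check (not part of the specification) using Python's standard library:

```python
from math import comb

# Valid vector logic states when exactly m of n logic paths are high;
# the all-low pattern is reserved as the invalid state.
for n, m in [(2, 1), (3, 2), (4, 2), (5, 2)]:
    print(f"{n} paths, {m} high per state: {comb(n, m)} valid states")
# → 2, 3, 6, and 10 valid states, matching the examples in the text.
```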
Specifically, "0,0,1,1" could designate a vector logic state 1, "0,1,0,1" a vector logic state 2, "1,0,0,1" a vector logic state 3, "0,1,1,0" could designate a vector logic state 4, "1,0,1,0" a vector logic state 5, and "1,1,0,0" a vector logic state 6. With five logic paths, up to ten valid vector logic states could be specified, and so on. As another example, a vector logic system could be derived in accordance with the present invention wherein three logic highs define each valid vector logic state. In conclusion, as is well known in the art, the above vector schemes can be summarized by a mathematical combination formula. The combination formula is as follows:

i = n!/(m!(n-m)!)

where variable n is the number of logic paths (vector components), variable m is the number of logic paths which define a valid vector logic state (i.e., the number of logic paths which must exhibit a logic high to specify a particular vector logic state), and variable i is the number of possible vector logic states.

B. Mousetrap Logic Gates

FIG. 1 illustrates a high level block diagram of a family of "mousetrap" logic gates in accordance with the present invention. Mousetrap logic gates, described in detail hereinafter, can implement vector logic at high speed, are functionally complete, are self-timed, and do not suffer adverse logic reactions resulting from static hazards when chained in a sequence of stages. As shown in FIG. 1, each input to the mousetrap logic gate 100 of the present invention is a vector, denoted by vector inputs I, J, . . . , K (hereinafter, vector variables are in bold print). No limit exists as to the number of vector inputs I, J, . . . , K. Further, each of vector inputs I, J, . . . , K may be specified by any number of vector components, each vector component having a dedicated logic path denoted respectively in FIG. 1 by I.sub.0 -I.sub.N, J.sub.0 -J.sub.M, and K.sub.0 -K.sub.S.
Essentially, each vector input specifies a vector logic state. As mentioned previously, an invalid vector logic state for any of the input vectors I, J, . . . , K is present by definition when all of its corresponding vector components, respectively, I.sub.0 -I.sub.N, J.sub.0 -J.sub.M, and K.sub.0 -K.sub.S, are at a logic low. The output of the generic mousetrap logic gate 100 is also a vector, denoted by a vector output O. The vector output O is comprised of vector components O.sub.0 -O.sub.P. The vector components O.sub.0 -O.sub.P are mutually exclusive and are independent functions of the vector inputs I, J, . . . , K. Further, the vector components O.sub.0 -O.sub.P have dedicated mousetrap gate components 102-106, respectively, within the mousetrap logic gate 100. By definition in the present invention, one and only one of O.sub.0 -O.sub.P is at a logic high at any particular time. Moreover, no limit exists in regard to the number of vector components O.sub.0 -O.sub.P which can be associated with the output vector O. The number of vector components O.sub.0 -O.sub.P and thus mousetrap gate components 102-106 depends upon the logic function to be performed on the vector inputs individually or as a whole, the number of desired vector output components, as well as other considerations with respect to the logical purpose of the mousetrap logic gate 100.

1. Architecture

With reference to FIG. 1, each mousetrap gate component 102-106 of the mousetrap logic gate 100 comprises an arming mechanism 108, ladder logic 110, and an inverting buffer mechanism 112. The arming mechanism 108 is a precharging means, or energizing means, for arming and resetting the mousetrap logic gate 100. The arming mechanism 108 essentially serves as a switch to thereby selectively impose a voltage V.sub.0 defining a logic state on a line 116 upon excitation by a clock signal (high or low) on line 114.
As known in the art, any type of switching element or buffer for selectively applying voltage based upon a clock signal can be used. Furthermore, when the logic of a computer system is based upon current levels, rather than voltage levels, then the arming mechanism 108 could be a switchable current source, which is also well known in the art. Any embodiment serving the described switching function as the arming mechanism 108 is intended to be incorporated herein. The ladder logic 110 is designed to perform a logic function on the vector inputs I, J, . . . , K. The ladder logic 110 corresponding to each mousetrap gate component 102-106 may vary depending upon the purpose of each mousetrap gate component 102-106. In the preferred embodiment, the ladder logic 110 is essentially a combination of simple logic gates, for example, logic OR gates and/or logic AND gates, which are connected in series and/or in parallel. It should be noted that the ladder logic 110 is configured in the present invention so that one and only one of the vector output components O.sub.0 -O.sub.P is at a logic high at any sampling of a valid vector output O. Specific implementations of the ladder logic 110 are described below in regard to the illustrations of FIGS. 2-5. The ladder logic 110 must operate at high speed because it resides in the critical logic path, unlike the arming mechanism 108 which initially acts by arming the mousetrap gate component, but then sits temporarily dormant while data actually flows through the mousetrap gate component, i.e., through the critical logic path. Furthermore, because the ladder logic 110 resides in the critical logic path which is essentially where the logical intelligence is positioned, a plurality of logic gates are generally required to implement the desired logic functions. Also residing in the logic path is the inverting buffer mechanism 112.
The inverting buffer mechanism 112 primarily serves as an inverter because in order to provide complete logic functionality in the mousetrap gate 100, it is necessary to have an inversion function in the critical logic path. Moreover, the inverting buffer mechanism 112 provides gain to the signal residing on line 116 and provides isolation between other potential stages of mousetrap gate components similar to the mousetrap logic gate components 102-106 of FIG. 1. The inverting buffer mechanism 112 is characterized by a high input impedance and low output impedance. Any buffer embodiment serving the described function as the buffer mechanism 112 is intended to be incorporated herein. Furthermore, worth noting is that the arming mechanism 108, the ladder logic 110, and the inverting buffer mechanism 112 could in some implementations all reside on a single integrated circuit (IC), for example, an application specific integrated circuit (ASIC) or microprocessor chip.

2. Operation

The operation of the mousetrap logic gate 100 is described below at a high conceptual level in regard to only the mousetrap gate component 102 for simplicity. The narrowing of the present discussion is well grounded, because the various mousetrap gate components 102-106 are essentially redundant with the exception of their corresponding ladder logic functions implemented by ladder logics 110, 120, and 130. Consequently, the following discussion is equally applicable to the remaining mousetrap gate components 104 and 106. In operation, upon excitation by a clock CK on the line 114, the arming mechanism 108 pulls up, or drives, the output 116 of the ladder logic 110 to a logic high. Concurrently, the arming mechanism 108 pulls the input at line 116 to the inverting buffer mechanism 112 to a logic high. Consequently, the corresponding vector component O.sub.0 on a line 117 is maintained at a logic low, defined in the present invention as an invalid state.
In the foregoing initial condition, the mousetrap logic gate 100 can be analogized as a "mousetrap," in the traditional sense of the word, which has been set and which is waiting to be triggered by the vector inputs I, J, . . . , K. The mousetrap logic gate 100 will remain in the armed predicament with the vector component O.sub.0 in the invalid state, until being triggered by the ladder logic 110. The mousetrap logic gate 100 is triggered upon receiving enough valid vector inputs I, J, . . . , K to definitively determine the correct state of the vector component O.sub.0 on the line 117. In some designs of the ladder logic 110, not all of the vector inputs will need to be considered in order to produce an output signal on line 116, and hence, on line 117. The number of vector inputs I, J, . . . , K needed to make the definitive determination of the output state and also the timing of the determination is defined by the content and configuration of the simple logic gates within the ladder logic 110. After the vector component O.sub.0 on line 117 is derived, it is passed on to the next stage (not shown) of logic. The mousetrap logic gate component 102 will not perform any further function until being reset, or re-armed, or refreshed, by the arming mechanism 108. In a sense, the timing from mousetrap gate component to mousetrap gate component as well as gate to gate depends upon the encoded data itself. In other words, the mousetrap gate components are "self-timed." Mousetrap logic gates in accordance with the present invention directly perform inverting and non-inverting functions. Consequently, in contrast to conventional dynamic logic gates, mousetrap logic gates can perform multiplication and addition, which require logic inversions, at extremely high speeds.
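The arm/trigger cycle just described can be sketched as a small behavioral model. This is illustrative software only (the class and method names are invented here, not taken from the patent), modeling a dual-rail inclusive-OR mousetrap gate component:

```python
class MousetrapOr:
    """Behavioral model of a two-input inclusive-OR mousetrap gate.

    Each input and the output is a dual-rail pair (H, L); the pair
    (0, 0) is the invalid state.  Rails only rise after precharge,
    giving the monotonic progression the text relies upon.
    """

    def __init__(self):
        self.armed = False
        self.out = (0, 0)          # invalid until triggered

    def precharge(self):
        """Arming mechanism: reset the output to the invalid state."""
        self.armed = True
        self.out = (0, 0)

    def evaluate(self, a, b):
        """Ladder logic: fire as soon as the inputs are definitive."""
        if not self.armed:
            raise RuntimeError("gate must be precharged before evaluating")
        ah, al = a
        bh, bl = b
        oh = ah | bh               # any high input proves the OR is high
        ol = al & bl               # both inputs low prove the OR is low
        if (oh, ol) != (0, 0):     # the trap has sprung
            self.armed = False
            self.out = (oh, ol)
        return self.out
```

Note that the high rail can fire from a single definitive input, while the low rail must wait for both inputs, which is exactly the data-dependent, self-timed triggering described above.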
Finally, it should be noted that the family of mousetrap logic gates 100 can be connected in electrical series to derive a combinational logic gate which will perform logic functions as a whole. Thus, a mousetrap gate component, comprising an arming mechanism, ladder logic, and an inverting buffer mechanism, can be conceptualized as the smallest subpart of a mousetrap logic gate. Moreover, various mousetrap gate components can be connected in series and/or in parallel to derive a multitude of logic gates. However, when mousetrap logic gates are chained together in long chains (perhaps greater than two or three mousetrap gate components in series), precharging of the chains might require an undesirably lengthy amount of time. The reason is that mousetrap gate components will not be able to pull their output low (invalid) until their input is pulled low. The result is that the mousetrap gate components will charge in sequence from the first to the last in the chain, thereby undesirably slowing the precharge of the overall chain. Hence, a way is needed to cause the mousetrap gate components of a chain to precharge in parallel, not in sequence. Parallel precharging can be accomplished in several different ways. A preferred way is to provide a clock-triggered n-channel MOSFET to disable the ladder logics 110, 120, and 130 of FIG. 1 during the precharging of the mousetrap gate components. In other words, a push-pull situation is implemented. The arming mechanism of a mousetrap gate component pulls (precharges) the input to the inverting buffer mechanism high, while the inserted n-channel MOSFET pulls the ladder logic low. It should be noted that the n-channel MOSFET slightly slows the operation of the mousetrap gate component. However, it should be emphasized that the n-channel MOSFET need not be implemented for every mousetrap gate component. It need only be inserted every second or third mousetrap gate component in series.
Moreover, in certain logic circuits, such as multiplication, the parallelism of the logic operation may be exploited to reduce the number of requisite n-channel MOSFETs. The foregoing embodiment for providing parallel precharging has advantages. It requires little additional power dissipation. Moreover, it can, if desired, be uniformly applied to all mousetrap gate components for simplicity. Another preferred way of providing for parallel precharging of mousetrap gate components chained in series is to periodically insert a mousetrap AND gate in the critical logic path. The mousetrap AND gate receives as inputs (1) an output vector component from a preceding mousetrap gate component and (2) the precharge clock. The output of the mousetrap AND gate is input to the next mousetrap gate component in series.

3. Inclusive OR Gate

FIG. 2 shows a low level block diagram of an example of a two-input inclusive OR mousetrap logic gate 200 in accordance with the present invention of FIG. 1. The inclusive OR mousetrap logic gate 200 can be used in a vector logic system having two logic states and one invalid logic state. As shown, the inclusive OR mousetrap logic gate 200 has two mousetrap gate components 202 and 204. The mousetrap gate component 202 comprises an arming mechanism 208, ladder logic 210, and an inverting buffer mechanism 212. The mousetrap gate component 204 comprises an arming mechanism 218, ladder logic 220, and an inverting buffer mechanism 222. Note the similarity of reference numerals with regard to FIG. 1, as well as with the other figures to follow. The inclusive OR mousetrap logic gate 200, and specifically the arming mechanisms 208 and 218, is armed by command of a clock NCK ("N" denotes active at logic low) on respective lines 214 and 224. In the preferred embodiments of the present invention, the arming mechanisms 208 and 218 are p-channel metal-oxide-semiconductor field-effect transistors (MOSFETs), as shown in FIG.
2, which are well known in the art and are commercially available. N-channel MOSFETs could be used instead of p-channel MOSFETs; however, the clocking obviously would be diametrically opposite. With reference to FIG. 2, the MOSFETs comprising the arming mechanisms 208 and 218 essentially serve as switches to thereby impose a voltage V0 on respective lines 216 and 226 upon excitation by a low clock NCK signal on respective lines 214 and 224. As further known in the art, any type of switching element for voltage can be used. Additionally, in the preferred embodiments, the simple logic in the ladder logics 210 and 220 is implemented with n-channel MOSFETs, as shown. The rationale for using n-channel MOSFETs is as follows. N-channel MOSFETs have superior drive capabilities, space requirements, and load specifications compared to p-channel MOSFETs. A typical n-channel MOSFET can generally switch approximately fifty percent faster than a comparable p-channel MOSFET having similar specifications. Furthermore, in the preferred embodiments, the inverting buffer mechanisms 212 and 222 are static CMOSFET inverters, as shown in FIG. 2, which are well known in the art and are commercially available. A CMOSFET inverter is utilized for several reasons. As stated previously, an inversion must take place in the critical logic path in order to provide functional completeness. The inversion which must take place in the critical path can be accomplished by cleverly manipulating the design (gain) of a conventional CMOSFET inverter, which comprises both a p-channel MOSFET pull-up 215 and an n-channel MOSFET pull-down 219. In other words, because of the known existence of a monotonic progression, the ratio of the widths of the MOSFET gates can be designed to favor switching in one direction [i.e., either high (1) to low (0) or low (0) to high (1)], at the expense of the other direction.
Specifically, in the particular CMOSFET inverter envisioned by the present invention, the gate width of the constituent p-channel MOSFET 215 is made wider than the gate width of the constituent n-channel MOSFET 219. Consequently, the CMOSFET inverter output switches very quickly from a logic low (0; the armed state of the mousetrap) to a logic high (1; the unarmed state of the mousetrap). The speed of the CMOSFET inverter output switching from a logic high to a logic low does not matter because the mousetrap gate 200 is precharged during this time period. Hence, the mousetrap logic gate 200 can be constructed to exhibit superior performance and size specifications in one direction, to thereby tremendously increase the speed of data transfer and reduce the size specifications of the mousetrap logic gate 200. With respect to operation, a truth table for the inclusive OR mousetrap logic gate 200 is set forth in Table A hereinafter.

TABLE A
______________________________________
  a     b     O     AH   AL   BH   BL   OH   OL
______________________________________
  inv   inv   inv   0    0    0    0    0    0
  inv   0     inv   0    0    0    1    0    0
  0     inv   inv   0    1    0    0    0    0
  1     x     1     1    0    x    x    1    0
  x     1     1     x    x    1    0    1    0
______________________________________

In the above Table A, "x" denotes an irrelevant or "don't care" situation; "inv" denotes an invalid logic state; "1" denotes a high logic state; and "0" denotes a low logic state. As indicated in Table A and shown in FIG. 2, a vector input a and a vector input b are operated upon by the inclusive OR mousetrap logic gate 200 to derive a vector output O. For discussion purposes, it is worth noting that vector input a, vector input b, and vector output O could correspond respectively with vector input I, vector input J, and vector output O of FIG. 1. Vector input a specifies a vector logic state defined by two vector components AH and AL. Vector input b specifies a vector logic state defined by two other vector components BH and BL.
Vector output O specifies a vector logic state defined by two vector components OH and OL, which collectively describe the inclusive disjunction (OR function) of vector inputs a and b. In vector notation, as shown, a=<AH,AL>; b=<BH,BL>; and O=<OH,OL>=a+b.

4. Add Predecoder Gate

FIG. 3 shows a low level block diagram of a two-input add predecoder mousetrap logic gate 300 in accordance with the present invention of FIG. 1. Well known in the art, a predecoder is logic primarily used in the arithmetic logic unit (ALU) to perform arithmetic functions, especially addition. Generally, a predecoder aids in parallel processing and facilitates control of a carry bit path. As shown, the predecoder 300 has three mousetrap gate components 302-306. Respectively, the three mousetrap gates 302-306 comprise the following: (1) an arming mechanism 308, ladder logic 310, and a buffer 312; (2) an arming mechanism 318, ladder logic 320, and a buffer 322; and (3) an arming mechanism 328, ladder logic 330, and a buffer 332. A truth table describing the operation of the add predecoder logic gate 300 is set forth in Table B hereinafter.

TABLE B
______________________________________
  a     b     O      AH   AL   BH   BL   P   K   G
______________________________________
  inv   x     inv    0    0    x    x    0   0   0
  x     inv   inv    x    x    0    0    0   0   0
  0     0     kill   0    1    0    1    0   1   0
  0     1     prop   0    1    1    0    1   0   0
  1     0     prop   1    0    0    1    1   0   0
  1     1     gen    1    0    1    0    0   0   1
______________________________________

Similar to the inclusive OR mousetrap logic gate 200 of FIG. 2, vector input a specifies a vector logic state defined by two vector components AH and AL. Vector input b specifies a vector logic state defined by two other vector components BH and BL. However, in contrast to the mousetrap logic gate of FIG. 2, vector output O specifies a vector logic state defined by three vector components P, K, and G, discussed in detail below. In vector notation, as shown, a=<AH,AL>; b=<BH,BL>; and O=<P,K,G>.
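Table B reduces to three simple monotonic rail functions. The following Python sketch is an illustrative model only (the function name is invented; the patent's gate is hardware), assuming each valid dual-rail input has exactly one rail high:

```python
def add_predecode(a, b):
    """Dual-rail add predecode: (AH, AL), (BH, BL) -> (P, K, G).

    Returns the all-zero (invalid) vector until both inputs are valid.
    """
    ah, al = a
    bh, bl = b
    k = al & bl                # a=0 and b=0: kill the carry
    g = ah & bh                # a=1 and b=1: generate a carry
    p = (ah & bl) | (al & bh)  # exactly one of a, b is 1: propagate
    return (p, k, g)
```

Each output rail is a pure AND/OR of input rails, so rails only rise during evaluation, mirroring the monotonic progression the patent relies on to avoid static hazards.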
Conventional predecoders are usually designed so that the output indicates only one of two logic states. In many implementations, the conventional predecoder indicates either that the carry should be "propagated" (designated by "P") or that the carry bit should be "killed" (designated by "K"). In other implementations, the predecoder indicates either that the carry should be "propagated" or that the carry bit should be "generated" (designated by "G"). In the present invention, as noted in Table B, the vector output O can indicate any of four logic states: an invalid state and three valid states, namely, kill, propagate, or generate. Furthermore, the add predecoder logic gate 300 must perform an exclusive OR function as part of the overall predecoder function. Conventionally, dynamic logic gates could not implement the exclusive OR function because static hazards would cause logic errors. Static hazards occur in combinational logic configurations because of propagation delays. The mousetrap logic gates of the present invention are not adversely affected by static hazards, because of self-timing. No valid vector component output is present unless all the vector inputs, needed to definitively determine the output of the ladder logic, are valid as indicated in Table B.

5. Carry Propagate Gate

FIG. 4 shows a low level block diagram of a carry propagate gate 400 in accordance with the present invention. Well known in the art, a carry propagate logic gate is oftentimes used in series with an add predecoder logic gate, as discussed previously, in order to control a carry bit path in an ALU. Specifically, the carry propagate gate 400 functions in series with the add predecoder logic gate 300 in the preferred embodiment to provide a high performance carry bit path. The carry propagate gate 400 has two mousetrap gate components 402 and 404.
The mousetrap gate component 402 comprises an arming mechanism 408, ladder logic 410, and an inverting buffer mechanism 412. The mousetrap gate component 404 comprises an arming mechanism 418, ladder logic 420, and an inverting buffer mechanism 422. To further clarify the functionality of the carry propagate gate 400, a truth table for the carry propagate gate 400 is set forth in Table C hereinafter.

TABLE C
______________________________________
I      CIN   COUT   P   K   G   CINH   CINL   COUTH   COUTL
______________________________________
inv    x     inv    0   0   0   x      x      0       0
x      inv   inv    x   x   x   0      0      0       0
kill   x     0      0   1   0   x      x      0       1
prop   0     0      1   0   0   0      1      0       1
prop   1     1      1   0   0   1      0      1       0
gen    x     1      0   0   1   x      x      1       0
______________________________________

6. Shared Ladder Logic

FIG. 5A shows a high level block diagram of an embodiment of a mousetrap logic gate wherein the ladder logics 510-520 of any number n of mousetrap gate components have been combined in a single mousetrap logic gate 500A. The mousetrap logic gate 500A is inputted with a plurality of vectors I, J, . . . , K, and/or parts thereof. In turn, the gate 500A outputs a plurality of vector output components <O.sub.1 -O.sub.n>, which can define vectors and/or partial vectors. Essentially, the logic function which generates the vector component output <O.sub.n> is a subset of all logic functions deriving vector component outputs <O.sub.1> through <O.sub.n-1>. More specifically, the vector component output <O.sub.1> is determined by ladder logics 510, 520, while the vector component output <O.sub.n> is determined by only ladder logic 520. As is apparent from FIG. 5A, this configuration saves hardware and cost: more outputs are derived with less ladder logic.

7. Exclusive OR Gates

A specific example of FIG. 5A is illustrated in FIG. 5B. FIG.
5B shows a low level block diagram of a three-input exclusive-OR (XOR) mousetrap logic gate 500B. The exclusive OR mousetrap logic gate 500B can be used for high speed sum generation in either a full or half adder and does not suffer from any adverse effects from static hazards. Sum generation logic gates are well known in the art. They are especially useful in adder and multiplier logic circuits. The exclusive OR logic gate 500B has two mousetrap gate components, having respective arming mechanisms 538 and 548 as well as inverting buffer mechanisms 532 and 542. However, as shown by a phantom block 550, the ladder logic associated with each of the two mousetrap gate components is not separated completely in hardware, but remains mutually exclusive in a logic sense. Hence, as a general proposition, because the ladder logic in each mousetrap gate component of a mousetrap logic gate uses the same type of gates, namely, n-channel MOSFETs, their logic functions can sometimes share the same hardware, thereby resulting in fewer total gates and a reduction in utilized computer real estate. A truth table indicating the operation of the exclusive OR logic gate 500B is set forth in Table D hereinafter.

TABLE D
______________________________________
a     b     c     s      AH   AL   BH   BL   CH   CL   SH   SL
______________________________________
inv   x     x     inv    0    0    x    x    x    x    0    0
x     inv   x     inv    x    x    0    0    x    x    0    0
x     x     inv   inv    x    x    x    x    0    0    0    0
0     0     0     0      0    1    0    1    0    1    0    1
0     0     1     1      0    1    0    1    1    0    1    0
0     1     0     1      0    1    1    0    0    1    1    0
0     1     1     0      0    1    1    0    1    0    0    1
1     0     0     1      1    0    0    1    0    1    1    0
1     0     1     0      1    0    0    1    1    0    0    1
1     1     0     0      1    0    1    0    0    1    0    1
1     1     1     1      1    0    1    0    1    0    1    0
______________________________________

As indicated in Table D and shown in FIG. 5B, vector input a specifies a vector logic state defined by two vector components AH and AL. Vector input b specifies a vector logic state defined by two other vector components BH and BL.
Vector input c specifies a vector logic state defined by two vector components CH and CL. Furthermore, vector output s specifies a vector logic state defined by two outputs SH and SL. In vector notation, as shown, a = <AH, AL>; b = <BH, BL>; c = <CH, CL>; and s = <SH, SL>. Another specific example of FIG. 5A is illustrated in FIG. 5C. FIG. 5C shows a low level block diagram of a three-input exclusive-OR (XOR) logic gate combined with a two-input exclusive-OR (XOR) logic gate. The input vectors are a = <AH, AL>, b = <BH, BL>, and c = <CH, CL>. Furthermore, the output vectors are the XOR logic function of vectors a and b, defined by vector component outputs <O.sub.0, O.sub.1>, as well as the XOR logic function of vectors a, b, and c, defined by vector component outputs <O.sub.n-1, O.sub.n>. The vector component outputs <O.sub.0, O.sub.1> are determined by ladder logics 560-590, while the vector component outputs <O.sub.n-1, O.sub.n> are determined by only ladder logics 580, 590. Worth noting is that FIG. 5C illustrates a mousetrap logic gate having multiple vector inputs and multiple vector outputs.

II. Pipelining

A. Overview of Pipelines

The pipelining of logic stages comprised of static logic gates is well known in the art. "Static" logic gates are traditional logic gates which do not require a periodic precharge to maintain a proper logic state. In general, "pipelining" refers to the process of commencing a new operation prior to the completion of an outstanding, or in-progress, operation for the purpose of increasing the rate of data processing and throughput. FIG. 6 illustrates a conventional pipeline (or section of a pipeline) 600 of N pipeline stages 602-608 in sequence. Each of the pipeline stages 602-608 comprises any number of stages of logic gates. Data is introduced into the pipeline 600 as indicated by an arrow 610.
The data ultimately travels through and is independently processed by each of the pipeline stages 602-608 of the sequence, as shown by successive arrows 612-618. Data is clocked through the pipeline 600 via clocks 622-628, which could be identical or staggered in phase as desired. Usually, successive pipeline stages are uniformly triggered by the same clock edge (either rising or falling) and are clocked a full cycle (360 degrees) out of phase. With respect to FIG. 6, pipelining means that new data is clocked into the pipeline 600, as indicated by the arrow 610, while old data still remains in the pipeline 600 being processed. Pipelining increases the useful bandwidth of high latency logic networks. Pipelining is often implemented to perform arithmetic operations, including floating point operations. For example, to perform multiplication, a "carry save adder" pipeline of logic stages is usually employed. Specifically, each pipeline stage is essentially several rows of conventional full adder logic stages. Moreover, each full adder compresses three partial products into two partial products. Thus, each full adder adds in another partial product as data flows through the chain of full adder logic stages in each pipeline stage. In order to perform a single multiplication operation, more than one clock cycle is usually required, but as a result of pipelining, a new multiplication operation may generally be commenced in substantially less than the total number of clock cycles, perhaps in half. The pipelining of dynamic logic gates, particularly the mousetrap logic gates shown in FIG. 1, poses peculiar problems not present in the pipelining of static logic gates. With reference to FIG. 1, mousetrap logic gates 100 require a precharge cycle in order to arm the mousetrap gate components 102-106, rendering them potentially operative. Effectively, a precharge cycle periodically interrupts the useful work cycle for the necessary purpose of maintenance.
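The full adder's role as a compressor of three partial products into two, described above, can be illustrated with a one-bit behavioral sketch (generic carry-save arithmetic for illustration; the function name is ours, and this is not a circuit from the figures):

```python
def compress_3_to_2(x, y, z):
    """One bit position of a carry-save (3:2) compressor: three addend
    bits in, a sum bit and a carry bit out. No carry ripples between bit
    positions, which is what lets the adder rows pipeline cleanly."""
    s = x ^ y ^ z                     # sum bit: three-input XOR
    c = (x & y) | (x & z) | (y & z)   # carry bit: majority function
    return s, c

# Three partial-product bits 1 + 1 + 0 compress to sum 0, carry 1:
assert compress_3_to_2(1, 1, 0) == (0, 1)
assert compress_3_to_2(1, 1, 1) == (1, 1)
```

Each row of full adders in a pipeline stage applies this compression in parallel across all bit positions, absorbing one more partial product per row.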
Precharge cycles significantly and undesirably decrease the useful bandwidth of a sequence of mousetrap pipeline stages. Moreover, if attempts are made to use both clock edges (rising and falling), as envisioned by the present invention, in order to hide the precharge during the "off duty" clock time of a pipeline stage (when the pipeline stage is not propagating data), then the mousetrap logic gates are adversely affected by a phenomenon known as "clock asymmetry." This concept is discussed in detail with respect to FIG. 7 below. FIG. 7 graphically illustrates a possible two clock system which may be employed with the pipeline 600 of FIG. 6. In the hypothetical scenario, the odd numbered logic stages of the N logic stages 602-608 are clocked by a clock CK1. Moreover, the even numbered logic stages are clocked by a clock CK2. The two clock system is desirable in order to hide the precharge delay from the forward logic path, as envisioned by the present invention. As shown in FIG. 7, clocks CK1 and CK2 are intended by design to switch simultaneously, to be ideally alternating (180 degrees out of phase), and to have a 50 percent duty cycle with respect to one clock state (t.sub.period) of the computer system's clock. However, because of unavoidable clock asymmetry, an "advantaged phase" (t.sub.1') and a "disadvantaged phase" (t.sub.2') will arise in reality, as comparatively shown in FIG. 7. Generally, clock asymmetry results from inherent physical inequities in the manufacture of clock generation circuits. The condition arises when the pipeline stages 602-608 of FIG. 6 are alternately clocked, each, by design, with a fifty percent duty cycle. A precise time allocation (duty cycle) to the individual pipeline stages 602-608 of FIG. 6 can never be achieved.
A precise allocation or clocking of time to insure that each pipeline stage 602-608 of the pipeline 600 has an identical duty cycle is important because it tremendously affects the useful bandwidth of the pipeline 600. The pipeline 600 will function with the two clock system of FIG. 7, but the cycle time for the pipeline 600 will be limited by the period of the disadvantaged phase. In other words, the speed of pipeline 600 is less than optimal because valuable time is wasted in the pipeline stages (either even or odd) operating in the advantaged phase. More time is accorded to the pipeline stages corresponding with the advantaged phase than is necessary for complete operation of those pipeline stages. Worth noting is that the clock asymmetry cannot be compensated for by balancing delays in the pipeline stages, because the direction of the time deviation cannot be known. If the pipeline 600 used pipeline stages 602-608 having static logic gates, such as conventional edge-triggered latch paradigm systems, clock asymmetry would not be a problem, because only one of the clock edges, i.e., either the rising or falling clock edge, is utilized for clocking each pipeline stage. The problem is solved because the time period separating two parallel clock edges can be precisely controlled with simple and inexpensive conventional circuitry. However, in regard to dynamic logic gates, such as the mousetrap logic gates 100 shown in FIG. 1, the foregoing solution is not desirable because optimally both clock edges should perform a purpose (either precharge or propagation) in order to achieve high performance by hiding the precharging operation from the forward logic path.

B. Pipelining Mousetrap Logic Stages

1. Architecture

The pipelining of self-timed mousetrap logic stages is subject matter focused upon in parent application serial no. , entitled "UNIVERSAL PIPELINE LATCH FOR MOUSETRAP LOGIC CIRCUITS," filed Apr. 12, 1991.
The present invention is essentially directed to, among other things, optimizing the feed of energizing clock signals to the pipeline of mousetrap logic stages so as to facilitate self-timing, as presented in detail further below. However, before the present invention is discussed, a description of pipelining mousetrap logic stages is set forth below. FIG. 8 illustrates a high level block diagram of a pipeline 800 of N mousetrap pipeline stages 802-808. Each of the mousetrap pipeline stages 802-808 comprises one or more mousetrap logic gates, as shown in FIG. 1, connected in series and/or in parallel. As further shown in FIG. 8, N pipeline latches 812-818 are associated in correspondence with the N mousetrap pipeline stages 802-808. Furthermore, in the preferred embodiments, an alternating two clock system is implemented, as previously discussed in regard to FIG. 7, in order to hide the precharge of the mousetrap logic gates in the mousetrap pipeline stages 802-808. The rising edge of a clock pulse from a clock CK actuates the input vectors to a pipeline stage, which comprises one or more already-armed mousetrap gates, and the falling edge of the same clock CK precharges the arming mechanisms of the same one or more mousetrap gates for the next vector inputs.

2. Operation

The operation of the pipeline 800 proceeds as follows. During the high time of clock CK1, a valid vector input is driven to the pipeline stage 802 by the pipeline latch 812 (latch 1). Moreover, pipeline stage 802 (stage 1) produces a valid vector output. The foregoing actions occur at all odd numbered stages during the high time of the clock CK1. Furthermore, during the high time of clock CK1, the clock CK2 is low. Consequently, the vector input to pipeline stage 804 (stage 2) is driven invalid by the pipeline latch 814 (latch 2), which is driven, or enabled, by the high time of clock CK2.
Moreover, the pipeline stage 804 (stage 2) produces an invalid vector output because the pipeline stage 804 (stage 2) is forced into an armed predicament by the clock CK2 at low time. See FIG. 2, where NCK (active low) operates arming mechanisms 208 and 218. The foregoing actions occur at all even numbered stages during the high time of clock CK1, i.e., during the low time of clock CK2. Next, the clocks CK1 and CK2 flip-flop, or reverse states. The clock CK2 transitions high, while the clock CK1 transitions low. The leading edge of the clock CK2 actuates the pipeline latch 814 (latch 2). Accordingly, the vector input to pipeline stage 804 (stage 2) is driven valid by the pipeline latch 814 (latch 2). Moreover, the pipeline stage 804 (stage 2) produces a valid vector output. The foregoing actions occur at all even numbered pipeline stages during the high time of clock CK2. Furthermore, during the high time of the clock CK2, the clock CK1 is low. As a result, the vector input to pipeline stage 802 (stage 1) is driven invalid by pipeline latch 812 (latch 1), which is driven by the clock CK1 at high time. The pipeline stage 802 (stage 1) produces an invalid vector output because the pipeline stage 802 (stage 1) is forced into an armed predicament by the low clock CK1. The foregoing actions occur at all odd numbered stages during the high time of clock CK2. As a result of the above described operation parameters, one operation cycle starts and another finishes during each clock state (CK.sub.machine = CK1 + CK2) of the computer system. The precharge latency for the even numbered stages coincides with the logic propagation delay in the odd numbered stages, and vice versa. Thus, the overall delay incurred for precharging is hidden. Another significant aspect of the pipeline 800 illustrated in FIG. 8 is that it provides for insensitivity to clock asymmetry.
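The alternating evaluate/precharge pattern described above can be summarized as a toy schedule (illustrative only; stage and clock naming follow FIG. 8, and the function is ours):

```python
def schedule(num_stages, half_cycles):
    """For each half cycle, mark each stage EV (evaluating) or PC
    (precharging). Odd stages evaluate while CK1 is high; even stages
    evaluate while CK2 is high, so each stage's precharge is hidden in
    its off-duty half cycle."""
    table = []
    for t in range(half_cycles):
        ck1_high = (t % 2 == 0)          # CK1 and CK2 simply alternate
        table.append(['EV' if (stage % 2 == 1) == ck1_high else 'PC'
                      for stage in range(1, num_stages + 1)])
    return table

# At every instant, half of the stages do useful work while the other
# half precharge, and the roles swap each half cycle:
assert schedule(4, 2) == [['EV', 'PC', 'EV', 'PC'],
                          ['PC', 'EV', 'PC', 'EV']]
```

This idealized schedule assumes perfectly alternating clocks; the surrounding text explains how the pipeline latches let stages in the disadvantaged phase "steal" time when the real clocks are asymmetric.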
The pipeline stages (odd or even numbered) which operate in the disadvantaged phase effectively "steal" time from the pipeline stages (even or odd numbered, respectively) which operate in the advantaged phase. The ability to steal time is available in part as a result of the inherent characteristics of mousetrap logic gates and in part as a result of the unique design and methodology of the pipeline latches 812-818. Specifically, during the low time of a clock when precharging takes place at a particular pipeline stage, the vector outputs of the particular pipeline stage are forced to the invalid state. Moreover, during the high time of the clock, the vector inputs of the particular pipeline stage are forced to the valid state by enablement of the corresponding pipeline latch. Optimally, the vector inputs transition to a valid state and travel through the corresponding latch before the clock falls. Then, the vector inputs are processed by the pipeline stage and are gated to the stage's output when the clock falls. However, if the clock falls before the transition of vector inputs to the valid state and subsequent transmission of the valid vector inputs to the stage, then the pipeline latch at the input of the pipeline stage behaves as a "data-triggered" latch for the duration of the clock low time. In other words, a late arriving valid input state will be transferred immediately to the pipeline stage and processed by the pipeline stage. The pipeline stage's vector output is derived and persists at the output until the next transition to clock high time. As a specific example to illustrate how time is stolen from the advantaged phase, consider the hypothetical proposition that pipeline stage 804 (stage 2) is initially active (propagation) and operates in the disadvantaged phase.
The pipeline stage 804 (stage 2) can produce its result after the disadvantaged phase has already passed, or lapsed, due to the precharged predicament of its mousetrap logic gates. Specifically, the precharging of mousetrap logic gates is slow compared to the forward logic delay. Thus, the one or more mousetrap logic gates in the pipeline stage 804 can produce a valid vector output even after the precharge cycle is relinquished. Further, because all of the vector inputs to pipeline latch 816 (latch 3) are self-timed, the pipeline latch 816 (latch 3) is designed to capture the vector output of stage 804 (stage 2) well after the disadvantaged phase of stage 804 (stage 2), i.e., during the advantaged phase of stage 806 (stage 3). The vector output will be driven into the pipeline stage 806 (stage 3) slightly late in time, but proper functioning will occur because the pipeline stage 806 (stage 3) has time to waste, given that it is operating in the advantaged phase. Hence, pipeline stage 806 (stage 3) has in effect stolen time to pay the deficit in pipeline stage 804 (stage 2). Moreover, the pipeline 800 will operate as if the two clocks CK1 and CK2 had a perfect 50 percent duty cycle (t.sub.1 = t.sub.2), as shown graphically in the upper portion of FIG. 7.

C. Latch State Machine

The pipeline latches 812-818 can be implemented as state machines operating in accord with a state diagram 900B of FIG. 9B. For proper understanding of the state diagram 900B, FIG. 9A shows a high level block diagram of a corresponding pipeline latch 900A having a vector input I = <I.sub.1, I.sub.2, . . . , I.sub.N> and a vector output O = <O.sub.1, O.sub.2, . . . , O.sub.N>. By designing the pipeline latch 900A to operate consistent with the state diagram 900B, logic operations can be performed by the novel pipeline 800 without a requisite minimum clock frequency.
Mousetrap logic gates which are pipelined in the conventional fashion, such as in FIG. 6, must maintain a minimum clock frequency in order to insure proper functioning. Proper operation of individual mousetrap gates requires that an electrical charge be deposited and maintained on the associated buffer mechanisms (reference numerals 112, 122, and 132 of FIG. 1) to maintain proper logic states. In reality, the electrical charge deposited on the buffer mechanisms eventually will discharge to an unknown logic level and thereby corrupt the state of the pipeline. The decay results from uncontrollable design characteristics. Accordingly, vector outputs of mousetrap logic gates decay to an invalid logic state, defined as the case when more than one vector component is high. In most practical situations, the preceding problem may be overcome via a periodic refresh cycle, similar to the refresh cycle in conventional DRAM. Hence, a minimum clock rate, analogous to refresh cycles, must be maintained. The minimum clock rate poses an additional problem. Many times, logic gates are required to operate arbitrarily slowly, for instance, during IC testing. Conventionally, dynamic logic gates can be modified to exhibit slow operation by including "trickle charge" devices or "cross-coupled negative feedback" devices. However, these devices consume valuable computer real estate and further decrease the speed of the logic gates. In order to eliminate the need for the pipeline 800 to operate at a minimum clock rate, the pipeline latches 812-818 are implemented as state machines operating in accord with the state diagram 900B of FIG. 9B. In the state diagram 900B of FIG. 9B, "RESET" is defined, for purposes of discussion, as follows: RESET = CK*INVALID I = CK*<I.sub.1, I.sub.2, . . . , I.sub.n = 0>. Furthermore, the states of the state machine 900B are defined as indicated in the state table, Table E, set forth below.
TABLE E
______________________________________
States of State   Status of        Status of
Machine           Vector Output    Vector Components
______________________________________
0                 invalid          all = 0
1                 valid            O.sub.1 = 1, all others = 0
2                 valid            O.sub.2 = 1, all others = 0
N                 valid            O.sub.N = 1, all others = 0
______________________________________

D. First Embodiment of Latch

FIG. 10 illustrates a low level block diagram of an exemplary mousetrap pipeline latch 1000, corresponding with any one of the mousetrap pipeline latches 812-818 in FIG. 8. The latch 1000 is a first embodiment. For discussion purposes, only a single vector input I and a single vector output O are shown and described, but the discussion is equally applicable to any number of vector inputs and outputs.

1. Architecture

As shown in FIG. 10, the latch 1000 of the first embodiment comprises a latch reset mechanism 1002, an input trigger disabling mechanism 1004, an input trigger mechanism 1006, a flip-flop mechanism 1008, an output gating mechanism 1010, and a latch enable pull-up mechanism 1012. More specifically, as shown in FIG. 10, the latch reset mechanism 1002 comprises a combination of a CMOSFET inverter and a MOSFET for each of the vector components I.sub.1 -I.sub.N of a vector input I. A CMOSFET inverter 1020 and an n-channel MOSFET 1022 correspond with an input vector component I.sub.1. A CMOSFET inverter 1024 and an n-channel MOSFET 1026 correspond with an input vector component I.sub.2. Finally, a CMOSFET inverter 1028 and an n-channel MOSFET 1030 correspond with an input vector component I.sub.N. The inverse of each of the foregoing input vector components I.sub.1 -I.sub.N is derived by the corresponding inverter, and the result is used to switch the respective MOSFET. The input trigger disabling mechanism 1004 comprises n-channel MOSFETs 1032-1037. A dual set of MOSFETs is allocated to each of the N input vector components I.sub.1 -I.sub.N.
The MOSFETs 1032-1037 serve to pull a latch enable 1038 low as needed during operation, which is discussed in specific detail later. The input trigger mechanism 1006 has n-channel MOSFETs 1040-1044, one MOSFET for each of the N input vector components I.sub.1 -I.sub.N. The MOSFETs 1040-1044 are actuated by the N input vector components and serve to trigger the pipeline latch 1000. The flip-flop mechanism 1008 comprises dual sets of conventional inverters, configured as shown. The pair of inverters 1048 and 1050 correspond with the input vector component I.sub.1. The pair of inverters 1052 and 1054 correspond with the input vector component I.sub.2. Finally, the pair of inverters 1056 and 1058 correspond with the input vector component I.sub.N. The output gating mechanism 1010 comprises N AND gates corresponding to the N vector components of a vector output O. As shown, the AND gates have inverted inputs. An AND gate 1060 with inverters 1062 and 1064 is associated with the output vector component O.sub.1. An AND gate 1066 with inverters 1068 and 1070 is associated with the output vector component O.sub.2. Finally, an AND gate 1072 with inverters 1074 and 1076 is associated with the output vector component O.sub.N. The latch enable pull-up mechanism 1012 comprises a p-channel MOSFET 1078 which pulls the latch enable 1038 to a logic high when necessary. The specific operation of the pipeline latch 1000 is described below. The operation is in accordance with Table E, set forth previously.

2. Operation

The following sequence of events, or cycle, is applicable to the pipeline latch 1000 when the vector input I turns valid from invalid during the high time of clock CK. More generally, the following sequence of events will occur in the latch 1000 when the latch 1000 drives the input to a pipeline stage operating in the disadvantaged phase or when the pipeline 800 is operating very slowly (at DC).
In other words, the vector inputs to the latch 1000 are produced by a preceding pipeline stage operating in the advantaged phase.

Clock High Time

(a) The latch enable 1038 is initially low. Moreover, by the design of the circuitry, note that d.sub.1 + d.sub.2 + . . . + d.sub.n = not(latch enable) = 1.
(b) The vector output O is forced invalid (all vector components low; O.sub.1 -O.sub.N = 0) by the output gating mechanism 1010 via the AND gates 1060, 1066, and 1072 with the high clock signal (either CK1 or CK2, depending upon the position of the latch in the pipeline).
(c) The vector input I is invalid (all vector components low; I.sub.1 -I.sub.N = 0), as a result of the invalid vector output from the previous pipeline stage caused by precharging.
(d) The flip-flop mechanism 1008 is reset, via the latch reset mechanism 1002, such that d.sub.1 -d.sub.N = 0, because of the invalid input vector components I.sub.1 -I.sub.N = 0. Consequently, all pull-down MOSFETs 1032, 1034, and 1036 on the latch enable 1038 of the input trigger disabling mechanism 1004 are turned off. As a result, the latch enable 1038 gets pulled high by the latch enable pull-up mechanism 1012. Worth noting is that latch enable = not(d.sub.1 + d.sub.2 + . . . + d.sub.n) = 1.
(e) The latch 1000 will remain in this steady state until the vector input I transitions valid (one of vector components I.sub.1 -I.sub.N goes high). The high vector component actuates a MOSFET (1040, 1042, or 1044) of the input trigger mechanism 1006. As a result, a low signal appears at the input of the respective flip-flop, despite the fact that the latched flip-flop value (one of d.sub.1 -d.sub.N) is attempting to impose a high signal at the input. In other words, the series connection of MOSFETs (1033, 1040; 1035, 1042; 1037, 1044) which is pulling low wins over the flip-flop pulling high.
(f) As a result of step (e), the respective one of d.sub.1 -d.sub.N is turned high.
Hence, the high vector component is recognized and is "latched" (preserved) at the respective flip-flop as one and only one of d.sub.1 -d.sub.N. Moreover, the latch enable 1038 is pulled low through the corresponding pull-down MOSFET (1032, 1034, or 1036), thereby disabling the input trigger mechanism 1006.
(g) At this point, the clock can be stopped without losing the state of the vector input I. The vector input I has been recognized as valid and is preserved. Moreover, the input trigger mechanism 1006 is disabled (latch enable = 0). Importantly, if an illegal state on the input vector I occurs, i.e., if another vector component goes high, as a result of node decay or some other reason, the pipeline latch 1000 will ignore the illegal state.
(h) Finally, the clock CK transitions low.

Clock Low Time

(i) The vector output O is gated valid. In other words, the flip-flop with the latched, high vector component will transmit the high signal to the respective AND gate. All other AND gates will not emit an output signal.
(j) The vector input I turns to the invalid state as a result of the forced invalid output setting of the previous stage due to precharging. The latch reset mechanism 1002 remains disabled and latch enable 1038 is low.
(k) The clock transitions high and the foregoing cycle is repeated.

The following sequence of events is applicable to the pipeline latch 1000 when the vector input turns valid from invalid after a clock high time. More generally, the following sequence of events will occur in the latch 1000 when the latch 1000 drives the input to a mousetrap pipeline stage operating in the advantaged phase, i.e., when the latch 1000 is receiving inputs from a pipeline stage operating in the disadvantaged phase.

Clock High Time

(a) The latch enable 1038 is initially low. Moreover, by the design of the circuitry, note that d.sub.1 + d.sub.2 + . . . + d.sub.n = not(latch enable) = 1.
(b) The vector output O is forced invalid (all vector components low; O.sub.1 -O.sub.N = 0) by the output gating mechanism 1010 via the AND gates 1060, 1066, and 1072 with the high clock signal (either CK1 or CK2, depending upon the position of the latch in the pipeline).
(c) The vector input I is invalid (all vector components low; I.sub.1 -I.sub.N = 0), as a result of the invalid vector output from the previous pipeline stage caused by precharging.
(d) The flip-flop mechanism 1008 is reset, via the latch reset mechanism 1002, such that d.sub.1 -d.sub.N = 0, because of the invalid input vector components I.sub.1 -I.sub.N. Consequently, all pull-down MOSFETs 1032, 1034, and 1036 on the latch enable 1038 of the input trigger disabling mechanism 1004 are turned off. As a result, the latch enable 1038 gets pulled high by the latch enable pull-up mechanism 1012. Worth noting is that latch enable = not(d.sub.1 + d.sub.2 + . . . + d.sub.n) = 1.
(e) The clock transitions low.

Clock Low Time

(f) The vector output O is gated out of the latch 1000. Because no valid input has yet been received, d.sub.1 -d.sub.N = 0 and the vector output O remains invalid (all vector components are low).
(g) The vector input I transitions valid (one of the vector components goes high). The high vector component actuates the corresponding MOSFET of the input trigger mechanism 1006. Consequently, the high vector component is recognized and passes through the corresponding MOSFET and directly through the corresponding AND gate. Said another way, the vector output O transitions to a valid state. In a sense, the latch 1000 operates after its allotted clock time as a "transparent" latch. It steals time from the subsequent stage in the pipeline.
(h) In turn, the flip-flop mechanism 1008 pulls the latch enable 1038 low through the corresponding pull-down MOSFET (1032, 1034, or 1036), thereby disabling the input trigger mechanism 1006.
(i) The vector input turns to the invalid state as a result of the forced invalid output setting of the previous stage due to precharging. The vector output remains valid (latched). Moreover, the latch reset mechanism 1002 remains disabled and latch enable 1038 remains low.
(j) The clock transitions high and the foregoing cycle is repeated.

It should be noted that the output gating mechanism 1010 of the foregoing first latch embodiment may be redundant with clock gating structures in the following pipeline stage (which were implemented to facilitate parallel precharge, as discussed earlier). If this is true, the output gating mechanism 1010 can be eliminated from the latch proper without any change in system behavior.

E. Second Embodiment of Latch

FIG. 11 illustrates a low level block diagram of a mousetrap pipeline latch 1100, which is useful when the pipeline latch 1000 of FIG. 10 has a vector input I and a vector output O having only two vector components (N = 2). As shown, the latch 1100 comprises latch reset mechanisms 1102A and 1102B, an input trigger disabling mechanism 1104, an input trigger mechanism 1106, a flip-flop mechanism 1108, and an output gating mechanism 1110. Several aspects of the latch 1100 are worth noting. A cross-over network, denoted by reference numerals 1180 and 1182, has been implemented. As a consequence, no latch enable pull-up mechanism 1012 as in FIG. 10 is needed. Moreover, the inverters 1020, 1024, and 1028 shown in the latch reset mechanism 1002 of FIG. 10 are not required and have been eliminated, thereby further reducing the size and complexity of the circuit. In operation, at a high conceptual level, the latch 1100 functions in accordance with the methodology set forth in regard to the latch 1000 of FIG. 10 to perform the same purpose.
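The latch behavior of Table E and the state diagram 900B can be sketched as a small state machine (a simplified behavioral model for illustration only, not the transistor circuits of FIGS. 10 and 11; the class and method names are ours, and the clock gating of the output is deliberately omitted):

```python
# Behavioral sketch of the pipeline latch state machine (FIG. 9B, Table E):
# the latch idles in state 0 while the vector input is invalid, captures
# the first high vector component, then ignores later input changes
# (including illegal states from node decay) until RESET, defined as
# clock high with an all-low (invalid) input.
class PipelineLatch:
    def __init__(self, n):
        self.n = n
        self.state = 0            # 0 = invalid; k = component k latched

    def step(self, ck, inputs):
        """Advance one step given the clock level and the N-component
        vector input (list of 0/1); return the latched vector output."""
        if ck and not any(inputs):
            self.state = 0                        # RESET = CK * (invalid I)
        elif self.state == 0 and any(inputs):
            self.state = inputs.index(1) + 1      # latch first high component
        return self.output()

    def output(self):
        return [1 if self.state == k + 1 else 0 for k in range(self.n)]

latch = PipelineLatch(3)
assert latch.step(True, [0, 0, 0]) == [0, 0, 0]   # invalid input: state 0
assert latch.step(True, [0, 1, 0]) == [0, 1, 0]   # component 2 latched
assert latch.step(False, [0, 0, 0]) == [0, 1, 0]  # input gone invalid: held
assert latch.step(False, [1, 0, 0]) == [0, 1, 0]  # illegal late change ignored
assert latch.step(True, [0, 0, 0]) == [0, 0, 0]   # RESET clears the latch
```

Because the latched value persists until an explicit RESET, this model, like the circuits it abstracts, has no minimum clock rate: the clock can stop indefinitely without losing the captured state.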
Again, as stated with respect to the first latch embodiment, it should be noted that the output gating mechanism 1110 of the second latch embodiment may be redundant with clock gating structures in the following pipeline stage (which were implemented to facilitate parallel precharge as discussed earlier). If this is true, the output gating mechanism 1110 can be eliminated from the latch proper without any change in system behavior. III. Clocking System of the Present Invention A. First Preferred Embodiment FIG. 12 shows a first preferred embodiment of a clocking system and method in accordance with the present invention. As previously discussed, the clocks CK1 and CK2, which are indicated in FIG. 12 at respective reference numerals 1202 and 1204, are each directed to an exclusive set of alternate pipeline stages. Moreover, the clocks CK1 and CK2 can exhibit a 50% duty cycle. In other words, half of each clock cycle is dedicated to precharging (PC) the corresponding set of alternate pipeline stages (even or odd numbered set), while the other half of the cycle is dedicated to permitting self-timed logic evaluation in the corresponding set of alternate pipeline stages. However, because of inherent clock asymmetry, the leading and falling edges of clocks CK1 and CK2 can vary by a deviation time t.sub.x, as illustrated in FIG. 12 by phantom lines 1206, 1208. In other words, a set of alternate pipeline stages will operate in a "disadvantaged phase" having a shortened (<50%) precharge and logic evaluation period, while the correlative set of pipeline stages 802-808 will operate in an "advantaged phase" having a lengthened (>50%) precharge and logic evaluation period. To overcome the clock asymmetry problem, the N pipeline latches 812-818 permit the pipeline stages 802-808 operating in the disadvantaged phase to steal time from the pipeline stages 802-808 operating in the advantaged phase. 
Thus, data can flow from one pipeline stage to another anytime during the time window defined by the deviation time t.sub.x. The N latches 812-818 further permit very slow, or "DC", operation of the pipeline 800 by preserving valid vector logic states. In essence, the latches 812-818 look at and preserve their inputs before or during their corresponding clock edges. In accordance with the first preferred embodiment of the present invention, the clocks CK1 and CK2 are adjusted so that the precharging period (PC) for each pipeline stage 802-808 is substantially less than the evaluation period (EV), as indicated by clocks CK1', CK2' denoted by respective reference numerals 1212, 1214 in FIG. 12. As a result of the novel timing scheme, the deviation time t.sub.x, indicating the window of time in which data can flow from one pipeline stage to another, has been greatly expanded, as indicated by phantom lines 1216, 1218 enclosing t.sub.x'. In essence, the evaluation periods corresponding to each of the clocks CK1' and CK2' are overlapping. As a further result of the novel clocking scheme, the N pipeline stages 802-808 may be designed to exhibit varying evaluation times. In other words, the N pipeline stages 802-808 may have varying numbers of self-timed dynamic logic gates. The novel clocking scheme illustrated in FIG. 12 may be implemented in many ways, as is well known in the art. B. Second Preferred Embodiment A second preferred embodiment of the present invention will now be described and illustrated in regard to FIGS. 13-15. In effect, the clocking scheme of the second preferred embodiment ultimately results in the clocking outcome associated with the first preferred embodiment of FIG. 12, but provides for greater flexibility, efficiency, and ease of implementation. 
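The widening of the data-transfer window from t.sub.x to t.sub.x' in FIG. 12 can be checked numerically. The sketch below is an illustration invented for this discussion, not part of the patent: each clock is modeled as a square wave over integer ticks (1 = precharge, 0 = evaluate), with the second clock offset by half a period, as CK1' and CK2' are.

```python
def clock_level(period, precharge_ticks, phase, t):
    """1 during the precharge portion of the cycle, 0 during evaluation."""
    return 1 if (t - phase) % period < precharge_ticks else 0


def evaluation_overlap(period, precharge_ticks):
    """Ticks per period during which two clocks, offset by a half period,
    both permit logic evaluation (the data-transfer window)."""
    half = period // 2
    return sum(
        1
        for t in range(period)
        if clock_level(period, precharge_ticks, 0, t) == 0
        and clock_level(period, precharge_ticks, half, t) == 0
    )
```

With a 50% duty cycle (precharge = 50 of 100 ticks) the two evaluation periods never overlap; shrinking precharge to 20 ticks opens a 60-tick overlap per period, mirroring the expanded window t.sub.x'.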
The second preferred embodiment is desirable over the first preferred embodiment because the same timing effect for the critical logic path is achieved, but the mousetrap logic gates are accorded a 50% duty cycle for precharging. Essentially, the second preferred embodiment involves dividing the mousetrap (MT) logic gates within each pipeline stage 802-808 into two successive groups, and then clocking each group by a different clock. More specifically, consider FIG. 13. As shown in FIG. 13, for instance, the pipeline stage 802 can comprise any number n of cascaded mousetrap logic gates 1302-1316. In accordance with the second preferred embodiment, these mousetrap logic gates 1302-1316 are divided into two groups, one group comprising any number y of mousetrap logic gates 1302-1306 and another group comprising any number z of mousetrap logic gates 1312-1316, where y+z=n. Further, a start clock CK1.sub.S is connected to the pipeline latch 812 and the y mousetrap logic gates 1302-1306, while an end clock CK1.sub.E is connected to the z mousetrap logic gates 1312-1316. The clocking scheme for clocks CK1.sub.S and CK1.sub.E is shown in FIG. 14. The start clock CK1.sub.S is preferably synchronized with the system clock (not shown). The end clock CK1.sub.E is slightly out of phase with the start clock CK1.sub.S by a selectable predetermined time t.sub.y. Said another way, the rising edge of the end clock CK1.sub.E lags in time from the rising edge of the start clock CK1.sub.S by the time t.sub.y. In operation, the start clock CK1.sub.S initiates the logic evaluation period of the pipeline stage 802. As a result, self-timed logic evaluations will commence in the pipeline stage 802. After a time t.sub.1, the self-timed logic evaluation has progressed entirely through the y mousetrap logic gates 1302-1306, as shown in FIG. 13. 
At this point, the end clock CK1.sub.E should have already initiated the logic evaluation period allocated to the z mousetrap logic gates 1312-1316. As a result, the logic evaluations will progress through the z mousetrap logic gates 1312-1316 within a time t.sub.z, at which point the vector output will be latched into the pipeline latch 814. Next, the pipeline stage 802 is precharged for a time t.sub.p in preparation for the next logic evaluation period, as indicated in FIG. 14. In conclusion, as a result of the second preferred embodiment, each of the n mousetrap logic gates 1302-1316 is accorded a 50% duty cycle for precharge; however, the critical logic path is accorded a long evaluation period and a short precharge period in each pipeline stage 802-808. In accordance with another significant aspect of the second preferred embodiment, the n mousetrap logic gates 1302-1316 are configured such that the number y of mousetrap logic gates 1302-1306 is much less than the number z of mousetrap logic gates 1312-1316. In a specific implementation of a multiplier, y=2 and z=11. As a result of the foregoing configuration, only about 20% of the mousetrap logic gates load the system clock, with which the start clock CK1.sub.S is synchronized. This arrangement substantially eases the burden on the system clock and permits an inexpensive implementation of the present invention by allowing the start clock CK1.sub.S to be a fast clock, while the end clock CK1.sub.E can be an inexpensive slow clock. A circuit for implementing the two-clock clocking scheme of the second preferred embodiment is illustrated in FIG. 15. Referring to FIG. 15, the system clock 1502 is used to directly derive the start clock CK1.sub.S 1504. Furthermore, the end clock CK1.sub.E, denoted by reference numeral 1506, is derived from the system clock 1502 by implementing a propagation time delay. 
The propagation time delay is preferably implemented via a series of cascaded inverters 1508, 1512, which may be of the CMOSFET type. Two inverters 1508, 1512 are utilized so as to maintain the same polarity between the end clock CK1.sub.E and the start clock CK1.sub.S. Obviously, any number of inverters 1508, 1512 could be implemented to derive any desired propagation time delay. An even number of inverters 1508, 1512 will produce a time-delayed system clock waveform having the same polarity as the system clock 1502, while an odd number of inverters 1508, 1512 will produce a time-delayed system clock waveform having the opposite polarity of the system clock 1502. For a clearer understanding of the second preferred embodiment as applied to multiple pipeline stages, FIGS. 16-18 illustrate the preferred clocking scheme as applied to adjacent pipeline stages of the pipeline 800. As shown in FIG. 16 as an example, pipeline stage 802 exhibits the same division of mousetrap logic gates (y gates and z gates) as discussed with respect to FIGS. 13-14. Furthermore, the pipeline stage 804 is divided into a set of l mousetrap gates 1602 and m mousetrap logic gates 1604. The numbers l and m are arbitrary and could correspond with y and z, respectively, if desired. However, an important aspect of the present invention is that pipeline stages can have varying numbers of cascaded gates, and hence, different evaluation times. Accordingly, y+z need not equal l+m. FIG. 17 illustrates a timing diagram for the start clocks CK1.sub.S, CK2.sub.S and the end clocks CK1.sub.E, CK2.sub.E. As a result of the timing of the foregoing clocks, the pipeline stage 802 exhibits an evaluation period and a precharge period as shown by clock CK1' in FIG. 17. Moreover, the pipeline stage 804 exhibits an evaluation and precharge period as shown by CK2' in FIG. 17. 
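How the start/end-clock division shapes the wave of self-timed evaluations through a stage can be illustrated with a toy latency model. The function below is invented for illustration and is not the patent's method; it assumes each gate evaluates only after its group's clock has opened its evaluation period and the wave has arrived from the preceding gate.

```python
def stage_latency(gate_delays, t_y, y):
    """Time for a self-timed wave to cross one pipeline stage.

    gate_delays: per-gate evaluation delays for the n cascaded gates.
    t_y: lag of the end clock behind the start clock.
    y: number of gates in the first (start-clock) group; the remaining
       gates form the second (end-clock) group.
    """
    t = 0.0
    for i, delay in enumerate(gate_delays):
        group_opens = 0.0 if i < y else t_y
        # A gate waits for its group's clock, then evaluates.
        t = max(t, group_opens) + delay
    return t
```

With unit gate delays and the multiplier split (y=2, z=11), an end-clock lag t.sub.y no greater than t.sub.1 adds no latency, while a larger lag stalls the wave at the group boundary.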
The flow of logic evaluations through the pipeline stages can be conceptualized as a ripple, or wave, through the pipeline 800, as indicated by arrow 1702 in FIG. 17. Essentially, logic evaluations in the critical logic path progress in a completely self-timed manner without hindrance of the clocks and with propulsion by the precharging supplied by the clocks. FIG. 18 illustrates the architecture for generation of the clocks shown in the timing diagram of FIG. 17. As shown in FIG. 18, the start clock CK1.sub.S is derived directly from the system clock 1502. The end clock CK1.sub.E is derived by sending the system clock 1502 through inverters 1804, 1806. Furthermore, the start clock CK2.sub.S is generated by sending the system clock 1502 through an inverter 1802. The end clock CK2.sub.E is derived by passing the system clock through inverters 1802, 1814, 1816. Preferably, the inverters 1802-1816 of FIG. 18 are CMOSFET inverters. In addition, these CMOSFET inverters may have ratioed PMOSFET to NMOSFET widths so as to further enhance the speed of logic evaluations through the critical logic path, as described hereafter. Specifically, the rising edge of each clock which triggers the latch output (AND gates 1060, 1066, 1072 of FIG. 10) is designed to rise very quickly. In other words, as shown in FIG. 17, the rising edges 1704, 1708 of start clock CK1.sub.S and also the rising edges 1706, 1710 of start clock CK2.sub.S are designed to be very fast, or "hot", edges. The rationale for the foregoing hot edges is as follows. Optimally, there should be no waiting periods for logic evaluations in the critical logic path. An evaluation period in each pipeline stage 802-808 should be initiated as soon as possible and then a long evaluation period should be provided to complete the requisite logic functionality in that stage. 
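The inverter-chain derivations of FIG. 18 can be mimicked with a discrete-time model (an illustration only; real CMOSFET inverters have analog rise and fall behavior that this ignores). Each inverter contributes one tick of delay plus a logical inversion, so an even-length chain preserves polarity and an odd-length chain reverses it.

```python
def inverter(wave, delay_ticks=1):
    """CMOSFET inverter modeled as logical inversion plus a fixed delay."""
    return [1 - wave[max(t - delay_ticks, 0)] for t in range(len(wave))]


# Hypothetical system clock waveform (1502), sampled once per tick.
system_clock = [0] * 5 + [1] * 5 + [0] * 5 + [1] * 5

ck1_s = list(system_clock)                # start clock: derived directly
ck1_e = inverter(inverter(system_clock))  # end clock: two inverters in series
ck2_s = inverter(system_clock)            # one inverter: opposite polarity
```

After the start-up transient, ck1_e tracks the system clock two ticks later with the same polarity, while ck2_s is its delayed complement, consistent with the polarity rule stated earlier.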
However, sometimes a vector logic signal will reach a latch before the start of the evaluation period corresponding to that latch, and the logic evaluation cannot commence until the rising edge of the corresponding start clock. Hence, it is desirable to provide a fast clock edge for initiating the evaluation period so as to minimize any waiting which might occur in the critical logic path. To achieve fast rising edges 1704-1710, the architecture 1800 of FIG. 18 is manipulated by ratioing CMOSFET inverters. The fast rising edges 1704, 1708 are generated by the start clock CK1.sub.S, which parallels the system clock 1502. Consequently, there is no need to implement inverter ratioing to speed up the rising clock edges. However, the CMOSFET inverter 1802 produces the edges 1706, 1710, which must operate very fast from a logic low to a logic high. In order to effectuate this result, the CMOSFET inverter 1802 is configured so that the ratio of the PMOSFET width to the NMOSFET width is large. Thus, the output (CK2.sub.S) of the CMOSFET inverter 1802 rises from a logic low to a logic high very quickly. Worth noting is that the output of the CMOSFET inverter 1802 falls from a logic high to a logic low very slowly, but this predicament is of no significant consequence because there is sufficient time for precharge. Also worth noting is that the implementation of the ratioed widths in regard to CMOSFET inverter 1802 minimizes the load on the system clock 1502. C. Third Preferred Embodiment In accordance with a third preferred embodiment of the present invention, a pipeline 1900 of FIG. 19 is constructed without latches between the N number of pipeline stages 1902, 1904. This configuration is possible because of the self-timed nature of logic evaluations progressing through the pipeline 1900. Each pipeline stage 1902, 1904 knows when an input is valid and its value when it is valid. As shown in FIG. 19, any number N of pipeline stages 1902, 1904 may be cascaded in series. 
The N pipeline stages 1902, 1904 are alternately clocked with the clocks CK1 and CK2 as described with respect to the first preferred embodiment of the present invention. Further, each of the pipeline stages is preferably individually clocked with a start clock CKx.sub.S and an end clock CKx.sub.E, where x is 1 or 2, in accordance with the second preferred embodiment. Moreover, each of the individual pipeline stages 1902, 1904 can have any number n, m, respectively, of cascaded mousetrap logic gates, as shown in FIG. 19. The N pipeline stages 1902, 1904 of FIG. 19 are constructed so that the vector outputs are guaranteed to eventually transition to an invalid state, i.e., all vector components to a logic low, irrespective of whether any inputs (vectors or vector components) remain at or transition to a logic high while the corresponding pipeline stage is in its precharge period. If logic evaluations progress through the pipeline 1900 without implementation of the preceding constraint, a first pipeline operation initiated at a time t may interfere with a later pipeline operation commenced at a time t+delta t. Worth mentioning is that the AND gates 1060, 1066, 1072 of the pipeline latch 1000 of FIG. 10 eliminated the foregoing constraint. However, also worth noting is that this constraint is usually accommodated by most pipeline stages 1902, 1904 because a sufficient number of the cascaded mousetrap gates will have clock-triggered pull-down transistors for disabling the corresponding ladder logics or some other clock-triggered gating means for disabling the pipeline stage outputs. The foregoing concept was described previously with respect to FIG. 1. In order to force this constraint for the purpose of reliability, only a minor modification need be implemented in the pipeline 1900. A clock-triggered gating means, for example, a pull-down transistor 1906, is disposed at pipeline stages after the first one or more. 
By not implementing the clock-triggered gating means in the early pipeline stages, such as pipeline stage 1 denoted by reference numeral 1902, early data arriving from other circuits (not shown) can ripple through pipeline stage 1 into the initial mousetrap gates of the subsequent pipeline stage 2, denoted by reference numeral 1904, in a manner unimpeded by the precharge clock of stage 1. This allows for pre-evaluation of the data and enhances speed. Finally, the foregoing modification adds little complexity to the pipeline 1900, and furthermore, allows for the maximum time-stealing advantages. It should be noted that this third embodiment does not yet provide for operation below a minimum clock frequency. If this provision is required, the re-introduction of either the first or second pipeline latch embodiments between the pipeline stages, without the output gating mechanism 1010, 1110, will satisfy this requirement while preserving the other features of the third preferred embodiment. The foregoing description of the preferred embodiments of the present invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the present invention to the precise forms disclosed, and obviously many modifications are possible in light of the above teachings. All such modifications are intended to be incorporated herein.
Space science has been witnessing staggering advancement, and in the course of time, space agencies like NASA and ESA have spotted several exoplanets in various star systems. Until now, however, the concept of an exomoon has been confined to papers, even though many scientists believe such entities could exist in the depths of space. Now, a study conducted by a team of astronomers has detected some crucial hints that may indicate the presence of an exomoon. As per experts, this possible space body is not just an exomoon; it could be a close sibling of the Jovian moon Io, touted to be the most volcanically active object in the solar system. Researchers who conducted this study believe that this exomoon could be covered with volcanoes spewing lava. Interestingly, this probable exomoon is most likely orbiting a planet named WASP-49b, a massive giant planet that shares eerie similarities with Jupiter. This massive planet orbits the dwarf star WASP-49 once every 2.8 days. "It would be a dangerous volcanic world with a molten surface of lava, a lunar version of close-in Super-Earths like 55 Cancri-e," said Apurva Oza, an astrophysicist at the Physics Institute of the University of Bern, ScienceAlert reports. Oza also added that discovering more exomoons could help unveil more details about the atmospheres of exoplanets. "While the current wave of research is going towards habitability and biosignatures, our signature is a signature of destruction. The exciting part is that we can monitor these destructive processes in real time, like fireworks," added Oza. A few days ago, a different study conducted by a team of researchers at Caltech discovered an exoplanet which they called 'unlike any other'. This newly discovered exoplanet has three times the mass of Jupiter, and due to its long egg-shaped orbit, it takes somewhere between 45 and 100 years to complete one orbit.
https://www.ibtimes.sg/scientists-discover-possible-exomoon-covered-volcanoes-spewing-lava-32288
Mark Robinson is a senior software consultant having fulfilled roles such as developer, tester, team leader and project leader. He is currently based in the Netherlands and has presented at TEDx Eindhoven on presentation skills. Time Stamped Show Notes (01:00) – Phil introduces Mark Robinson (01:28) – Mark tells us that he has spent the last two decades working in Eindhoven (01:38) – Mark then provides an overview of his programming, testing and consultancy background (01:52) – Mark also mentions his presentation skills coaching as well as the development of his training business (www.markrobinsontraining.com) (02:08) – Mark says that technical people aren’t necessarily great at presenting and getting their message across (02:22) – Unique Career Tip: Mark says, “Family first” (02:41) – Mark believes that it’s important to spend quality time at home with family (02:49) – Time: “If you never go home late you have a job. If you go home late occasionally, you have a career. If you always go home late, you don’t have a life.” (03:03) – Mark talks about how he manages his time (03:17) – Quality: “I really want to do quality work, but I want to put as much effort into my home life as my work life.” (03:27) – Mark talks about the activities he does with his family to strengthen those relationships, as well as his own personal time (04:12) – Mark says that your mental, physical, emotional and spiritual states all have an impact on your career (04:44) – Phil comments that this is often referred to as “Work Life Balance”, although Phil believes that it is really “Work Life Integration” (04:58) – Phil also says that Cost is another factor which he believes is relevant when discussing Quality and Time (05:37) – Mark quotes the phrase, “Whatever you water will grow”, referring to your investment of time in what you do. 
(05:47) – Worst Career Moment: Mark describes how his life as a consultant means that his suitability for roles varies from client to client (06:18) – Mark believes that he is able to influence the direction of a role to make it more enjoyable and of more value to the client (06:28) – Mark talks about how he went from a role he loved to one that he hated (06:52) – Mark found that, although the people were great, the role was dull and very bureaucratic, and that he felt he couldn’t improve the situation (07:18) – Mark says he got into a downward spiral, being more irritable at home and eating badly, and eventually experiencing physical symptoms that resulted in taking a few days off from work (07:54) – Fortunately the project ended and Mark was able to move on to a new role (08:11) – Mark believes that you need to be very mindful of how good a match your current role is to your skillset and interests (08:21) – Mark describes a simple exercise to help you evaluate your current role (09:10) – Mark recommends that if you are not enjoying your role, set yourself a date by which it must have improved, particularly if it is affecting your happiness. You are in the driving seat of your career (09:29) – Mark also says that if your work is at the cost of your health you need to get out urgently (09:40) – Career Highlight / Greatest Success: Mark provides two highlights. The first I.T. 
related and the second about personal achievement (09:59) – Mark describes how he introduced a Wiki to a company and also how it has become indispensable to the company (10:16) – Mark conducted a survey to understand the time saved by introduction of the Wiki which was, on average, four hours per person per week (10:50) – Mark talks about entering the Wiki into a technology competition, which they won based upon the votes of his colleagues (11:10) – Mark then describes how he entered a TEDx pitch event to present at TEDx Eindhoven (12:04) – Mark mentions that he now runs workshops and training about “How to present to keep your audience’s attention” (12:14) – What Excites You About The Future of a Career in I.T.? Mark quotes the expression, “Software is eating the world” (12:32) – Mark talks about software replacing hardware, such as email replacing post (12:45) – Mark also talks about project methodologies, such as Agile, being used in non-I.T. business activities (13:04) – Other examples of how software is changing the world include Uber, Airbnb, Facebook and Google all of which are facilitators of change (13:32) – Mark believes that we are only limited by our imagination. You can do almost anything in the I.T. world (13:58) – The Reveal (14:05) – What attracted you to an I.T. career in the first place? – “I tried an evening programming course and loved it. I then started applying for work at I.T. companies.” (14:57) – What’s the best career advice you’ve ever received? – “It’s important that you recognise that you’re behind the steering wheel of your career,” and from Scott Adams, “You need to build your talent stack” (16:36) – If you were to begin your I.T. career again, right now, what would you do? – “I’d start earlier. Start as early as you can” (17:13) – What career objectives are you focusing on right now? 
– “A new project leader role and developing my own presentation skills business” (17:58) – What’s the number one non-technical skill that has helped you in your career so far? – “Presentation skills, no question” (18:38) – A Parting Piece of Career Advice: Mark says that you must continue developing your skills and building your talent stack (18:51) – Every new skill you learn increases your career opportunities as well as your market value (18:57) – Develop your ability to deliver a message to a group (19:04) – Mark suggests checking out his TEDx talk “How to present to keep your audience’s attention” 3 Key Points: - Ensure that you put your family first - You’re behind the steering wheel of your career - Keep developing your skills and build your talent stack Resources Mentioned:
http://itcareerenergizer.com/e9/
Raise your hand if you collect toys. Any kind of toys – garden toys, craft toys, electronic toys, car toys, sports toys, horse toys, or any one of the endless examples of the shiny things that capture our attention and are fun to collect. Me? One of my weaknesses is kitchen toys.
https://danceswithalimp.com/tag/easter/
Statements by Moussa Faki Mahamat have consistently reflected the position of the AU that member states are entitled to the protection of their sovereignty, territorial integrity and independence. In your edition of 13 January 2021, you published an article by Paulos Tesfagiorgis entitled: “AU Commission chair’s stance on Ethiopian civil war sabotages the union’s peace mission”. This article argues, without any evidence, that the chairperson of the African Union Commission adopted a position contrary to the principles of the AU and those expressed by current AU chairperson, South African President Cyril Ramaphosa, in matters of conflict resolution and in so doing, has purportedly sabotaged the promotion of a solution to the crisis in Tigray. It is not surprising that on the eve of the re-election of the Chairperson of the AU Commission, renewed malicious attacks on his person by certain individuals would come to the fore. I wish to restore the facts that were entirely and wilfully omitted in the article. Let’s start with the argument on which the allegation of the AU’s purported “sabotage” of the promotion of a peaceful solution to the Tigray crisis in Ethiopia, is based. It is based on one sentence in the introductory remarks by Moussa Faki Mahamat, Chairperson of the African Union Commission, at the 38th Extraordinary IGAD Summit on 20 December 2020. The sentence your author chose to highlight reaffirmed the consistent position of the chairperson on the imperative need to respect the constitutional order and commitment to the unity and integrity of the member states of the AU. There is nothing new in this position. In fact, the Constitutive Act of the African Union is clear on the sovereignty and integrity of member states of the AU. In its Article 2.b, the Constitutive Act stipulates, among the top objectives of the AU, the imperative need to “defend the sovereignty, territorial integrity and independence of its Member States”. 
What does the Charter of the United Nations, to which the Constitutive Act expressly refers, state? In its Article 2.7, it states: “Nothing contained in the present Charter shall authorise the United Nations to intervene in matters which are essentially within the domestic jurisdiction of any State, or shall require the Members to submit such matters to settlement under the present Charter.” It thus beggars belief that recalling these principles and pointing out the right and indeed the duty of every African state to maintain its constitutional order, sovereignty, unity and territorial integrity, can be interpreted by some to constitute a dereliction of duty. Now let us turn to the positions of the current AU leadership on the crisis in Tigray. In one of the first public statements on the conflict in Tigray by any outside party, Chairperson Faki’s communiqué of 9 November 2020 stated “the Chairperson reaffirms the African Union’s firm attachment to the Constitutional order, territorial integrity, unity and national sovereignty of the Federal Democratic Republic of Ethiopia”. He also made an appeal “for the immediate cessation of hostilities and calls all parties to respect human rights and ensure the protection of civilians. He further urged the parties to engage in dialogue to seek a peaceful solution in the interests of the country.” This communiqué, as the reader can note, is built on the dual principles of the need to safeguard constitutional order, unity and the territorial integrity of the Federal Democratic Republic of Ethiopia on the one hand, and the need for the immediate cessation of hostilities and the call for a solution to the conflict through dialogue, on the other. Why, then, ignore these two interlinked pillars as evidenced in the official communiqué of the chairperson of the commission? Why state that the chairperson opposed a peaceful solution to the conflict when in fact he was among the first to have publicly called for one? 
It is disingenuous at best. Chairperson Faki’s call for dialogue was echoed by the current Chairman of the AU, President Ramaphosa, in almost identical terms. Thus, in a tweet on 20 November 2020, it is clearly stated: “President Ramaphosa expressed his deep desire that the conflict should be brought to an end through dialogue between the parties.” Why then seek to oppose the two positions of the two leaders when they expressed the same position in the same terms regarding the AU approach to a solution, as they have consistently done on all continental issues? One can also ask what the special envoys, sent by the AU to engage the Ethiopian authorities, did. They made the very same plea as that of Chairperson Faki and President Ramaphosa to the authorities who received them in Addis Ababa on 27 November 2020. Similarly, the 20 December 2020 Extraordinary IGAD Summit, in its final communiqué, “reaffirmed the primacy of Constitutional order, stability and unity of the Federal Democratic Republic of Ethiopia”. Why then detach the position of the chairperson of the AU Commission, which is in perfect harmony with the texts and the practice of the AU, from the identical positions of the rest of the AU leadership? Selectively highlighting, without context and in the absence of any facts or supporting evidence, public utterances of Chairperson Moussa Faki Mahamat, all of which are on public record, is malicious and misleading at best, and defamatory. The clear and complete absence of any link, in fact or form, between the author’s assertion that the chairperson purportedly stood in the way of a solution to the crisis and the suggestion that his position countered that of the chair of the AU, lies starkly bare. While the author is obviously allowed his own opinion, he does not have the luxury of making up his own facts to satisfy his personal feelings, which a cursory Google search would easily have remedied. 
DM Ebba Kalondo is spokesperson for the Chairperson of the African Union Commission.
University Hospitals (UH), one of the largest health systems in Northeast Ohio that cares for 1.3 million patients annually, offers the following statement in reaction to the announcement by the U.S. Department of Health and Human Services to reform the Physician Self-Referral Law (the "Stark Law") and the Federal Anti-Kickback Statute. "At University Hospitals, we have aligned around a new narrative to keep people healthy at home. To facilitate this objective, we are enhancing the way we deliver care across the continuum by aiming to provide the best health outcome and the best patient experience at the lowest annual cost. Modernizing rules that help foster this paradigm shift to a value-based care environment will help our industry better provide the high-value care patients deserve and providers desire. In order to keep pace with our evolving industry, it's important to continuously evaluate legislation and processes that could hinder providing the American people with the best possible care. We support HHS' efforts to identify defects in value and create solutions that fuel innovation in patient care."
1. Field of Invention The present invention relates to a vehicle body reinforcing structure for a vehicle and, more particularly, to a vehicle body reinforcing structure which improves performance in coping with a frontal collision and a broadside collision of a vehicle by integrally connecting a lower portion of a front pillar, a side inner reinforcing member, a floor compliance, and a dash cross member. 2. Description of Related Art In general, among elements that constitute a vehicle body of a vehicle, front pillars are structural bodies that serve to mount and support a front door, are disposed at both left and right sides in a width direction of the vehicle at a front side in a length direction of the vehicle, and serve as a column that integrally connects a front upper portion and a front lower portion of a vehicle body. A side member, which is formed to be extended forward and rearward in the length direction of the vehicle, may be connected to a lower portion of the front pillar, and a rear end portion of a front side member, which constitutes a front vehicle body of the vehicle and is formed to be extended in the length direction of the vehicle, may be connected to the lower portion of the front pillar. In addition, a floor compliance may be disposed at a front side in the length direction of the vehicle at the lower portion of the front pillar, and a dash panel, which separates an engine room and a passenger room, may be disposed at a front side of the front pillar.
In a case in which the vehicle having the aforementioned vehicle body structure collides with a front object or other vehicles (hereinafter, referred to as a collision body) when the vehicle travels, particularly in a case in which the vehicle does not collide with the collision body at a front side of the vehicle, but collides with the collision body in a state in which the vehicle deviates to any one side of left and right sides (this collision is called a small overlap collision), the front side member of the front vehicle body, which serves to absorb collision impact, cannot exhibit its own function, but is bent toward one side, and the collision body deviates from the front side member, and then collides with a fender apron and a wheel. The wheel directly receives impact from the collision body, and strikes the lower portion of the front pillar that is positioned at a rear side of the wheel, such that the front pillar is excessively damaged. In addition, because the front pillar constitutes one side portion at a front side of the interior of the vehicle, excessive damage to the front pillar threatens safety of an occupant in the interior of the vehicle. Therefore, in order to prepare for the case in which the vehicle undergoes a frontal small overlap collision, and the front pillar receives excessive impact energy, it is necessary to increase rigidity of the lower portion of the front pillar, and properly cope with the small overlap collision of the vehicle. The information disclosed in this Background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
The 80W LZP-00MD00 RGBW LED emitter produces a full spectrum of brilliant colors with the highest flux output from a compact 12.0mm x 12.0mm footprint. Its small size and ultra-low thermal resistance enable the miniaturization of lighting fixtures utilizing individual red, green, blue and white LED emitters. The emitter's smart die positioning pre-mixes the colors before they enter the secondary optics, maximizing coupling efficiency. The high-quality materials used in the package are chosen to optimize light output and minimize stresses, which results in exceptional reliability and lumen maintenance. The robust product design thrives in outdoor applications with high ambient temperatures and high humidity.
https://www.sssltd.com/product/led-engin-lzp-00md00-red-green-blue-white-765lm-920lm-230lm-1550lm/
Pectin is a naturally occurring substance in fruit that helps jam set. Some fruits have less pectin than others, and the riper the fruit gets, the less pectin there is. Pectin forms a gel when it is in the right combination with acid and sugar. All fruits contain some pectin. Apples, crab apples, gooseberries and some plums and grapes usually contain enough natural pectin to form a gel. Other fruits, such as strawberries, cherries and blueberries, contain little pectin and must be combined with other fruits high in pectin or with commercial pectin products to obtain gels. Because fully ripened fruit has less pectin, one-fourth of the fruit used in making jellies without added pectin should be underripe. The proper level of acidity is critical to gel formation. If there is too little acid, the gel will never set; if there is too much acid, the gel will lose liquid (weep). For fruits low in acid, add lemon juice or other acid ingredients as directed. This is commercial-grade pectin manufactured from citrus peel. Please note: pure pectin may clump as it is added to your jam, so it is advisable to mix it with a small amount of the sugar from your recipe to make it easier to mix in.
https://southernhighlandshomebrew.com/products/pectin
The yeast S. cerevisiae is an invaluable in vivo test tube for examining human gene functions and disease, including cell signaling, DNA metabolism and mitochondrial function. We have focused on two human genes with broad health significance: the Friedreich's ataxia gene frataxin and the tumor suppressor p53. FRATAXIN AND FRIEDREICH'S ATAXIA--The mitochondrial protein frataxin helps maintain appropriate iron levels in the mitochondria. Friedreich's ataxia (FRDA) is a progressive neurodegenerative disease with early onset that results from a deficiency in frataxin, a protein localized to the mitochondria (mt). Several model systems have been developed in an effort to understand the disease, but none had been developed to investigate the relationship between mitochondrial damage and nuclear integrity. Deletion of the frataxin homolog YFH1 in yeast results in a 10-fold increase in iron within the mitochondria, and this leads to loss of mitochondrial function and the appearance of a petite phenotype in nearly all strains that have been examined. We anticipated that a study of the consequences of frataxin loss could provide an understanding of the relationship between the mitochondria and the nucleus in terms of genome stability. In particular, we were interested in whether defects in the mitochondrial frataxin could lead to mitochondrial damage as well as lesions in mitochondrial and nuclear DNA. Furthermore, we wanted to develop a system that better represents the reduced levels of frataxin in FRDA. Using a highly regulatable system, we have shown that excess iron due to loss of frataxin within the mitochondria can generate ROS that in turn can cause nuclear genome instability, as measured by increases in mutation and recombination rates.
However, it is important to note that in recapitulating FRDA in model systems, the consequences of reduced activity and/or levels of proteins need to be considered, since the disease is associated with a deficiency of the protein frataxin rather than a complete absence. The highly regulatable, GAL1 promoter-based system enabled the expression of variable levels of frataxin. Using this system we have been able to identify several consequences of reduced levels of frataxin, including iron accumulation, mt protein damage, lesions in mtDNA, loss of mtDNA, the appearance of petites that lack mitochondrial DNA and the appearance of nuclear DNA damage in a sensitized rad52 mutant background. Our findings have implications for how mitochondria-associated syndromes could affect nuclear genome stability. In the case of FRDA, our system is expected to prove helpful in the development of therapeutic strategies for FRDA and other neurodegenerative diseases that cause oxidative damage in mitochondria. P53--The p53 gene is central to many stress responses and genome stability in human cells. Nearly 50% of all cancers have an associated p53 mutation, and most of these are missense mutants. We are addressing the sequence-specific transactivation function of p53 to better understand the consequences of tumor mutations and to use human p53 to approach the general issue of how in vivo transactivation specificity and selectivity are achieved. Given the broad spectrum of p53 functions as a transcription factor and the many different p53 alleles with single amino acid changes that are aberrantly expressed in cancer cells, a detailed knowledge of the functional status of p53 mutants could have clinical value, especially for therapies tailored to specific tumors. Although several methods have been attempted to classify p53 mutants based on physical/chemical or immunological/structural parameters, it is not presently possible to predict a priori the behavior of a mutant protein.
p53 responds to a variety of stress signals by controlling, as a homotetramer, the expression of over 50 genes. Different biological responses can be elicited by p53-induced transcriptional networks, including cell cycle arrest, programmed cell death, cellular senescence and differentiation, as well as stimulation of DNA repair. The extent and kinetics of transcriptional modulation of individual genes likely dictate which biological response will be elicited, but the mechanisms regulating such specificity remain to be clarified. p53 target genes contain in their promoters p53 response elements (REs) whose sequences are related to a degenerate 20 bp consensus, and deviations from the consensus sequence in individual REs are common. We have utilized yeast as an in vivo test tube to address the transactivation capacity of p53 and various mutants, as well as p53 family members (p63 and p73). These are expressed with a tightly controlled "rheostatable" promoter so that the level of expression is proportional to the level of inducer (galactose) in the medium. We also have systems with constitutive high expression (i.e., the ADH1 promoter). The ability of p53 and various mutants to act as sequence-specific transactivation factors is determined by their ability to activate REs at promoters placed upstream of various reporters. The REs can be easily changed, so it is possible to determine transactivation capacity from many REs. Therefore, by changing levels of expressed p53 as well as REs, many issues can be addressed, including rules of binding and consequences of mutations on activating various REs. Because of the ease of targeted mutagenesis in yeast, it is now relatively easy to rapidly address the consequences of tumor-associated p53 mutations. On a broader scale, the approach has enabled us to investigate the evolution of transcription networks.
In addition, the system allows us to address other types of sequence-specific transcription factors, such as NF-kappaB and NKX2.5, that act on many genes. We are also addressing the biological and functional impact of ectopic expression of the p53 mutants with altered transactivation capacity in human cell lines, including transformed and non-transformed cells with different p53 status, and evaluating the effects on cell cycle progression, apoptosis, DNA repair, and activation of p53 targets. The differential consequences of the functional p53 mutants with altered transactivation capacity may result in changes in the transactivation patterns that would be advantageous during tumorigenesis and could be selected in particular cellular or genetic environments. For example, mutants might affect specific pathways through altering the transcription network controlled by p53. Along this line, we have demonstrated that a p53 hotspot mutant in UV-induced skin tumors in mice does in fact result in an altered spectrum of target gene transactivation by the mutant p53. Gene expression studies using both real-time PCR and microarray technologies are being used to probe and better understand the global changes in gene expression underlying the complex selection of p53 downstream pathways.
This third volume in Mark Dvoretsky's School of Chess Excellence series is devoted to questions of strategy aimed at improving the reader's positional understanding. The author also examines some positions that lie on the boundary between the middlegame and the endgame. As in the other books in the series, Dvoretsky uses examples from his own games and those of his students, as well as episodes from other players' games. Developing Senior Navy Leaders: Requirements for Flag Officer Expertise Today and in the Future Might U.S. Navy officers be better prepared to become flag officers? This study examines the kinds of expertise required for successful performance in Navy flag billets, and whether current pools of officers possess this experience. The authors also examine Navy trends over the past decade to identify the kinds of expertise likely to become more important for Navy leaders in the future. Building Project-Management Centers of Excellence This is the manifestation of successfully implemented project-management methods. The book and accompanying CD-ROM are tools for adopting project-management standards and procedures company-wide, at every level and in every department. Teach Yourself VISUALLY: PowerPoint 2016 The easy PowerPoint guide designed specifically for visual learners. Are you a visual learner who wants to spend more time working on your presentations than trying to figure out how to create them? Teach Yourself Visually PowerPoint gives you a simple approach to creating winning presentations with the latest version of PowerPoint. - Calculus 1c-6, Examples of Taylor's Formula and Limit Processes - The G Quotient: Why Gay Executives are Excelling as Leaders...
And What Every Manager Needs to Know - Effective Executive's Guide to Microsoft Word 2002 - Powerful PowerPoint for Educators: Using Visual Basic for Applications to Make PowerPoint Interactive Extra resources for Guide to Microsoft® Office OneNote™ 2003 Example text Text highlighted with the Highlight tool will stay highlighted, no matter how you reflow or move the paragraph text. Organizing Text on the Page Let me take some of what I talked about in this chapter and offer a brief outline of how you might use OneNote to take and organize a page of notes effectively. Of course, this is just one approach, but by keeping these key elements in mind, you can take more effective notes and make them easier to use later. To start, give your page a title and choose whether or not you want to view rule lines on your page by choosing your preferred style from Rule Lines in the View menu. For quickly scanning recent notes, however, you'll probably want to organize your sections and folders so that your data is logically grouped and sorted. There are a few different ways that you can organize the notes that you take. You can divide your notes up functionally, meaning that you'll create a folder for each job or role that you have. You can organize your notes chronologically, meaning that you'll create sections and folders based on dates and that you'll probably take notes that rely on time stamps, or you can use a more high-end approach where your job- or subject-based note sections are stored in folders with some sort of time stamp. If you're annotating your paragraphs with a pen by circling or highlighting words using ink, for instance, the drawing that you do is held on its own canvas. This means that if you move the paragraph, the marks that you made will not stay with the paragraph. If you want to highlight text, a better option is to use the Highlight tool on the Formatting toolbar.
This tool is different from the Pen tool, even though the Pen tool has some Highlighters available in its list. Text highlighted with the Highlight tool will stay highlighted, no matter how you reflow or move the paragraph text.
http://iceeonline.org/lib/guide-to-microsoft-office-one-note-2003
The invention relates to the technical field of facial masks and particularly to a temperature-sensitive ink and a facial mask. The temperature-sensitive ink contains a color indicator, a temperature stabilizer and a skin-beautifying nutrient solution, wherein the temperature stabilizer and the skin-beautifying nutrient solution can be fused with the color indicator without chemical reaction. The color indicator contains a high-concentration natural plant dye extracting solution and a fading agent, which are independently packaged. The temperature stabilizer contains a base solution and granular powder; the base solution is independently packaged, and the granular powder is dissolved in the base solution and is used for releasing energy. After the high-concentration natural plant dye extracting solution is mixed with the fading agent, the fading agent fades to colorless, and the fading time is determined by the volume proportion of the mixture. Because the main component agents of the color indicator and the temperature stabilizer are independently packaged, the corresponding component agents can be mixed in proportion according to actual demands in use, so that the temperature comfort and the indication of the use time are adjusted and controlled.
Git bisect and PowerShell Inevitably when writing code there will be bugs, it's just part of being human and writing code, and especially true when the code gets complex. So when a bug occurs we need to track it down and fix it, which can be easy, but we often want to track down where the bug was introduced so we can figure out what caused it (especially for the more difficult to pinpoint bugs). As we're all using source control this becomes easier with git and its extremely powerful bisect tool. What is git bisect? To quote the official documentation "git-bisect - Use binary search to find the commit that introduced a bug", which to the average person doesn't mean a lot. The simple description is it takes two points you specify and splits them down the middle, picks a side and splits it again, doing this until it finds where the bad commit is. The way it figures out which side to pick can be done in two ways: either you manually tell it "good" or "bad", or it can automatically figure that out for you if you give it something to run. It's this last part we'll be looking at in more detail here, but we'll look at the manual approach first. git bisect in action First we'll need a git repository to work with, here's one I prepared earlier that includes a function that has been changed over the course of a number of commits. This repository has a bit of history to it; looking at the log with git log --oneline we can see that it started out quite simple and extra functionality has been added over time. We've also got some less than useful commit messages that we should probably speak to the dev about and try to get them writing better messages in future, but that's a different problem. The function in example.ps1 is pretty simple and just does some simple data gathering, either from a local computer or a remote one, nothing too special, and it's been working for a while. We've been informed by a colleague that it's stopped querying remote machines at some point.
Looking at the current version of the script it's pretty obvious what the problem is: the parameter being passed in is $ServerName but the Invoke-Command is trying to connect to $ComputerName. That'll not work out well for us, but it's an easy fix to revert everything to ComputerName since that's the convention in PowerShell (we can add an alias for ServerName if necessary). But now we need to know when this bug was introduced; in our case it's an easy fix, but in many it won't be so obvious, and it's useful to know what else might be impacted by this change if it was made at the same time as others. This is where git bisect comes in. Let's look at the manual way first, we'll need to start a bisect session: Then we need to tell it which commit is bad and which is good, we'll start with bad because we know what that is: In this case we can also use git bisect bad HEAD to point at the current commit or give it part of the sha for a specific commit if we've already fixed the issue on this branch. Next up we need to tell git which commit we know is good, this can be either a specific commit sha or it can be relative to the current HEAD position: In our case we know that the first commit that added remote computer support was good, so we'll use the first few characters of the sha for that. You can provide as many characters as you want from the sha of that commit, but I find 4 is usually enough on most code bases; if you've got one with a lot of history then you might need to use 5 or more. As we can see from the output, git has already picked a commit somewhere close to the middle of the distance between the commits we've specified. From here we can test the code and see if we're still seeing the error. As we know that we are, we can tell git that this is a bad commit and to look for another commit to try somewhere between here and the known good commit. Now git has picked another commit, but looking at the function we can see it's also bad in this one.
Git tries to predict how many more commits it's got to check before it finds the issue, and given the short amount of history we've got available it predicts it's got 0 steps left to take. So we'll tell git that this commit is also bad: Based on the fact that the next commit it could check is our known good commit, git has decided that this must be the first bad commit and where our problem first occurred. So we can see what changed and who did it, in this case some person called Chris Gardner is going to get a nice bug to fix since he introduced it. When we're done with this bisect we simply run git bisect reset to get out of it and back to the branch we started on. In this case it was a pretty short history to look through and the code base wasn't very complex, but if we imagine scaling this up to an older code base with 10s or 100s of commits (or 1000s) and a lot of files then it can become quite time consuming to try to narrow down where the bad commit is. Luckily the folks who wrote bisect thought of this ahead of time and gave us a way to automate this. git bisect run From the documentation we can see that git bisect run expects a command to run and its arguments, and if it returns an exit code between 1 and 127 (but not 125) then it'll assume the result was equivalent to git bisect bad and try moving on to the next commit. So how do we leverage this with PowerShell? We'll start with the same example repository and try running our simple test file against it. This test file can be as simple or as complex as we need it to be, as long as it fails in at least one of the ways that we see with the bug; ideally it'll become part of our testing suite for this code (or start it if we don't have any yet) so that we can prevent it occurring again (or catch it before it gets to production). So with our example.tests.ps1 file we can start the git bisect again.
Note before you run this you'll want to do git reset --soft HEAD~1 to ensure the test files aren't part of the history of the repository and therefore disappear as soon as git bisect checks out an earlier commit. Now we see a problem here: while we'd like to just run our test file and let Pester do what it should, we can see that it's not actually going to work that way. Because git isn't a PowerShell tool it doesn't know how to handle .ps1 files, even if we run it from within a PowerShell prompt. So the workaround is, of course, to just run PowerShell.exe and pass it the file we want to run. So now we can get our code running and the tests correctly failing, but the commit it's flagging as the bad commit is actually in the wrong direction. What's causing this? It's due to the fact we're running the test file directly and it's not giving back an exit code, so git assumes it's a 0 and the commit it's checked is good. The solution to this problem is one of two things: we can write a quick one line script to call Invoke-Pester -Script .\Example.tests.ps1 -EnableExit (as seen in runtests.ps1) or we can just use PowerShell.exe -command Invoke-Pester -Script .\Example.tests.ps1 -EnableExit; either will work just as well. EnableExit on Invoke-Pester will cause it to return an exit code equal to the number of failing tests; hopefully this'll be a low number (less than 127), but if it isn't you can instead use the script approach and capture the output of Invoke-Pester (using -PassThru) and return your own error code if it fails. This approach is very useful when you've got a large test suite that you need to run against the code as part of a bisect to ensure no other bugs have appeared (that you don't already know about); for most cases though you want to craft a small number of tests to check just this bug and use those rather than whatever full suite you have. And now we can see it actually finds the correct commit that introduced the issue.
In this case it took about as long either way, but that was due to the simplicity of the code and the small number of commits; on larger codebases or more complex code the automated way will almost always be better (and helps your overall test coverage). Conclusion So we've seen here how to use git bisect to find the exact commit a bug appeared in, using our knowledge of Pester if necessary to automate it, and from there we can take whatever action we need to beyond just fixing the bug. If this was a larger commit with a few files changed we could see if the same or similar issues had been introduced in those but not detected in production use yet. The documentation for git bisect is very good (as is most of the git documentation) and includes a few other things you can do with it. Hopefully this will help with any future debugging you need to do; narrowing down the root cause of a bug can help to fix it where just studying the code in its current form might not prove as useful, especially if there has been refactoring going on at the same time.
https://chrislgardner.dev/powershell/2019/07/17/git-bisect-and-powershell.html
Cadillac is moving into electric vehicle territory. Cadillac, owned by GM, is planning to sell electric vehicles and SUVs in order to retain its shrinking market share. Learn more at Business Insider. Collection #1 Security Breach Troy Hunt, a security researcher, reported Collection #1, a large collection (87GB) of email and password combinations found by multiple people on MEGA, a popular cloud storage service. The information in Collection #1 comes from several sources and security breaches. You can check if your email(s) or password(s) were found in Collection #1 by going to HaveIBeenPwned.com. If your email is found in the information of one of the breaches, the page will tell you which one. You may need to change your passwords if they are found to have been breached. I recommend LastPass to keep track of your passwords for websites and services. It can also generate secure passwords and automatically fill login forms.
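For passwords (as opposed to email addresses), Have I Been Pwned also exposes a Pwned Passwords range API that lets you check a password without sending it anywhere: only the first five characters of its SHA-1 hash leave your machine, and you match the rest locally. That k-anonymity detail comes from HIBP's own API documentation, not this article; the sketch below uses the deliberately weak example value 'password' and leaves the actual network call commented out:

```shell
# k-anonymity lookup sketch against the Pwned Passwords range API.
pw='password'    # example value only; avoid putting real secrets on a command line
digest=$(printf %s "$pw" | sha1sum | awk '{print toupper($1)}')
prefix=$(printf %s "$digest" | cut -c1-5)
suffix=$(printf %s "$digest" | cut -c6-)
echo "GET https://api.pwnedpasswords.com/range/$prefix"
# The response lists HASH-SUFFIX:COUNT lines; any line matching $suffix
# means the password has appeared in a breach:
# curl -s "https://api.pwnedpasswords.com/range/$prefix" | grep "$suffix"
```

Because only the 5-character prefix is transmitted, the service never learns which of the hundreds of hashes sharing that prefix you were asking about.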
https://seriousabouttech.com/electric-cadillac-collection-1-data-breach-episode-15/
Your Education. Our Mission. International collaborations play a significant role in the life of the Faculty of Medicine, both in education and research. Please check the list of exchange programs and scholarships you might be able to apply for by clicking here. Albert Szent-Györgyi Medical School Financial Support Program Financial matters play a big role in one's decisions and the choice of educational institutions for further Higher Education studies. The University of Szeged offers a range of different scholarship programmes for international students to help cover tuition fees or living costs. SZTE scholarships Governmental scholarship opportunities EU scholarship opportunities Once you have won the scholarship, you are required to submit a request to the head of the Academic Board asking for an individual study plan for the semester you wish to spend abroad as an Erasmus student. The following forms have to be submitted at the Foreign Students' Secretariat 30 days prior to the first day of your mobility period: Request to Participate in an Exchange Program Abroad Declaration of the Head of the Department The rules and regulations of the Faculty of Medicine about partaking in partial studies abroad can be found in the Faculty Academic Regulations, 10.2. Available here.
http://www.med.u-szeged.hu/fs/exchange-programs/exchange-programs
Kiev is not ready for a similar response to Russia's partial lifting of sanctions: expert The Western influence is the second cause of Kiev's incapability to take reciprocal measures in response to the partial lifting of restrictions, the analyst added KIEV, October 15. /TASS/. Kiev will not be able to give a similar response to Moscow's decision to lift restrictions on supplies from three Ukrainian enterprises due to possible pressure from right-wing radical groups, as well as from the West, head of the Ukrainian Institute of Politics, political scientist Ruslan Bortnik told TASS. "We should not expect a public reaction from the Ukrainian authorities in this regard, some kind of reciprocal measures. So far, the Ukrainian side is not ready to improve even economic relations with the Russian Federation, to revise the restrictions that were previously imposed, due to several factors. First of all, due to pressure from right-wing radical groups that will use any, even the smallest concession towards Russia as a betrayal, as an instrument of struggle against the ruling party in elections and on the streets," Bortnik said. The Western influence is the second cause of Kiev's incapability to take reciprocal measures in response to the partial lifting of restrictions, the analyst added. "Now it is impossible due to the influence of Western partners - for them the restoration of economic cooperation and, accordingly, Russia's influence on the Ukrainian economy is the way to losing control over the Ukrainian economy," the expert said. Bortnik noted that the merit of the removal of restrictions belongs to the Opposition Platform - For Life party, which shows its effectiveness in communicating with Russia and is trying to score points in preparation for the local elections on October 25. However, the political scientist admitted that negotiations on lifting restrictions are not only an election step.
"To some extent Opposition Platform - For Life party is meeting the social demand for direct negotiations between Ukraine and Russia, which are supported by 50-70% of Ukrainians, depending on a social survey," Bortnik recalled. In his opinion, in this way "Russia can establish a direct dialogue with certain regional financial and industrial groups" of Ukraine. Ukrainian companies that get access to the Russian market will receive serious competitive advantages. However, he admitted, they may also find themselves under pressure from Ukrainian radical groups. On October 6, Prime Minister Mikhail Mishustin promised to consider the request of the head of the Political Council of the Opposition Platform - For Life party, Viktor Medvedchuk, on the possible partial lifting of the counter-sanctions Russia had imposed on a number of industrial enterprises in Ukraine. On Wednesday, President Vladimir Putin supported Mishustin's proposal to allow the supply of products to the country from three Ukrainian enterprises, which were subject to retaliatory restrictive measures. The Russian government allowed the supply of equipment and products to the Russian market from the Rubezhansk Cardboard and Container Plant of the Lugansk Region, the Bratslav company and the Barsky Machine-Building Plant of the Vinnitsa Region.
---
abstract: 'It is shown that the trace of $3$ dimensional Brownian motion contains arithmetic progressions of length $5$ and no arithmetic progressions of length $6$ a.s.'
author:
- Itai Benjamini
- Gady Kozma
date: '13.10.18 '
title: Arithmetic progressions in the trace of Brownian motion in space
---

Introduction
============

In this note we comment that a.s. the trace of a $3$ dimensional Brownian motion contains arithmetic progressions of length $5$, and no arithmetic progressions of length $6$. Similarly, the maximal length of an arithmetic progression in the trace of Brownian motion in $\mathbb{R}^{d}$ is $3$ for $d = 4,5$ and $2$ above that (we will only prove the three dimensional result here). On the other hand, the trace of a $2$ dimensional Brownian motion a.s. contains arbitrarily long arithmetic progressions starting at the origin and having a fixed difference. Consider an $n$-step simple random walk on the $d$ dimensional square grid $\mathbb{Z}^{d}$ and look at the number of arithmetic progressions of length $3$ in its range; what can be said about the distribution and the large deviations of this count? [**Question:**]{} In the large deviations regime, is there a deterministic limiting shape?

Proofs
======

We start with the two dimensional case. The trace of $2$ dimensional Brownian motion a.s. contains arbitrarily long arithmetic progressions starting at the origin and having a fixed difference. Given a set $S$ of Hausdorff dimension $1$ in the Euclidean plane, $2$ dimensional Brownian motion $W$ running for unit time will intersect $S$ in a set of Hausdorff dimension $1$ as well, with positive probability, see e.g. [@P], [@MP]. Examine the unit circle. With positive probability Brownian motion run for unit time intersects the unit circle in a set $S_1$ of dimension $1$. To each point in $S_1$ add it to itself to get $S_2$, again a set of dimension $1$. Let $\tau_1\ge 1$ be the first time (after 1) our Brownian motion hits the circle with radius $3/2$. 
Examine it now in the time interval $[\tau_1,\tau_1+1]$. By the Harnack principle [@MP Theorem 3.42], the probability that Brownian motion started from $W(\tau_1)$ intersects $S_2$ in a set of dimension 1 is comparable to that of Brownian motion starting from 0, which, as already stated, is bounded away from 0. Hence $W[\tau_1,\tau_1+1]$ will again intersect $S_2$ in a set of dimension $1$. To each point in the intersection of the form $2x, x\in S_1$ add $x$ and call the resulting set $S_3$, again of dimension 1. Continue in the same manner to get arbitrarily long arithmetic progressions. Scale invariance implies that we get arbitrarily long arithmetic progressions with probability $1$. The argument above shows that with positive probability the trace of a unit time two dimensional Brownian motion admits uncountably many arithmetic progressions of arbitrary length and difference $1$. We now prove the high dimensional result. \[lem:no6\] A $3$-dimensional Brownian motion contains no arithmetic progressions of length 6, a.s. By scaling invariance we may restrict our attention to arithmetic progressions contained in the unit ball $B$, and to spacings at least $\delta$ for some $\delta>0$. Denote the Brownian motion by $W$. If it contains an arithmetic progression then for every $\varepsilon>0$ one may find $x_{1},\dotsc,x_{6}\in B\cap\frac13 \varepsilon\mathbb{Z}^{d}$ such that $W\cap B(x_{i},\varepsilon)\ne\emptyset$ and such that the $x_{i}$ form an $\varepsilon$-approximate arithmetic progression, by which we mean that $|x_{i-1}+x_{i+1}-2x_{i}|\le4\varepsilon$ for $i=2,3,4,5$. Further, the $x_{i}$ are $\delta$-separated in the sense that $|x_{i}-x_{i+1}|\ge\delta-2\varepsilon$ for $i=1,2,3,4,5$. 
Denote the set of such $x_{i}$ by $\mathcal{X}$ and define $$\begin{gathered} H_x=\mathbbm{1}\{W\cap B(x_{i},\varepsilon)\ne\emptyset\;\forall i\in\{1,\dotsc,6\}\}\qquad x=(x_{1},\dotsc,x_{6})\\ X=X(\varepsilon)=\sum_{x\in\mathcal{X}}\mathbbm{1}\{H_x\}.\end{gathered}$$ We now claim that $$\mathbb{E}(X)\le C\qquad\mathbb{E}(X^{2})\ge c|\log\varepsilon|\label{eq:moments}$$ where the constants $c$ and $C$ may depend on $\delta$. Both calculations are standard: the first (that of $\mathbb{E}(X)$) is an immediate corollary of the fact that 3$d$ Brownian motion starting from 0 hits the ball $B(v,\eps)$ with probability $\approx\eps/(|v|+\eps)$, see e.g. [@MP corollary 3.19]. Here and below, $\approx$ means that the ratio of the two quantities is bounded above and below by constants that depend only on $\delta$. This gives $$\label{eq:miloyodea} \PP(H_x)\approx \frac{\eps^6}{d(0,x)+\eps}$$ where $d(0,x)\coloneqq\min\{d(0,x_i):i=1,\dotsc,6\}$. Denote by $\mathcal{X}_n$ the set of $x\in\mathcal{X}$ such that $\eps 2^n<d(0,x)\le\eps 2^{n+1}$, with $\mathcal{X}_0$ having the lower bound removed. We can now write $$\mathbb{E}(X)=\sum_{n=0}^{\log 1/\varepsilon}\sum_{x\in\mathcal{X}_n}\mathbb{P}(H_x) \stackrel{\textrm{\eqref{eq:miloyodea}}}{\le} C\sum_{n=0}^{\log 1/\varepsilon}2^{3n}\cdot 2^{-n}\cdot \eps^{-3}\cdot \eps^5\le C$$ where $2^{3n}$ is the number of possibilities for the $x_i$ closest to 0 for $x\in\mathcal{X}_n$ (this, and all other quantities in this explanation are up to constants); where $2^{-n}$ is $\PP(W\cap B(x_i,\eps)\ne\emptyset)$; where $\eps^{-3}$ is the number of possibilities for $x_2-x_1$ (we use here that the determination of $x_1$ and $x_2-x_1$ leaves only a constant number of possibilities for $x_3,\dotsc,x_6$); and where $\eps^5$ is the probability to hit all of $B(x_1,\eps),\dotsc,B(x_6,\eps)$ except $B(x_i,\eps)$ given that you have hit $B(x_i,\eps)$. 
The calculation of $\mathbb{E}(X^{2})$ is similar: we write $\mathbb{E}(X^{2})=\sum_{x,y\in\mathcal{X}}\mathbb{P}(H_x\cap H_y)$ and estimate the probability directly. We get about constant contribution from each set $\{x,y:|x_i-y_i|\approx 2^{-n}\;\forall i\}$ for every $n$, hence the $|\log\varepsilon|$ term. We now make a somewhat stronger claim on the interaction between different $x$. We claim that there exists $\lambda>0$ such that, for any $x$, $$\mathbb{P}(H_x\cap\{X\le\lambda|\log\varepsilon|\})\le\frac{C}{|\log\varepsilon|}\mathbb{P}(H_x).\label{eq:conditioned}$$ To see this, fix $x$ and let, for each scale $k\in\{1,\dotsc,\lfloor|\log\varepsilon|\rfloor\}$, $$\begin{gathered} X_{k}\coloneqq\sum_{y\in\cY_k}\mathbbm{1}\{H_y\}\\ \cY_k\coloneqq\{y\in\mathcal{X}:2^{k}\varepsilon\le|y_{i}-x_{i}|<2^{k+1}\varepsilon\quad\forall i\in\{1,\dotsc,6\}\}\end{gathered}$$ ($X_k$ depends on $x$, of course, but we omit this dependency from the notation). A calculation identical to the above shows that $\mathbb{E}(X_{k}\,|\,H_{x})\ge c$ and $\mathbb{E}(X_{k}^{2}\,|\,H_{x})\le C$ so $$\label{eq:shalosh vakhetzi} \mathbb{P}(X_{k}>0\,|\,H_{x})\ge c.$$ Further, the events $X_{k}>0$ (still conditioned on $H_{x}$) are approximately independent in the following sense: \[lem:pffff\] For each $x\in\cX$ and $k,l\in \{1,\dotsc,\lfloor\log(\delta/4 \eps)\rfloor\}$, $$\operatorname{cov}(X_{k}>0,X_{l}>0\,|\,H_{x})\le2e^{-c|k-l|}.\label{eq:cov}$$ Assume for concreteness that $k<l$ and that $l-k$ is sufficiently large (otherwise the claim holds trivially, if only the $c$ in the exponent is taken sufficiently small). Define two radii $r<s$ between $2^k\eps$ and $2^l\eps$ as follows: $$r\coloneqq 2^{(2/3)k+(1/3)l}\eps\qquad s\coloneqq 2^{(1/3)k+(2/3)l}\eps.$$ Next, define a sequence of stopping times: the even ones for exiting balls of radius $s$ and the odd ones for entering balls of radius $r$. 
In a formula, let $\tau_0=0$ and $$\begin{aligned} \tau_{2m+1}&\coloneqq\inf\Big\{t\ge \tau_{2m}:W(t)\in\bigcup_{i=1}^6 B(x_i,r)\Big\}\\ \tau_{2m}&\coloneqq\inf\Big\{t\ge \tau_{2m-1}:W(t)\not\in\bigcup_{i=1}^6 B(x_i,s)\Big\}\end{aligned}$$ Let $M$ be the first number such that $\tau_{2M+1}=\infty$. With probability $1$, $M$ is finite. We now claim that $$\label{eq:Mis6} \PP(H_x\cap\{M\ge 6+\lambda\})\le \frac{\eps^6}{d(0,x)+\eps}\big(Cr/s\big)^\lambda\qquad\forall\lambda=1,2,\dotsc$$ To see this, assume $d(0,x)>c$ for simplicity. Then every visit to $B(x_i,\eps)$ from $\partial B(x_i,r)$ “costs” $\eps/r$ in the probability, while every visit of $B(x_j,r)$ from $\partial B(x_i,s)$ costs $r/s$ if $i=j$ and $r$ if $i\ne j$. Since $H_x$ requires a visit to all of $x_1,\dotsc,x_6$ we have to pay the costs $\eps/r$ and $r$ at least 6 times, and the costs of $r/s$ (or $r$, which is smaller) at least $\lambda$ times. Counting over the order in which these visits happen adds no more than a $C^\lambda$. This shows (\[eq:Mis6\]) in the case that $d(0,x)>c$. The other case is identical and we skip the details. Since $\PP(H_x)\approx \eps^6/(d(0,x)+\eps)$ (recall (\[eq:miloyodea\])) this shows that the case $M>6$ is irrelevant. Indeed, if we define $\cK=\{X_k>0\}\cap\{M=6\}$ and $\cL=\{X_l>0\}\cap\{M=6\}$ then $$\label{eq:khamesh vakhetzi} |\operatorname{cov}(\cK,\cL|H_x)-\operatorname{cov}(X_k>0,X_l>0|H_x)|\le \frac{Cr}{s}$$ (for $l-k$ sufficiently large) and we may concentrate on $\operatorname{cov}(\cK,\cL|H_x)$. Let $\mu$ be the measure on $\RR^{36}$ giving the distribution of $W(\tau_1),\dotsc,W(\tau_{12})$ (we will not distinguish between $(\RR^{3})^{12}$ and $\RR^{36}$). For an event $E$ we will use $\PP(E|W=u)$ as a short for $$\PP(E\,|\,W(\tau_i)=u_i\;\forall i\in\{1,\dotsc,12\},M=6)$$ (which is of course a $\mu$-almost everywhere defined function). We next observe that for $E$ equal to any of $\cL$, $H_x$ and $\cK\cap H_x$ the function $\PP(E\,|\,W=u)$ is nearly constant, i.e. 
$$\label{eq:constant} \frac{\operatorname{ess\, max}\PP(E\,|\,W=u)}{\operatorname{ess\, min}\PP(E\,|\,W=u)} \le 1+2 e^{-c|k-l|}$$ This is because $\cK$ and $H_x$ depend only on the behaviour inside the balls $B(x_i,2^k\eps)$ while $u_{2m+1}$ are on $\partial B(x_i,r)$. This follows from the well-known fact that the distribution of $W$ on the first hitting times (after $\tau_{2m+1}$) of $B(x_i,2^k\eps)$ is independent of $u_{2m+1}$, up to an error of $(2^k\eps)/r$; and similarly, the conditioning on exiting $B(x_i,s)$ at $u_{2m+2}$ only adds an error of $(2^k\eps)/s$. For the convenience of the reader we recall briefly how this is shown: consider Brownian motion started from a $y_1\in\partial B(x_i,r)$ and let $y_2\in \partial B(x_i ,2^{k+1}\eps)$ be the first point visited in $B(x_i,2 ^{k+1}\eps)$, let $y_3$ be the last, and let $y_4$ be the first point visited in $B(x_i,s)$. Then the joint distribution of $y_2$, $y_3$ and $y_4$ can be written easily using the Poisson kernel (see [@MP Theorem 3.44] for its formula). For example, the density of $y_2$ is $(r-\eps 2^{k+1})|y_2-y_1|^{-3}$ (the density in $\RR^3$) from which we need to subtract the density after exiting $B(x_i,s)$, which is given by an integral of similar expressions. The exact form does not matter, only the fact that the $y_1$ dependency comes from the term $|y_2-y_1|$, which is nearly constant in $y_1$ in the sense above. The same holds for the density of the transition from $y_3$ to $y_4$, and the density between $y_2$ and $y_3$ is of course completely independent of $y_1$ and $y_4$. Conditioning on exiting in a given $y_4$ is merely restricting to a subspace and normalising, conserving the near independence. This justifies (\[eq:constant\]) in this case. We have ignored here the case that $0\in B(x_i,r)$ for some $i$, in which case $u_1$ is inside $B(x_i,r)$ rather than on its boundary, but in this case $u_1$ is constant and certainly does not affect anything. This shows (\[eq:constant\]) for $E=H_x$ and $\cK\cap H_x$. 
The argument for the other case is similar, because $\cL$ depends only on what happens outside $B(x_i,2^l\eps)$ and $u_{2m}$ is on $\partial B(x_i,s)$ (this time without exceptions). Hence we have only an error of $s/(2^l\eps)$. All these errors are exponential in $l-k$. This shows (\[eq:constant\]) in all 3 cases. In particular we get, for all three cases for which (\[eq:constant\]) holds, that $$\label{eq:constant2} \PP(E\,|\,W=u)=\PP(E)(1+O(e^{-c|k-l|}))$$ which holds for $\mu$-almost every $u$. The last point to note is that conditioning on $W=u$ makes $\cL$ independent of $H_x$ and of $\cK$, as the first depends only on what happens in the odd time intervals, i.e. between $\tau_{2m}$ and $\tau_{2m+1}$, $m=0,\dotsc,6$, while the other two depend on what happens in the even time intervals, between $\tau_{2m-1}$ and $\tau_{2m}$, $m=1,\dotsc,6$. Hence $$\begin{aligned} \PP(\cL\cap H_x)&=\int \PP(\cL\cap H_x\,|\,W=u)\,d\mu(u)\\ \textrm{by independence}\qquad &=\int\PP(\cL\,|\,W=u)\PP(H_x\,|\,W=u)\,d\mu(u)\\ \textrm{by \eqref{eq:constant2}}\qquad &= \PP(\cL)\PP(H_x)(1+O(e^{-c|k-l|})).\end{aligned}$$ A similar argument gives $$\PP(\cL\cap\cK\cap H_x)=\PP(\cL)\PP(\cK\cap H_x)(1+O(e^{-c|k-l|})).$$ Together these two inequalities bound $\operatorname{cov}(\cK,\cL\,|\,H_x)$. With (\[eq:khamesh vakhetzi\]) the lemma is proved. With (\[eq:cov\]) established we can easily see (\[eq:conditioned\]), by using Chebyshev’s inequality for the variable $\#\{k:X_{k}>0\}$, with (\[eq:shalosh vakhetzi\]) giving the first moment and (\[eq:cov\]) the covariance. (In fact, it is not difficult to get a much better estimate than $C/|\log\varepsilon|$; an $\varepsilon^{c}$ is also possible, but we will not need it.) Summing (\[eq:conditioned\]) over all $x$ and using (\[eq:moments\]) gives $$\mathbb{P}(X\in(0,\lambda|\log\varepsilon|))\le\frac{C}{|\log\varepsilon|}$$ This, together with $\mathbb{E}(X)\le C$, shows that $\mathbb{P}(X>0)\le C/|\log\varepsilon|$, proving lemma \[lem:no6\]. A $3$-dimensional Brownian motion contains arithmetic progressions of length 5, a.s. 
Let $\varepsilon$ and $X=X(\varepsilon)$ be as in the proof of the previous lemma (except we now fix $\delta$ to be, say, $\frac{1}{10}$). It is straightforward to calculate $$\mathbb{E}(X(\varepsilon))\ge\frac{c}{\varepsilon}\qquad\mathbb{E}(X(\varepsilon)^{2})\le\frac{C}{\varepsilon^{2}}$$ which show that $\mathbb{P}(X(\varepsilon)>0)\ge c$. A simple calculation shows that for some $\lambda>0$ we have that $X(\lambda\varepsilon)>0\implies X(\varepsilon)>0$. Hence $\{X(\lambda^{k})>0\}$ is a sequence of decreasing events with probabilities bounded below. This implies that $$\mathbb{P}\Big(\bigcap_{k}\{X(\lambda^{k})>0\}\Big)>0.$$ The event of the intersection can be described in words as follows: for every $k$ there exist $x_{1}^{(k)},\dotsc,x_{5}^{(k)}\in B$ which are $\frac{1}{10}$-separated, form a $\lambda^{k}$-approximate arithmetic progression, and satisfy $W\cap B(x_{i}^{(k)},\lambda^{k})\ne\emptyset$ for $i\in\{1,\dotsc,5\}$. Taking a subsequential limit we get $x_{i}^{(k_{n})}\to x_{i}$ and these $x_{i}$ will be $\frac{1}{10}$-separated, will form an arithmetic progression, and will be on the path of $W$. So we conclude $$\mathbb{P}(W\text{ contains a 5-term arithmetic progression in }B)>0.$$ Scaling invariance now shows that the probability is in fact $1$. [XYZ]{} P. Mörters and Y. Peres, Brownian motion. Cambridge Series in Statistical and Probabilistic Mathematics, 30. Cambridge University Press, Cambridge, 2010. xii+403 pp. Y. Peres, Intersection-equivalence of Brownian paths and certain branching processes. Comm. Math. Phys. 177 (1996), 417–434.
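The question posed in the introduction, about the count of 3-term arithmetic progressions in the range of a simple random walk, invites numerical experimentation. The following Python sketch (not part of the paper; the helper names `srw_range` and `count_3aps` are ours) samples the range of an $n$-step walk on $\mathbb{Z}^d$ and counts the 3-term progressions it contains, using the observation that a pair $\{a,c\}$ extends to a progression exactly when its midpoint is a lattice point of the range:

```python
import random
from itertools import combinations

def srw_range(n, d=2, seed=0):
    """Set of lattice points visited by an n-step simple random walk on Z^d."""
    rng = random.Random(seed)
    pos = (0,) * d
    visited = {pos}
    for _ in range(n):
        axis = rng.randrange(d)           # pick a coordinate direction
        step = rng.choice((-1, 1))        # move +1 or -1 along it
        pos = tuple(c + (step if i == axis else 0) for i, c in enumerate(pos))
        visited.add(pos)
    return visited

def count_3aps(points):
    """Count 3-term arithmetic progressions {x, x+v, x+2v}, v != 0, in a finite set.

    A pair (a, c), a != c, extends to a progression iff the midpoint (a+c)/2
    is itself a lattice point of the set, so one pass over all pairs suffices.
    """
    pts = set(points)
    count = 0
    for a, c in combinations(pts, 2):
        s = tuple(ai + ci for ai, ci in zip(a, c))
        if all(coord % 2 == 0 for coord in s) and tuple(coord // 2 for coord in s) in pts:
            count += 1
    return count

# Empirical distribution over independent samples, e.g.:
# counts = [count_3aps(srw_range(1000, d=2, seed=s)) for s in range(100)]
```

The pairwise scan is quadratic in the range size, which is adequate for walks of a few thousand steps; anything sharper (hashing differences, FFT on the indicator) is only needed for large-scale experiments.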
Dive Brief: Applications from international students are up at 43% of responding colleges for the next academic year, nearly double the share saying so a year ago. International students could still face hurdles reaching the U.S., including delays in visa processing and uncertain vaccination protocols at colleges. Dive Insight: Colleges have provided flexibility and a range of support to international students during the pandemic, IIE found. Eight in 10 respondents said they enrolled international students in face-to-face and online classes this spring, and the vast majority of schools said these students took at least one virtual course. Respondents said they have adapted advising and course schedules to accommodate international students. Meanwhile, the share of colleges offering this cohort emergency funding increased from spring 2020, as did their communications to these students on health and wellbeing. A larger share of respondents also said they were allowing electronic signatures on documentation for students' visas. Looking ahead to fall, no respondents said they planned to offer only online instruction, and 90% said they'd offer international students an in-person study option in the U.S. This is critical for recruiting international students, as current guidance requires new students to be in a program with an in-person learning requirement in order to enter the U.S. Of the 90% of respondents who promised a face-to-face option for international students, more than half were schools planning a mix of in-person and online classes, while a third plan to provide only in-person courses for them. Federal agencies have loosened and clarified restrictions on international students, changes that could give colleges more confidence that these students will be able to enroll this fall. Higher education groups have also asked the agencies to address delays in visa processing, including by streamlining the process. 
Vaccination protocols could be a hurdle, given that students outside the country may have received shots not approved for use in the U.S. Nearly two-thirds of respondents said they will offer vaccines on campus, and just under half said they don't plan to require vaccinations before a student gets to campus. Schools may opt to consider students with vaccines approved by the World Health Organization as vaccinated, The New York Times reported. Scientists and colleges are still exploring how to address students who got a vaccine that's not WHO-approved. Students who can't make it to the U.S. will be offered the option to defer to spring 2022 at more than three-quarters of responding schools, IIE's survey found. And just under half said they would allow international students to enroll online until they could come to campus.
As if new variants of Covid-19 were not concerning enough, combinations of the virus with other diseases are now resulting in formerly unheard-of conditions. First, it was Delmicron, a condition resulting from simultaneous infection with both the Omicron and Delta variants of Covid-19. And now, a new disease called "Florona" seems to have surfaced in Israel. What is Florona? Florona is apparently the name given to the condition in which a patient contracts both Covid-19 and influenza. According to a tweet by Arab News, Israel reported its first "Florona" case on Thursday, a day before the country began administering fourth vaccine shots to vulnerable populations. According to reports, Florona occurs after "double infection" or "co-infection" with both SARS-CoV-2 and the flu virus. The case was detected in an unvaccinated pregnant woman who had been admitted to a medical centre. Unlike some of the claims made by fake news posts on social media, Florona is not a new Covid-19 variant. The last Covid-19 variant to be detected was Omicron, and no further variants have since been identified by the World Health Organisation. The WHO does, however, confirm that co-infection with both Covid-19 (any variant) and the flu virus is indeed possible, and adds that the best way to avoid such a condition is to get vaccinated against both Covid-19 and influenza. What are the symptoms of Florona? While both Covid-19 and influenza affect the respiratory system, there are some differences in the way the flu and Covid-19 impact health and manifest themselves. Symptoms of "Florona" include high fever, persistent chest pain or constriction, shortness of breath and loss of appetite. It can also lead to states of confusion and anxiety. According to the WHO, mild symptoms of a double Covid-19 and flu infection can be treated at home without requiring any hospitalisation. 
In severe cases, symptoms of Florona may also include pneumonia and myocarditis (inflammation of the heart muscle). Is Florona a matter of concern? Both Covid-19 and the flu are caused by viruses that affect the respiratory tract and can cause severe illness and even death. While the symptoms and means of transmission for both viruses are similar, they have different treatments and different vaccines. Double infection with both viruses can cause complications in the body and stress the immune system. While "Florona" is not a new variant, its occurrence might be indicative of a weakened immune system under attack from two virus infections, Dr Nahla Abdel Wahab, a doctor at Cairo University Hospital, was quoted as saying by Israeli media following the emergence of the disease. Amid the ongoing winter months, otherwise known as "flu season", seasonal influenza outbreaks are not uncommon in several countries. With Covid-19 cases also peaking across the world following the emergence of Omicron, fears of "Florona" cases spiking may not be unfounded. How to prevent Florona? Following social distancing protocols, wearing masks and getting vaccinated against both Covid-19 and influenza is the only way to prevent Florona. Flu vaccines have been used since 1949, especially among vulnerable groups such as senior citizens. Meanwhile, a majority of countries have rolled out Covid-19 vaccination plans for their adult populations.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority and benefit under 35 U.S.C. 119 to U.S. application Ser. No. 62/053,103, filed on Sep. 19, 2014, which is incorporated herein by reference in its entirety.

BACKGROUND

The SCTE (Society of Cable Telecommunications Engineers) has defined an addressable advertising system architecture that utilizes different information systems to assist in the selection of digital ad content for insertion into or presentation with program content. The advertising decision system (SCTE 130-3) identifies and coordinates the insertion of ads into media systems, which may include linear TV advertising and video on demand, among other possibilities. Ads and program content can be classified and described in the content information system (SCTE 130-4). An ad decision system component (ADS) may register with a CIS to search for content and receive alerts when specific types of content are available. A placement opportunity information service (POIS) (SCTE 130-5) may be operated to identify when advertising inventory is available for use. The subscriber information service (SIS) (SCTE 130-6) may be operated to obtain information related to subscriber activities (preferences or viewing habits).

Glossary

“plugable” in this context refers to a logic block that is part of a family of logic blocks, each having a common input interface (possibly with specific extensions that vary across the family) and a common output interface (again possibly with extensions). The internal logic of plugable logic may vary from block to block, performing different processing/transformations of the inputs to the outputs. The following disclosure may make reference to these terms:

Ad decision service (ADS) — a system component that determines which ads are selected to be combined with other program content and how they will be combined. 
Decisions made by the ADS may be specific (date and time) or they may be a set of conditions and parameters (such as geographic zones and subscriber profile information). ADS is part of the SCTE 130 advertising specification series.

Ad Management Service — ADM — a system component which coordinates the insertion of advertising media into program streams. ADS units register with one or more ad management (ADM) devices, which control the actual splicing of ads with program streams.

Content Information Service — CIS — a system component that identifies and manages descriptive data (metadata) for programs and advertising messages. The CIS system allows for the searching, discovery, and alerting of the availability of media items and their classifications.

Placement Opportunity Information Service — POIS — a system component that identifies and provides descriptions of placement opportunities for media (such as the availability to insert ads). The POIS may contain requirements and attributes that can include which platforms may be used, ownership rights, and policies that are used to coordinate the placement of media. Placement opportunities are content specific, so they can vary based on the type of network, geographic location, or other associated content attributes.

Subscriber Information Service — SIS — a system component that can store, process, and access subscriber information that can assist in the selection of ads. SIS enables behavioral targeting of ads. Because SIS captures personal information of viewers, SIS systems may be required to control access and limit identification information to ensure viewer privacy. 
This disclosure may reference these abbreviations:

ADM — Ad management service
ADS — Ad decision service
CIS — Content information service
CRM — Customer relationship management
DMP — Data management platform
EPG — Electronic program guide
PAID — Provider/asset ID
POIS — Placement opportunity service
PSN — Placement status notification
ODCR — On demand commercial rating
SCTE — Society of Cable Telecommunication Engineers
SIS — Subscriber information service
VOD — Video on demand

Disclosed herein are embodiments of a system and process for the extraction of measurement data for 3rd parties (e.g. Nielsen) from an [SCTE 130-3] based messaging system. Said system provides ad routing capabilities, and data is extracted from [SCTE 130-3] messages to provide keys that enable access to subscriber, program and ad asset metadata as well as decision, impression and engagement metrics. The specific fields to be extracted may be configurable. Input to the measurement system may be file (batch) based, and may include:

- Program asset metadata
- Ad asset metadata
- Subscriber metadata
- [SCTE 130-3] PlacementRequest messages, which define keys to look up and access program asset and subscriber metadata and to connect PlacementRequest and PlacementResponse messages together.
- [SCTE 130-3] PlacementResponse messages, which define keys to look up ad asset metadata and the key to connect PlacementResponse and PlacementStatusNotification messages together.
- [SCTE 130-3] PlacementStatusNotification messages, which provide an impression metric. 
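As an illustration of configurable key extraction from such messages, here is a minimal Python sketch. The sample message, element and attribute names (`PlacementRequest`, `TerminalAddress`, `AssetRef`, `identity`, etc.) and the `FIELDS` map are hypothetical stand-ins, not the normative SCTE 130-3 schema; the point is only the field-to-key mapping described above.

```python
import xml.etree.ElementTree as ET

# Illustrative message only; not the normative SCTE 130-3 schema.
SAMPLE = """
<PlacementRequest identity="req-42">
  <Client><TerminalAddress>sub-0017</TerminalAddress></Client>
  <Content><AssetRef providerID="vod.example.com" assetID="MOVIE0001"/></Content>
</PlacementRequest>
"""

# Configurable field map: output key -> (path relative to the root, optional attribute).
# A "." path addresses the root element itself.
FIELDS = {
    "message_id": (".", "identity"),
    "subscriber_key": ("./Client/TerminalAddress", None),
    "asset_key": ("./Content/AssetRef", "assetID"),
}

def extract_keys(xml_text, fields=FIELDS):
    """Extract the configured lookup keys from one message."""
    root = ET.fromstring(xml_text)
    out = {}
    for name, (path, attr) in fields.items():
        node = root if path == "." else root.find(path)
        if node is None:
            continue  # field absent in this message; skip it
        out[name] = node.get(attr) if attr else (node.text or "").strip()
    return out
```

Keeping `FIELDS` as data rather than code is what makes "the specific fields to be extracted" configurable: a deployment can swap in a different map without touching the parser.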
The measurement system may utilize the following standardized technologies, to which the system may implement and/or conform: [SCTE 130-3] ANSI/SCTE 130-3 2010 — Digital Program Insertion — Advertising Systems Interfaces Part 3 Ad Management Service (ADM) Interface; [SCTE 130-4] ANSI/SCTE 130-4 2011 — Digital Program Insertion — Advertising Systems Interfaces Part 4 Content Information Service (CIS); [SCTE 130-5] ANSI/SCTE 130-5 2010 — Digital Program Insertion — Advertising Systems Interfaces Part 5 — Placement Opportunity Information Service; [SCTE 130-6] ANSI/SCTE 130-6 2010 — Digital Program Insertion — Advertising Systems Interfaces Part 6 — Subscriber Information Service (SIS).

FIG. 1 illustrates an aspect of a digital advertisement measurement system 100 in accordance with one embodiment. The system 100 comprises an ADM 102, POIS 104, SIS 106, audience profile manager 108, ad server 110, CIS 112, measurement system 114, central controller 116, and ad router 118. The POIS 104, CIS 112, and ad router 118 may be components of central runtime logic 120.

The ADM 102 is operated to provide a normalized [SCTE 130-3] interface between the ad decision components of the central controller 116 and the rest of the service provider infrastructure. The ADM 102 originates ad requests and provides normalized measurement data. A service provider source system (not shown in FIG. 1) provides the digital advertisement measurement system 100 with asset metadata.

The ad router 118 is operated to service ad requests. Typically, requests are received from a component in the downstream play-out infrastructure and are sparsely populated. The ad router 118 then invokes various request decoration services such as the SIS 106, CIS 112 (and POIS 104 in non-ODCR use cases) before invoking the ad server 110. The ad router 118 also receives return path information and is responsible for disseminating it to the appropriate original decision maker, and to the measurement system 114.

The ad server 110 component is operated to determine which available ad should be placed for a particular placement opportunity.

The central controller 116 component operates to provide ingress/egress for metadata from the service provider system to the digital advertisement measurement system 100. It facilitates communication between the management and runtime components as well as external systems.

The CIS 112 component is operated to provide content information service request decoration data based on content context utilizing [SCTE 130-4]. This component also provides ad breakpoint information for both the viewed and current episodes of a streamed content asset.

The measurement system 114 is operated to transform metadata from the central controller 116 as well as log data from the ad router 118 and generate a set of measurement data. The measurement system 114 may process data in batch (as files) and not operate as a real-time service.

The POIS 104 is operated to provide ad placement opportunity and ownership information via [SCTE 130-5] calls made by the ad router 118.

The audience profile manager 108 is operated to ingest data provided by the subscriber data source(s). The format of the subscriber data may be proprietary (i.e. non [SCTE 130-6]) and therefore may require a transforming adapter to normalize the provided data. The audience profile manager 108 may be implemented as a browser based tool that allows users to understand the reach of various audience qualifiers, create aggregated audience profiles from discrete audience qualifiers and designate them for use later on as user preference items. The audience profile manager 108 may be operated to research and classify subscribers across all of the addressable platforms (e.g. QAM VOD, IP ABR, EPG, etc.). 
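The ad router's request-decoration flow described above can be sketched as follows. Everything here is a toy stand-in: plain dictionaries play the SIS/CIS/POIS roles and a trivial rule plays the ad server's decision; none of it reflects the actual service interfaces.

```python
def decorate_and_route(request, sis, cis, pois, ad_server):
    """Enrich a sparsely populated ad request, then ask the ad server to decide.

    sis/cis/pois are stand-ins for the decoration services; ad_server is a
    callable taking the decorated request and returning a decision.
    """
    req = dict(request)  # do not mutate the caller's request
    req["audience"] = sis.get(req["subscriber_key"], {})    # SIS decoration
    req["content"] = cis.get(req["asset_key"], {})          # CIS decoration
    req["opportunities"] = pois.get(req["asset_key"], [])   # POIS decoration
    return ad_server(req)

# Toy stand-ins for the decoration services.
sis = {"sub-0017": {"segment": "sports"}}
cis = {"MOVIE0001": {"genre": "action"}}
pois = {"MOVIE0001": [{"break": "pre-roll", "duration": 30}]}

def ad_server(req):
    # Trivial decision rule, for illustration only.
    ad = "AD-SPORTS" if req["audience"].get("segment") == "sports" else "AD-GENERIC"
    return {"ad_id": ad, "placements": req["opportunities"]}
```

The shape matters more than the details: the router owns the sequencing (decorate, then decide) while each service stays a replaceable lookup, which is what lets the POIS step drop out in the non-ODCR case.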
The SIS 106 is operated to receive data from the audience profile manager 108 and provide the data to a user preference distribution component via [SCTE 130-6]. The same SIS 106 may be used across all of the addressable platforms and may provide low latency, high throughput device based lookups as well as longer running asynchronous queries and query registration/notifications.

A measurement partner (not illustrated in FIG. 1) may be implemented by a service provider system that provides subscriber data to the audience profile manager 108. The data may be sourced directly from the service provider (e.g. CRM and billing systems) via a data management platform (DMP) (e.g. Experian, Claritas, etc.) or from an advertiser/agency via a 3rd party blind match process facilitated by a DMP.

Measurements from the measurement system 114 may be applied directly or via the measurement partner to operate an ad campaign manager 122, which in turn will affect the operation of the ad server 110, ADM 102, ad router 118, and other components of the digital advertisement measurement system 100, thus completing a control feedback system.

FIG. 2 illustrates an aspect of a provisioning process 200 for digital advertisement engagement measurement in accordance with one embodiment. Asset metadata is distributed to BlackArrow Central CIS from BlackArrow Central (Management) at block 202. Placement opportunity metadata is distributed to BlackArrow Central POIS from BlackArrow Central (Management) at block 204. Asset metadata is distributed to the BlackArrow Measurement component from BlackArrow Central (Management) at block 206. Audience metadata is distributed to BlackArrow Audience SIS from BlackArrow Audience Profile Manager at block 208. Audience metadata is distributed to the BlackArrow Measurement component from BlackArrow Audience Profile Manager at block 210.

FIG. 
3 300 illustrates an aspect of a process to measure audience engagement with digital advertising in accordance with one embodiment. 300 The process to measure audience engagement with digital advertising may take place in four phases. (I) Distribute program, ad and subscriber metadata 1. A service provider (e.g., a cable television network operator) provides the system with program asset metadata (e.g., in the form of ADI 1.1). 2. The service provider provides the system with ad asset metadata (e.g., in the form of ADS 1.1). 3. The service provider provides subscriber metadata (e.g., in the form of a CSV). The metadata provided by service provider is ingested and stored by the measurement system. (II) Play VOD Title 1. A subscriber operates an EPG, companion or mobile device to discover VOD title (for example). 2. The subscriber selects the VOD title for playback. 3. This selection triggers series of events to initiate an ad request via the ADM. (III) Generate Decision Event Data 1. The ADM signals the Ad Router with an [SCTE 130-3] PlacementRequest at VOD session start. 2. The Ad Router records the [SCTE 130-3] PlacementRequest in message file. 3. The Ad Router performs request decoration and ad routing function. 4. The Ad Router creates a [SCTE 130-3] PlacementResponse message from unifying ad responses from service provider and content provider ad servers. 5. The Ad Router records the [SCTE 130-3] PlacementResponse in a message file. (IV) Generate Impression Event Data 1. The ADM calls the Ad Router with an [SCTE 130-3] PlacementStatusNotification message. 2. The Ad Router records the [SCTE 130-3] PlacementStatusNotification in the message file. 302 304 306 308 310 312 314 316 318 An [SCTE 130-3] PlacementRequest message sent to BlackArrow Central Ad Router from ADM at block . An [SCTE 130-4] content metadata request from BlackArrow Central CIS by BlackArrow Central Ad Router at block . 
An [SCTE 130-6] subscriber metadata request from BlackArrow Central SIS by BlackArrow Central Ad Router at block . An [SCTE 130-5] placement opportunity metadata request from BlackArrow Central POIS by BlackArrow Central Ad Router at block . An ad request made to ad server at block . The ad server could be the BlackArrow Campaign ADS or a 3rd party ADS. An [SCTE 130-3] PlacementReponse is then returned from the BlackArrow Central Ad Router to the ADM. The ADM makes an [SCTE 130-3] PlacementStatusNotification with measurement data to BlackArrow Central Ad Router at block . The BlackArrow Ad Router provides measurement data to ad server at block . Log data is distributed to BlackArrow Central (Management) from BlackArrow Central (Runtime) for standard reporting and analytics at block . Log data is distributed to BlackArrow Measurement from BlackArrow Central (Runtime) at block . Measurement data is made available to the Measurement Partner from the BlackArrow Measurement component. FIG. 4 400 illustrates an aspect of measurement system configuration logic in accordance with one embodiment. 114 402 118 The input controls to the measurement system component include a batch measurement control structure generated by the ad router . 402 114 404 406 408 102 410 114 The batch measurement control structure may comprise three types of sub-control structures to operate the measurement system : PlacementRequest sub-control , PlacementResponse sub-control , and PlacementStatusNotification sub-control . Due to the different ways in which control values may be presented within each of these controls by different types of the ADM , the implementation of parsing these control structures may be performed by plugable adaptor logic comprising a standard interface to the measurement system . 
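The message-file recording in phases (III) and (IV) above can be sketched as follows. This is a hypothetical illustration only: the message file format, the field names, and the JSON encoding are assumptions, not the [SCTE 130-3] schema.

```python
import json
import time
import uuid

# Hypothetical sketch: the Ad Router appends each [SCTE 130-3] message
# exchanged with the ADM to an append-only message file for later batch
# processing by the measurement system. File format is illustrative.

MESSAGE_FILE = "ad_router_messages.log"

def record_message(msg_type, payload, path=MESSAGE_FILE):
    """Append one message event to the message file."""
    entry = {"ts": time.time(), "type": msg_type, "payload": payload}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def handle_session_start(device_id, program_asset_id):
    """Phase III: record PlacementRequest, then record PlacementResponse."""
    request_id = str(uuid.uuid4())
    record_message("PlacementRequest",
                   {"requestId": request_id,
                    "deviceId": device_id,
                    "programAssetId": program_asset_id})
    # Request decoration and ad routing would happen here; this unified
    # response stands in for answers from multiple ad servers.
    response = {"requestId": request_id,
                "responseId": str(uuid.uuid4()),
                "creativeAssetIds": ["ad-001", "ad-002"]}
    record_message("PlacementResponse", response)
    return response

def handle_status_notification(request_id, creative_asset_id):
    """Phase IV: record the impression-bearing PlacementStatusNotification."""
    record_message("PlacementStatusNotification",
                   {"requestId": request_id,
                    "creativeAssetId": creative_asset_id,
                    "impressionTime": time.time()})
```

The append-only file mirrors the batch (non-real-time) processing model described for the measurement system: the router only records; parsing and correlation happen later.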
The measurement system component 114 retrieves the batch measurement control structure 402 from each of the central runtime logic instances 120, which are then read and parsed by the plugable adaptor logic 410. When the plugable adaptor logic 410 detects one of the PlacementRequest sub-controls 404, it activates logic specific to the particular variant of the PlacementRequest sub-control 404 to extract the following control values, which are then communicated to and impressed within the request store 412 (e.g., a nonvolatile storage element comprising database management logic):

Time of placement request
Placement request ID
Device ID
Program asset ID

When the plugable adaptor logic 410 detects one of the PlacementResponse sub-controls 406, it activates logic specific to the variant of the PlacementResponse sub-control 406 to extract the following control values, which are then communicated to and impressed within the decision store 414:

Time of placement request (from request store 412)
Placement request ID (from request store 412)
Device ID (from request store 412)
Program asset ID (from request store 412)
Placement response ID
Creative asset ID

There may be a sub-control in the decision store 414 for each individual ad asset that has a corresponding decision. The control values stored in the decision store 414 are looked up by using a reference value of or from the placement request ID.
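The variant-specific parsing performed by the plugable adaptor logic might be organized as a registry of parsers behind one standard interface, as in this sketch. The ADM variant names and record layouts below are invented for illustration; only the pattern (per-variant parsers, one interface to the measurement system) comes from the description above.

```python
# Sketch of plugable adaptor logic: each ADM variant registers a parser for
# its batch measurement control structure; the measurement system calls one
# standard entry point regardless of variant.

ADAPTORS = {}

def adaptor(adm_variant):
    """Register a parser function for one ADM variant's control structures."""
    def register(fn):
        ADAPTORS[adm_variant] = fn
        return fn
    return register

@adaptor("adm_v1")
def parse_v1(record):
    # Hypothetical variant 1: control values arrive as a flat dict.
    return {"kind": record["kind"], "values": dict(record["fields"])}

@adaptor("adm_v2")
def parse_v2(record):
    # Hypothetical variant 2: control values arrive as (name, value) pairs.
    return {"kind": record["type"], "values": dict(record["pairs"])}

def parse_batch(adm_variant, records):
    """Standard interface: normalize a batch regardless of ADM variant."""
    parse = ADAPTORS[adm_variant]
    return [parse(r) for r in records]
```

The registry makes adding a new ADM variant a matter of registering one more parser, without touching the measurement system itself.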
When the plugable adaptor logic 410 detects one of the PlacementStatusNotification sub-controls 408, it activates logic to process the specific variant of the PlacementStatusNotification sub-control 408 to extract the following values, which are then stored in the impression store 416:

Time of placement request (from request store 412)
Placement request ID (from request store 412)
Device ID (from request store 412)
Program asset ID (from request store 412)
Placement response ID (from decision store 414)
Creative asset ID (from decision store 414)
Time of impression
Placement status notification ID

There may be a sub-control in the impression store 416 for each individual ad asset that has a corresponding measurement. The control values stored in the request store 412 are looked up by using a reference value which may equal the placement request ID.

The measurement system 114 operates based on the controls configured into the request store 412, and/or decision store 414, and/or impression store 416 to generate measurements. The measurement system 114 may access each of the request store 412, decision store 414, and impression store 416, depending on the measurements being generated and their application. In one embodiment only the impression store 416 influences the operation of the measurement system 114 to produce applied measurements.

In one embodiment a single measurement apparatus 418 comprises the plugable adaptor logic 410, measurement system 114, request store 412, decision store 414, and impression store 416. The measurement system 114 may apply the program asset ID, creative asset ID and device ID to look up program, creative and subscriber controls, respectively. The measurement system 114 may access these controls directly from the SIS 106 and CIS 112 in order to 'decorate' the measurements with asset and subscriber data for application to modify or otherwise control the ad campaign manager 122.
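One way the request, decision, and impression stores could be joined on the placement request ID to produce decorated measurement rows is sketched below. The in-memory store shapes and field names are assumptions for illustration, not the patent's data model.

```python
# Sketch: join the three stores on placement request ID to produce one
# measurement row per impression, as the measurement system might do.

request_store = {}     # placement_request_id -> request control values
decision_store = {}    # placement_request_id -> list of decision control values
impression_store = []  # impression control values

def impress_request(req):
    request_store[req["placement_request_id"]] = req

def impress_decision(dec):
    decision_store.setdefault(dec["placement_request_id"], []).append(dec)

def impress_impression(imp):
    impression_store.append(imp)

def generate_measurements():
    """One measurement row per impression, decorated from the other stores."""
    rows = []
    for imp in impression_store:
        rid = imp["placement_request_id"]
        req = request_store.get(rid, {})
        decisions = decision_store.get(rid, [])
        # Match the per-ad-asset decision sub-control for this impression.
        dec = next((d for d in decisions
                    if d["creative_asset_id"] == imp["creative_asset_id"]), {})
        rows.append({
            "device_id": req.get("device_id"),
            "program_asset_id": req.get("program_asset_id"),
            "placement_response_id": dec.get("placement_response_id"),
            "creative_asset_id": imp["creative_asset_id"],
            "time_of_impression": imp["time_of_impression"],
        })
    return rows
```

The inner lookup reflects the description above: there may be one decision sub-control per ad asset, and impressions reference back through the placement request ID.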
In some embodiments, the measurement system 114 may cache subscriber and asset/program metadata provisioned to the CIS 112 and SIS 106 and may not access these components at runtime for the asset and subscriber data to include with the measurements. In other embodiments, the measurement system 114 may obtain subscriber and asset metadata from the central runtime logic 120, which maintains this data internally after provisioning to the CIS 112 and SIS 106.

Control values from the request store 412, decision store 414, and impression store 416 may have influence on the measurement system 114 for a configurable time period, e.g., no less than 72 hours, after which configurable time period the controls may be deactivated or de-configured. De-activator/de-configure logic 420 may operate on the request store 412, decision store 414, and/or impression store 416 in response to a configurable interval timer 422.

FIG. 5 illustrates several components of an exemplary apparatus 500 in accordance with one embodiment. This general apparatus 500 may be adapted with logic to function as one or more of the logic components described herein.

In various embodiments, apparatus 500 may include a desktop PC, server, workstation, mobile phone, laptop, tablet, set-top box, appliance, or other computing device that is capable of performing operations such as those described herein. In some embodiments, apparatus 500 may include many more components than those shown in FIG. 5. However, it is not necessary that all of these generally conventional components be shown in order to disclose an illustrative embodiment.

Collectively, the various tangible components or a subset of the tangible components may be referred to herein as “logic” configured or adapted in a particular way, for example as logic configured or adapted with particular software or firmware.
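The de-activator/de-configure logic described above, which removes control values from a store after a configurable retention period (e.g., 72 hours) on each interval-timer tick, might look like this sketch; the store layout is an assumption.

```python
import time

# Sketch of de-activator/de-configure logic: on each configurable timer
# tick, purge control values older than the retention period from a store.
# The 72-hour default and the 'impressed_at' field are illustrative.

RETENTION_SECONDS = 72 * 3600

def deactivate_expired(store, now=None, retention=RETENTION_SECONDS):
    """Remove entries whose 'impressed_at' timestamp has aged out.

    Returns the number of entries de-configured."""
    now = time.time() if now is None else now
    expired = [key for key, value in store.items()
               if now - value["impressed_at"] > retention]
    for key in expired:
        del store[key]
    return len(expired)
```

In a deployment this would be driven by the interval timer, with the retention period exposed as configuration rather than a constant.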
In various embodiments, apparatus 500 may comprise one or more physical and/or logical devices that collectively provide the functionalities described herein. In some embodiments, apparatus 500 may comprise one or more replicated and/or distributed physical or logical devices.

In some embodiments, apparatus 500 may comprise one or more computing resources provisioned from a “cloud computing” provider, for example, Amazon Elastic Compute Cloud (“Amazon EC2”), provided by Amazon.com, Inc. of Seattle, Wash.; Sun Cloud Compute Utility, provided by Sun Microsystems, Inc. of Santa Clara, Calif.; Windows Azure, provided by Microsoft Corporation of Redmond, Wash., and the like.

Apparatus 500 includes a bus 502 interconnecting several components including a network interface 508, a display 506, a central processing unit 510, and a memory 504.

Memory 504 generally comprises a random access memory (“RAM”) and a permanent non-transitory mass storage device, such as a hard disk drive or solid-state drive. Memory 504 stores an operating system 512.

These and other software components may be loaded into memory 504 of apparatus 500 using a drive mechanism (not shown) associated with a non-transitory computer-readable medium 516, such as a floppy disc, tape, DVD/CD-ROM drive, memory card, or the like.

Memory 504 also includes database 514. In some embodiments, the apparatus may communicate with database 514 via the network interface 508, a storage area network (“SAN”), a high-speed serial bus, and/or other suitable communication technology.

In some embodiments, database 514 may comprise one or more storage resources provisioned from a “cloud storage” provider, for example, Amazon Simple Storage Service (“Amazon S3”), provided by Amazon.com, Inc. of Seattle, Wash., Google Cloud Storage, provided by Google, Inc. of Mountain View, Calif., and the like.

References to “one embodiment” or “an embodiment” do not necessarily refer to the same embodiment, although they may.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively, unless expressly limited to a single one or multiple ones. Additionally, the words “herein,” “above,” “below” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the claims use the word “or” in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list, unless expressly limited to one or the other. “Logic” refers to machine memory circuits, non-transitory machine readable media, and/or circuitry which by way of its material and/or material-energy configuration comprises control and/or procedural signals, and/or settings and values (such as resistance, impedance, capacitance, inductance, current/voltage ratings, etc.), that may be applied to influence the operation of a device. Magnetic media, electronic circuits, electrical and optical memory (both volatile and nonvolatile), and firmware are examples of logic. Logic specifically excludes pure signals or software per se (however it does not exclude machine memories comprising software and thereby forming configurations of matter). Those skilled in the art will appreciate that logic may be distributed throughout one or more devices, and/or may be comprised of combinations of memory, media, processing circuits and controllers, other circuits, and so on.
Therefore, in the interest of clarity and correctness logic may not always be distinctly illustrated in drawings of devices and systems, although it is inherently present therein. The techniques and procedures described herein may be implemented via logic distributed in one or more computing devices. The particular distribution and choice of logic will vary according to implementation. Those having skill in the art will appreciate that there are various logic implementations by which processes and/or systems described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes are deployed. “Software” refers to logic that may be readily readapted to different purposes (e.g., read/write volatile or nonvolatile memory or media). “Firmware” refers to logic embodied as read-only memories and/or media. “Hardware” refers to logic embodied as analog and/or digital circuits. If an implementer determines that speed and accuracy are paramount, the implementer may opt for a hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a solely software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations may involve optically-oriented hardware, software, and/or firmware.
The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood as notorious by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. Several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution.
Examples of signal bearing media include, but are not limited to, the following: recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, flash drives, SD cards, solid state fixed or removable storage, and computer memory. In a general sense, those skilled in the art will recognize that the various aspects described herein which can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or any combination thereof can be viewed as being composed of various types of “circuitry.” Consequently, as used herein “circuitry” includes, but is not limited to, electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), circuitry forming a memory device (e.g., forms of random access memory), and/or circuitry forming a communications device (e.g., a modem, communications switch, or optical-electrical equipment). Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use standard engineering practices to integrate such described devices and/or processes into larger systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a network processing system via a reasonable amount of experimentation.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.

FIG. 1 illustrates an aspect of a digital advertisement measurement system 100 in accordance with one embodiment.
FIG. 2 illustrates an aspect of a provisioning process 200 for digital advertisement engagement measurement in accordance with one embodiment.
FIG. 3 illustrates an aspect of a process 300 to measure audience engagement with digital advertising in accordance with one embodiment.
FIG. 4 illustrates an aspect of measurement system configuration logic 400 in accordance with one embodiment.
FIG. 5 illustrates an apparatus 500 in accordance with one embodiment.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority to and is a continuation of U.S. application Ser. No. 14/214,547, filed on Mar. 14, 2014, entitled “GEO, SEGMENT, UNIQUES DISTRIBUTED COMPUTING SYSTEM”, which claimed priority to U.S. Provisional Patent Application No. 61/801,712, filed on Mar. 15, 2013, both of which are incorporated herein.

TECHNICAL FIELD

The present document generally relates to digital video advertisement insertion.

BACKGROUND

Many companies seek to attract customers by promoting their products or services as widely as possible. Online video advertising is a form of promotion that uses the Internet and World Wide Web for delivering video advertisements to attract customers. Online advertising is often facilitated through companies called online advertising networks that connect advertisers to web sites that want to sell advertising space. One function of an advertising network is aggregation of advertisement space supply from publishers and matching it with advertiser demand.

Advertisement exchanges are technology platforms used by online advertising networks for buying and selling online advertisement impressions. Advertisement exchanges can be useful to both buyers (advertisers and agencies) and sellers (online publishers) because of the efficiencies they provide. Advertisement exchanges are, however, often limited by the types of advertisements they can buy and sell, their inventory size, and their ability to target specific viewers (e.g., potential customers). As the number of users accessing the Internet using video-playback capable wireless devices such as smartphones and tablet devices grows, improvements to online video advertising are useful.

SUMMARY
The disclosed techniques provide for calculating operational parameters of a video advertisement delivery system using a distributed computing system. Some example operational parameters include geo (e.g., information about geographic characteristics of consumers and advertisements delivered to the consumers), segments (e.g., consumer profiles) and unique impressions, i.e., video ad deliveries that can be counted as a single billing instance.

In one example aspect, methods and systems are disclosed for computing operational parameters of a video advertisement delivery system using a distributed computing cloud, including transferring a plurality of data files from a plurality of geographically distributed advertisement servers to a first storage resource in the distributed computing cloud, providing a script-based program to the distributed computing cloud, executing, using resources from the distributed computing cloud, the script-based program to perform analysis of the plurality of data files, and storing results of the analysis on a second storage resource, wherein the results include at least one operational parameter of the video advertisement delivery system.

In certain embodiments, a machine-readable medium comprising machine-readable instructions for causing a processor to execute a method as described above is discussed.

DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth to provide a full understanding of the present disclosure. It will be obvious, however, to one ordinarily skilled in the art that the embodiments of the present disclosure may be practiced without some of these specific details. In other instances, well-known structures and techniques have not been shown in detail so as not to obscure the disclosure.
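As a toy stand-in for the script-based analysis step summarized above, the sketch below aggregates delivery log files collected from distributed ad servers into two example operational parameters: impressions per geo and unique impressions. The CSV layout and field names are invented for illustration, not the system's actual log format.

```python
import collections
import csv
import io

# Sketch of the analysis stage: given the contents of per-server delivery
# log files (already transferred to a shared storage resource), compute
# example operational parameters. Field names 'geo' and 'viewer_id' are
# hypothetical.

def analyze(log_files):
    """Aggregate a list of CSV log-file contents into operational parameters."""
    per_geo = collections.Counter()
    seen_viewers = set()
    for contents in log_files:  # each item: one server's CSV log contents
        for row in csv.DictReader(io.StringIO(contents)):
            per_geo[row["geo"]] += 1          # geo parameter
            seen_viewers.add(row["viewer_id"])  # uniques parameter
    return {"impressions_by_geo": dict(per_geo),
            "unique_impressions": len(seen_viewers)}
```

In the disclosed system this reduction would run as a script on cloud resources over many files at once; the single-process version here only illustrates the shape of the computation.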
Examples of Operational Complexity

In an increasingly connected society today, a large number of users, which may be in the millions at times, may be simultaneously using the Internet to access or browse certain web sites and load web pages into their user devices such as personal computers, laptops, mobile phones, tablets or other communication-enabled devices. Video advertisement tends to be an integral part of such user web activities and, accordingly, a video advertisement delivery system may have to process a large number of advertisement insertion opportunities from around the world triggered by user web traffic. To provide effective video ads, such a video advertisement delivery system needs to be configured to process video ads with quick response time, e.g., less than 200 milliseconds in some cases, to the consumers. Furthermore, due to the voluminous amount of data generated related to advertisement delivery, billing may have to be streamlined by breaking it into smaller portions of time, e.g., once every 15 minutes or once every hour.

Examples of Leveraging the Power of Distributed Computing Cloud

The techniques disclosed in this document facilitate the operation of a distributed video advertisement delivery system that can be scaled up and down based on real time demand. Furthermore, the disclosed system leverages the use of computational resources from the cloud, thereby having the ability to use just the right amount of resources for the right stage of data processing. These, and other, aspects are described in greater detail below.

As used herein, the term “1×1” means an “Impression Pixel.” The abbreviation ADM refers to an administrator's dashboard. For media's purposes, this tool may be used to see the fill rate of integrated publishers' ad calls on a daily basis to help optimize delivery of campaigns. As used herein, the term Billable Impressions means impressions that the advertisement exchange platform gets paid for. As used herein, the term Billable Revenue refers to the revenue generated from the campaigns, as tracked in the 3rd party reports.
As used herein, the term Behavioral Targeting (referred to as “BT”) refers to a targeting approach utilizing 3rd party data sets and segmentation to display ads to users who have expressed interest or intent to purchase in certain verticals. Example: in-market for a car, interested in animals/pets, golf enthusiast, etc. As used herein, the term BRX (BrightRoll Ad Exchange) refers to, generally, a technology platform enabling buyers and sellers to access video inventory in a self-service and scalable capacity, where BrightRoll Ad Exchange is an example of such a system developed by BrightRoll. As used herein, the term Buy refers to a user interface for buyers (e.g., advertisers). As used herein, the term Companion (also called “300×250” or “banner”) refers to a banner running adjacent to preroll, usually remaining persistent and clickable after the preroll is completed (size is typically 300×250 pixels). As used herein, the term Cost refers to publisher costs; tracked by media, paid by finance. As used herein, the term CPC (cost per click) refers to a pricing model in which advertisers pay per click, instead of on a standard CPM model. As used herein, the term CPE (cost per engagement) refers to cost per video start. As used herein, the term CPM (cost per thousand imps) refers to cost per (impressions/1,000). A pricing model for online advertising can be based on impressions or views, where the advertiser pays the publisher a predetermined rate for every thousand impressions. As used herein, the term CPV (cost per view) refers to a pricing model based on payment per completed view. As used herein, the term CTR refers to click-through rate, which is a standard metric used to gauge campaign performance. As used herein, the term Discrepancy refers to a difference between two reporting systems' impression counts. As used herein, the term Fill rate refers to the percentage of ad calls an integrated publisher sends that are filled with ads.
For example, a publisher could send 500 calls but we may only have 400 ads to send them; therefore, the fill rate would be 80%. If we had 500 ads to send them, the fill rate would be 100%. As used herein, the term Flight refers to the duration of a campaign or line item of an order, broken down by dates. As used herein, the term Impression pixel refers to a piece of code that tracks impression loads of an ad on a website (also referred to as a 1×1). As used herein, the term InBanner (shortened to IBV) refers to video running in regular display units (typically 300×250 in size). As used herein, the term Integrated Pub refers to a publisher with whom we've established payment terms and completed an integration such that we can serve videos directly into their player. As used herein, regarding the terms Inventory/Remnant Inventory: inventory is the volume of impressions a publisher has available; remnant inventory is all the unsold inventory a publisher has not been able to sell to direct buyers and then offers to networks at a discounted rate. As used herein, the term Margin refers to profit/revenue (in %). As used herein, the term Pacing refers to campaign delivery performance with the dates of flight taken into account: total delivered imps/(current days in flight*(total imps/total days)). As used herein, the term Performance Metrics refers to the metrics on which a campaign is judged (e.g., click-through rate, completion rate, acquisitions, etc.). As used herein, the term Preroll refers to an instream ad unit, running ahead of user-initiated video content. As used herein, the term Search & Keyword retargeting refers to a module that allows advertisers to find relevant users identified in our network through use of third-party vendor data and cookie-ing. As used herein, the term Signed Pub refers to a publisher (e.g., an ad viewer-visited web site) with established fixed payment terms.
As used herein, the term Survey/Study refers to research collected by a 3rd party vendor to establish campaign branding performance. As used herein, the term Start, Middle and End Pixels (Quartile Reporting) refers to pieces of code that track the duration of the video view. End pixels track completed views. Duration data cannot be gathered without implementing these pixels. As used herein, the term Third-Party Reporting refers to external ad-server reporting used by clients to verify proper ad delivery (typically DART or Atlas). As used herein, the term VAST stands for Video Ad Serving Template. As used herein, the term Video Block refers to a product offering which allows advertisers to buy out a majority of our network during a 1-3 day period. Typically priced on a CPV basis. As used herein, the term VPAID stands for Video Player-Ad Interface Definition.

Examples of System Architecture

FIG. 1 depicts a simplified view of an example of a video advertisement system 100, e.g., a video advertisement system in which a video advertisement exchange is used for ad bidding. An ad viewer's device 102 (e.g., a computer, a wireless device or a mobile device) is communicatively coupled (e.g., via the Internet and a wired or wireless connection) with an ad server 104. The ad server 104 provides ad delivery data to an ad data infrastructure module 106, described further in detail below. The module 106 can make ad metadata available to an administrator via an administrator's console 108, which allows an ad administrator to add/change delivery preferences of their advertising campaigns. The administrator's console 108 is coupled to be in communication with an ad metadata processing engine (trafficker) 110. The trafficker 110 compiles and makes ad delivery instructions/configurations available to an ad decisioning system 104.
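The delivery and pricing metrics defined in the glossary above (fill rate, CPM, and pacing) follow directly from the stated formulas; a small sketch:

```python
# Direct implementations of the glossary formulas above. Function names are
# illustrative; the formulas themselves come from the definitions.

def fill_rate(ads_served, ad_calls):
    """Percentage of a publisher's ad calls that were filled with ads."""
    return 100.0 * ads_served / ad_calls

def cpm_cost(impressions, cpm):
    """Cost under a CPM model: a predetermined rate per 1,000 impressions."""
    return (impressions / 1000.0) * cpm

def pacing(delivered_imps, days_in_flight, total_imps, total_days):
    """Pacing: total delivered imps / (current days in flight * (total imps / total days))."""
    return delivered_imps / (days_in_flight * (total_imps / total_days))
```

With the glossary's own example (500 calls, 400 ads available), `fill_rate(400, 500)` yields 80%; a pacing value of 1.0 indicates a campaign delivering exactly on schedule.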
The ad server 104 may perform functions such as handling incoming ad requests from multiple ad viewer devices 102, and respond with an ad or a “no ad” placement. The ad server 104 may operate on a time budget, e.g., 50 to 150 msec., within which it must respond to an ad request. The ad server 104 may provide ad data to the viewer device 102 using the VAST format. The decision about which advertisement is to be sent may be based on various factors and real time data such as publisher placement, uniform resource locator (URL), a geographic location of the viewer device, time of day, demographic segment to which the viewer belongs, and so on.

In some implementations, the ad server infrastructure 104 may include an event capture module that may capture defined events during the time a video advertisement is played back on the viewer device (e.g., video start, a mid-time, a specific video frame, the last frame, any other clicks made by the viewer while viewing the video, etc.). The ad server 104 may also perform a real time bidding auction with third-party ad servers for the video advertisement.

The data infrastructure 106 may gather log data from ad servers and logging servers, as further described below. A functional module in the data infrastructure 106 may correlate impressions with bids to generate billable data. Another module within the data infrastructure 106 may calculate financial data. Yet another module within the data infrastructure may provide data to operators and other users of the system (e.g., bidders, publishers, ad agencies, etc.) and other programmatic interfaces for monitoring and control of the advertisement system 100. Another functional module in the data infrastructure may audit data, as further described below. The ad data infrastructure 106 may also provide results of delivery data computed to the trafficker 110.

The administrator's console 108 may include a plurality of user interfaces (UIs).
For example, the administrator's console 108 may enable an operator to control tasks such as collection of information, e.g., advertisements, targeting data, publisher placements (ADM), etc. Another UI that may be included in the administrator's console 108 is a real-time-bidding (RTB) console UI that allows third party buyers to interact with the real time bidding process.

In some implementations, the administrator's console 108 may include a UI that provides information to various users of the system, including, e.g., a media team for monitoring brand safety based on the video to be displayed to the viewer, and for reviewing creatives (e.g., the look-and-feel of the viewer's screen immediately before, during and immediately after the video advertisement is displayed) that will be seen by the viewers.

Another UI may be provided in, e.g., the administrator's console 108 for a research team to analyze audience data and determine whether targeting guarantees are met or not, etc.

Another UI that may be provided in the administrator's console 108 is a UI with views to collected data of advertisement requests and deliveries to entities such as advertisers, publishers and third party buyers.

Yet another UI in the administrator's console 108 may be a UI that allows viewing and editing of configuration data such as placement tags, segment tags, host information, definition of cookies that are stored on the viewer device based on these tags, and so on.

The trafficker 110 may compile data from various databases in the ad data infrastructure and controls site targeting (e.g., which region to focus an ad campaign on), pacing (e.g., how many ads per unit time to be sent out to the users, so that an ad campaign has a desired temporal distribution), pricing (e.g., should bid prices go up or down based on observed real time conditions), etc.
The trafficker may communicate configuration files to the ad servers by first copying the files to the cloud, then issuing a notification that new configuration files have been generated and allowing the ad servers to go pick up the new configuration files. One or more modules may be deployed to ensure that, prior to releasing the new configuration files to the ad servers, the ad delivery data files from a previous time interval are copied out of the ad servers and available for processing. The operation of the ad servers and the data infrastructure mechanism can run in a pipelined manner and periodically in time, while being asynchronous with each other.

FIG. 2 depicts an example video advertisement insertion system 200 in which various functional entities such as ad servers can operate, as described in this document. Resources in the computer network, collectively called the cloud 230, may be used for communication among various functional entities, e.g., ad servers 202, load balancers 204, barrier dispatchers 203, barrier processes 206 and logs 208, as further described in the present document.

Referring to FIG. 2, examples of additional detail of the ad server 104 and data infrastructure 106 of FIG. 1 are illustrated and described. Ad servers 202, which can be substantially similar to the ad server 104, represent one or more machines that are responsible for delivering ads to end users. In operation, ad servers 202 may deliver firing pixels, impressions (these terms are explained elsewhere in this document), etc. over the Internet to end viewers. Each ad server 202 may log events locally. The local logs may generate ad delivery data files. New files may be created every pre-determined time period. For example, in some implementations, ad servers 202 may rotate new log files every 15 minutes.
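The 15-minute rotation described above can be sketched as a mapping from a timestamp to its rotation interval. This is an illustrative helper only; the file naming scheme is an assumption, not from the source.

```python
# Sketch: each ad delivery data file covers one fixed rotation interval.
ROTATION_SECONDS = 15 * 60  # 15-minute rotation, per the text

def interval_bounds(ts, period=ROTATION_SECONDS):
    """Return the (start, end) epoch seconds of the interval containing ts."""
    start = (int(ts) // period) * period
    return start, start + period

def log_file_name(host, ts):
    """Hypothetical naming: one ad delivery data file per host per interval."""
    start, end = interval_bounds(ts)
    return f"{host}.{start}-{end}.log"
```

Aligning every host to the same interval boundaries is what allows the downstream barrier stage to treat "all files for interval N" as one unit of work.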
Another module, called the archiver module (e.g., BrightRoll's BRX archiver), may be a part of the ad server 202 or may be a stand-alone computer, and may periodically copy the completed log files over to a cloud based service 230 such as Amazon's S3 cloud based service. After a file is uploaded, the archiver module may send a message to a dispatcher module, which may be implemented on one or more hardware platforms. At an appropriate time (e.g., upon reaching a time period, or soon after receiving a notification that a new file is available), the dispatcher can download the file from the distributed computing cloud. The file may be enumerated and brought to the module. Individual keys, or line items, may be parsed. In addition, site placement data and segmentation data (e.g., the geographical area associated with the delivery and a demographic profile of the viewer to whom the ads were delivered) may also be parsed. Each line item may contain information that can be processed to generate billing based on which ad was delivered to which viewer and other associated information (e.g., demographic or geographic information of the user, etc.). The module may provide messages resulting from parsing through the files to a next stage (barrier process) through a load balancer. The messages may be metadata files. These messages are waiting to be completed.

Another module, called the checkin module 220, which also has a memory cache (MC), may receive notification that a given machine has sent data to S3. When all machines are checked in, the message in the barrier process that had been waiting to start processing will then be released to the next process.

The system in FIG. 2 includes an api.facts module 222 which provides a list of all the machines that exist in the ad server 202. The checkin module 220 may include a memory cache called memcached.
When an ad server does not have any data to report, the ad server may simply report into the checkin module via a message 228.

As machines 202 check in, a list is updated. When all machines check in, or a time period threshold (GoCode timeout) expires, a key called GoCode is used as follows. The GoCode key is set only if all ad server machines have checked in. If all machines have not checked in, but the GoCode timeout expires, then the messages waiting in the barrier process may go ahead and start the next processing. In practical implementations, there can be hundreds of messages checking whether GoCode is set or not.

In some embodiments, the Brx logs 208 are where the actual computations may be performed. When messages are released for computation based on GoCode, a format called RQ format may be used. In some implementations, all inter-stage communication in FIG. 2 may be implemented using the same data format (RQ format). Each message will contact S3 and get all the files needed for computation. There may be multiple types of files. For example, these files may include only impressions or advertisement delivery data. Depending on the type of processing, different types of files are downloaded. Each type of process uses its own file type.

The Brx logs 208 produce two pieces of information: stats (the actual computations) and manifests. The two pieces of information are sent to the loaders 212, and the loaders 212 can write them into databases. In some implementations, the time interval from when the files are received to when the ad delivery data computational results (e.g., billing data) are produced may be a computing latency interval. It is beneficial for the computing latency interval to be smaller than the rotation period of configurations. In one beneficial aspect, the amount of time gap between when processing of the previous ad delivery data files ends and processing of the next begins may be indicative of the computational resources and the busyness at which the system is running.
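The GoCode check-in barrier described above can be sketched as follows. This is a simplified single-process model with illustrative class and method names; the real system coordinates hosts through memcached and message queues.

```python
# Sketch of the barrier: GoCode is considered set only when every known ad
# server host has checked in; after the GoCode timeout, waiting messages are
# released anyway so one slow host cannot stall the pipeline.
class CheckinBarrier:
    def __init__(self, expected_hosts, timeout_s=300):
        self.expected = set(expected_hosts)
        self.checked_in = set()
        self.timeout_s = timeout_s

    def checkin(self, host):
        self.checked_in.add(host)

    def go_code_set(self):
        # Set only if all ad server machines have checked in.
        return self.checked_in >= self.expected

    def may_release(self, elapsed_s):
        # Waiting messages proceed when GoCode is set or the timeout expires.
        return self.go_code_set() or elapsed_s >= self.timeout_s
```

The timeout value here is an assumption; the source specifies only that a GoCode timeout exists, not its duration.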
A capacity calculation may be made based on how much time difference is available between the start of the next ad delivery data file processing and the end of the previous ad delivery data file processing. The system may be pipelined such that while one part is working on one set of ad delivery data files, another part of the system may be working on another set of ad delivery data files before or after the currently worked ad delivery data file. In some implementations, the use of cloud based computational resources may allow easy allocation and de-allocation of resources (e.g., computing power, storage space, etc.) depending on whether a given subsection of the pipeline described above is currently able to meet its allocated time budget or not.

The load balancers 204 in the above-disclosed system architecture can also advantageously be used to provide isolation among different stages of the pipeline. Depending on run time conditions, different stages may require different types of computational power. Due to the "isolation" offered by the load balancers, the number of computing platforms or resources made available to each stage can be changed independently of the other stages.

For example, in some implementations of FIG. 2, a higher number of ad servers 202 may result in load being spread over multiple ad servers 202, thereby reducing the resource requirement of each individual ad server 202. However, due to the increase in the number of ad servers 202, a larger number of messages may have to be processed by the downstream stage.

For example, the resource scaling for the barrier dispatcher 203 may depend on the number of site placements and line items that need to be processed. In some implementations, the BRX logs may, e.g., be sharded to accommodate increasing and decreasing resources on an as-needed basis. Sharding refers to partitioning of a database to introduce some type of efficiency in the computing (e.g., faster results). The sharding can be performed using business rules.
For example, data that directly impacts billing or other revenue generating ability can be sharded into one logical group, while other data can be sharded into another logical group.

In some embodiments, the various functional modules may be implemented on computing resources that are instantiated using cloud-based resources by specifying desired computing attributes. The attributes include, e.g., input/output bandwidth of a machine, cache storage capacity of a machine, computing power (CPU speed) of a machine, etc. For example, a platform that implements MemCache may be instantiated using large memory capacity, whereas a file parsing module may be instantiated using large i/o bandwidth, and another functional module may be instantiated using higher number crunching capacity (e.g., CPU speed).

The load balancers 204 themselves may also be virtual machines (i.e., computing resources in the cloud). The load balancer 204, e.g., could be HAProxy load balancing software.

The auditor 218 validates data integrity. For example, the auditor 218 determines whether or not various data generated by the system is accurate by cross-checking data from different sources. To assist with auditing, the ad servers 202 may include a module called Auditor Agent 232. The auditor 218 may request a list of all ad server hosts. In some implementations, the api.facts module 222 may provide the list. At a given auditing time instant, the auditor 218 may contact the ad server 202 and request a list of files on the disk along with associated metadata. The auditor time instances may have a predetermined amount of delay from the epochs of ad delivery data file rotation. While auditing for a time period occurs after the time period elapses (or has begun), the periodicity of the auditing process needs to be the same as the periodicity of rotating ad delivery data files.
To help time-sync the auditing process with the data parsing process, a "GoAudit" command may be generated every so often, and may include a start time/end time definition of an epoch of auditing.

In some implementations, the auditor 218 may not duplicate all the calculations performed by the BRX log 208, but may simply look for whether or not the BRX log 208 used exactly the same files that the ad servers 202 provided to the auditor 218.

A host manifest 216 may be compiled that includes all the files that each host in an ad server 202 is aware of, along with metadata such as file size and last updated time. That information is gathered. Then the stats dB database is used to receive manifests generated by the BRX log module. These manifests tell which files were used by which BRX log machine to generate its data. The BRX log manifest also has the same metadata as the metadata received from the ad server 202. The two data sets are compared to check if the files in the BRX log match the files received from hosts in the ad server. A determination is made, e.g., by the auditor 218, about files that are present in the host manifest but are not seen in the BRX log manifest, and files that are not present in the host manifest but were included in the BRX log manifest. If certain files from host manifests were not used, then these computations are run through the BRX log one more time. When everything checks out, an entry is made into a database that the checked interval is audited and the data is good. If things do not check out, then data is passed one more time through the BRX log to generate the corresponding BRX log entries.

In some implementations, a direct communication between the auditor 218 and the BRX log 208 may be used to ascertain whether or not the BRX log 208 operation was finished. However, a message might still be being processed at the BRX log 208, so a handshake may not catch this case.

In some implementations, a BRX log reprocessing may be performed.
A task that did not match is queued up for BRX processing one more time for reprocessing (in BRX log validators 210). If a file is missing, then all tasks that would have been computed using that file are queued up for reprocessing.

One reason why the auditor 218 may not be able to match is because files may not be delivered, or may be delivered late, through a cloud computing service. If data is lost and reprocessing is not successful either, manual intervention may be performed to find the cause of the error.

Additional computing resources may be used to cross-check the work performed by the auditor 218. These modules, called BRX log validators 210, may be configured to operate on a portion of the data processed by the BRX log module 208. For example, when changes are made to code running in the system or to business logic, rather than lose revenue in the system due to erroneous computations, it may be beneficial to monitor the accuracy of BRX log computations using the BRX log validator 210. The shadow BRX logs may be manually operated to verify the results of the cross-check with the BRX log outputs. The BRX log validator 210 may be running a new code base, while the BRX log may be running the existing code base. The same entries may be processed by both the new and old code bases and semi-manual verification may be performed to ensure that the results of the two logics match. For example, 1% of data may be used to perform such validation. Discrepancies may be resolved by manual intervention and debugging.

The BRX archiver (not shown in FIG. 2) may, in addition to sending data, also send other data such as access logs, paid logs, etc. to the cloud based service. Data may be revenue impacting (paid data) or non-revenue impacting data (other), e.g., error pixels and segments. Error pixels are events that are generated by the player or server when something goes wrong. Segment pixels are pixels that customers can drop on their page to correlate a viewer with visits to the customer's web site.
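The auditor's manifest cross-check described above reduces to a set comparison between the host manifest and the BRX log manifest, with any mismatch queued for reprocessing. In this sketch, manifest entries are modeled as (file, size) tuples, which is an assumption about the metadata shape.

```python
# Sketch of the audit: files reported by ad server hosts are compared against
# files the BRX log actually used; discrepancies in either direction are
# flagged so the affected tasks can be reprocessed.
def audit(host_manifest, brx_manifest):
    host, used = set(host_manifest), set(brx_manifest)
    return {
        "missing_from_brx": host - used,    # delivered but never processed
        "unknown_to_hosts": used - host,    # processed but no host reported it
        "ok": host == used,                 # interval can be approved
    }
```

Because the comparison uses metadata (name, size, last-updated time) rather than file contents, it stays cheap enough to run once per rotation interval.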
This information may have different service level agreements (e.g., 2 hours for paid vs. 8 hours). Sharding may be performed among pools of servers that are isolated from each other and may be working separately on paid data and other data. An auditor module may be dedicated to the paid data auditing and another to the other data auditing. Each auditor blesses, or approves, its own data type. This way, the blessing, or approval, of paid data stats is not blocked due to some problems in the non-paid (other) data auditing.

Amazon Elastic MapReduce (Amazon EMR) is a service from Amazon in which a user can specify a need for a number of machines. The user can pass a Pig script to the EMR; the Pig compiler will then transform the script into a series of jobs that extract data and act upon it.

From time to time, in addition to the previously discussed files, the ad servers push various other data files into the cloud. These files include information that is not related to bidding or impressions, but includes information that may be beneficial for getting a better understanding of ad campaign effectiveness and the overall operation of the media ad insertion system. For example, the data may include geographical information (geo) of ad delivery, e.g., which viewers in which area were delivered how many ads. As another example, the data may include viewer delivery identities so that unique impressions can be calculated. The data may also include segmentation data (e.g., user profiles). This data is stored into a cache access log. Some of the data may be re-used. A module called "EMR systems" may be used to run a job locally on the cached data. The EMR will instantiate and execute a job using a PIG script. The cloud based mechanism may move the files to be used to a Hadoop file system (HDFS), crunch the data and write it back into the cloud. One advantageous aspect in which the EMR processing helps is being able to identify "uniques" from the archived data.
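The paid/other isolation described above can be sketched as a routing rule that assigns each record to a shard pool by business impact. The field and pool names here are hypothetical, not from the source.

```python
# Sketch: revenue-impacting ("paid") data goes to its own isolated pool so its
# auditing and tighter SLA are never blocked by problems in non-paid ("other")
# data such as error pixels and segment pixels.
def shard_for(record):
    return "paid" if record.get("revenue_impacting") else "other"

def route(records):
    pools = {"paid": [], "other": []}
    for rec in records:
        pools[shard_for(rec)].append(rec)
    return pools
```

Each pool can then be sized and audited independently, matching its own service level agreement (e.g., 2 hours for paid vs. 8 hours for other).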
A unique represents a set of data that is (uniquely) identified, e.g., ad revenue during a certain time window (e.g., the last month). In other words, the data has to be de-duplicated, or made unique, by comparing ad data from different time periods. As previously discussed, files are rotated and data is analyzed in chunks of data intervals. However, when data that does not belong to one specific ad delivery data file is to be analyzed, the above-discussed EMR technique can be advantageously used based on the data files satisfying the search window. For example, it is not beneficial for a video ad insertion platform provider to generate billing information multiple times for a single video ad display to a viewer. Because the ad delivery data files by themselves do not contain any information about ad delivery data files in other time intervals, a process such as the above discussed EMR process, which operates outside of the intervals, may be beneficial. Raw data may be stored incrementally, while the unique calculation may elastically stretch over multiple intervals of ad delivery data file rotation. For example, multiple servings of the same advertisement to the same person during two different intervals may be detected and harmonized into a single "unique."

The above-discussed system may be deployed in real life to facilitate and track video advertisement placement over the Internet. The Internet may cover an entire nation, or may extend to larger geographic areas, up to covering the entire world. In some implementations, a 15 minute period may be used to rotate the ad delivery data files that are generated by the ad servers. A similar period (e.g., 15 minutes or some other time interval) may be used to rotate the configuration files that are transmitted by the trafficker to the ad servers. Each ad server may record hundreds of thousands of impressions (video ad deliveries) in its ad delivery data file.
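The cross-interval "uniques" computation described above amounts to de-duplicating delivery records on an identity key across rotation intervals. This is a sketch; the (viewer_id, ad_id) key and the field names are assumptions, not details from the source.

```python
# Sketch: impressions from several rotation intervals are merged and
# de-duplicated so a single delivery that appears in two ad delivery data
# files is counted (and billed) only once.
def unique_impressions(*interval_files):
    seen = set()
    for records in interval_files:
        for rec in records:
            seen.add((rec["viewer_id"], rec["ad_id"]))
    return len(seen)
```

This is the kind of cross-file join that the per-interval pipeline cannot do on its own, which is why the text delegates it to an EMR job operating over all files in the search window.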
Every fifteen minutes, thousands (e.g., 5K to 15K) of ad servers may write their own ad delivery data files. The ad data infrastructure mechanism therefore may need to process several million line items on a per-fifteen-minute basis.

As described in this document, the video advertisement insertion system may be implemented in several stages as illustrated in FIG. 2, e.g., the barrier dispatcher modules 203, the barrier process module 206, the Brx log modules 208, the auditor 218, and so on. In some implementations, the ad data infrastructure mechanism 106 comprises a plurality of processing stages, as discussed above, each stage comprising multiple modules for performing certain tasks, wherein tasks to a given processing stage are assigned by a corresponding load balancer (LB) 204. Each intermediate processing stage (e.g., the barrier dispatcher 203, the barrier process 206) receives the results of operations of a preceding processing stage and provides results to a subsequent processing stage, with a last processing stage forwarding its results for storage in a database 216.

It will be appreciated that the above-discussed system architecture provides several operational advantages. For example, the geographical reach of a video advertisement insertion system could be widespread, spanning across a continent. The use of a cloud infrastructure, such as Amazon's S3, provides geographic ubiquity and data backup/transfer features to the ad data infrastructure. The use of intermediate load balancing stages (the load balancers themselves could be instantiated as resources from the cloud computing service) allows scaling of resources deployed at each stage. In some implementations, the type of computing resources used at each stage may be different.
For example, the Brx log 208 may perform a significant amount of number crunching, e.g., data compares, subtraction, addition, etc., while the barrier process may perform a significant amount of file transfers and may thus need high speed data input/output bandwidth. An operator can monitor the performance of each stage, e.g., the time taken for data processing at each stage, and accordingly easily deploy resources matching the needs by instantiating them from the cloud. In one advantageous aspect, a video advertisement insertion service provider can thus replace capex (e.g., the need to buy and maintain in-house several computing platforms of different capabilities to meet the peak demand of each stage) with opex (i.e., renting or not renting computational resources from a cloud computing service, based on the current load on the system).

In another advantageous aspect of the above-disclosed platform comprising multiple pipelined stages, a video advertisement insertion service provider can mix-and-match cloud computing resources with dedicated "in-house" resources. For example, some computational stages (e.g., ad servers 202 and barrier dispatchers 203) may communicate with each other by copying files (e.g., ad delivery data files or configuration files) to and from the computational resource cloud 230. One advantageous feature is that the data used to keep these stages lock-stepped is not lost and can be recovered from any machine anywhere by leveraging the distributed nature of a cloud computing service. On the other hand, communication via cloud based file read/writes may not be desirable for certain stages, or for sharding and distributing computational tasks among different computational platforms at each stage. This allocation of resources may therefore be performed using local control of sharding tasks, which may then be executed on local dedicated machines or resources from the cloud.
FIG. 3 is a block diagram description of a system 800 within the ad data infrastructure for performing analysis of additional data and configuration files. As discussed previously, the ad server 202 may from time to time upload locally stored files to the cloud 230. Below is one example sequence of message transfers depicted in FIG. 3.

801. Files are uploaded from ad servers into the cloud, including a PIG script.

802. A cron job (which tracks time) kicks in at some time. EMR caches the data to be used for running a PIG script.

803. EMR communicates with the cloud based web service using a pre-defined API.

804. AWS stores data to HDFS.

805. The HDFS machine fetches the appropriate data from the cloud.

806. The HDFS writes the results back to the cloud.

A module called the api.rpt module may report that the results are available. As can be seen, resources from the cloud can be utilized to produce operational parameters such as the geo-distribution of ad requests and impressions per site requests, the number of unique impressions (or video ads) delivered, segment data (consumer profile), and so on.

By making the relevant data available through the cloud for processing, several non-obvious operational advantages can be gained. For example, in a pipelined video ad delivery data processing system, such as described in U.S. patent application Ser. No. ______, entitled "Audited Pipelined distributed system for video advertisement exchanges," having Attorney Docket No. 77693-8002.US01, concurrently being filed herewith, which is incorporated by reference herein in its entirety, cloud computing resources may be instantiated or used on an "as needed" basis. The use of distributed computing resources as disclosed above streamlines the use of cloud computing resources by being able to optimize data and file movement in the cloud so that each pipelined stage is able to meet target time budgets.
FIG. 4 depicts an example architecture of a video advertisement insertion system in which video advertisements are inserted into content being browsed by a user using, e.g., the previously described bidding exchange technique. Only a few operational details are depicted in FIG. 4 for clarity. From left to right of FIG. 4, a video player 402 (e.g., in a user device) communicates with the ad server 202, using protocols 404. Referring to FIG. 4, the video player 402 on a user device receives VAST formatted advertisement information 404. The video player 402 may issue a request to the ad serving subsystem (e.g., ad server 202). The video player 402 may be a plugin or a standalone application. One example of an advertisement would be a pre-roll advertisement. The video player's request may identify itself with a site placement identification, by which the ad serving system becomes aware of a location of the video player. This may be implemented as a specific number (e.g., 12345, which is understood as a site placement id). The site placement ID is provided by the ad serving system to the publisher that controls the video player. The publisher is then provided with a VAST document. It contains information about impression pixels to be fired, and so on. A typical VAST document may span two to five internet protocol (IP) packets. The Real Time Ad Serving (RTAS) may be a subsystem within the ad server system and provides this VAST document. The ad server system also includes one or more Media Handling Engines (MHEs). Each MHE handles a portion of the load going to the system. RTAS may use MHEs using a load balancing technique such as round robin. The RTAS and MHEs may be implemented on the same platform or different platforms, in the same geographic location or different geographic locations. The MHEs take in a list of line items, the geo location from the IP address, how often the device has seen certain ads, and other information that is included in a cookie received from the video player's device.
The ad server module also has access to a configuration file that specifies attributes of line items, how they should be targeted, and so on. Within each MHE, the MHE runs auctions based on the line items. The result of the auction is returned as two prices. In a first price auction, bids are received from, e.g., 5 bidders; each gives a price, and the best is selected. In a second price auction, bidders give a bid and a maximum bid they will give. The winner will be a penny above the maximum of the other people's bids or maximum bids. RTAS collects the top two prices from all MHEs, and then gets a final bid price. The winner and the bid price are written down into a bid file. The MHEs now generate a VAST document that is appropriate for the video player and is based on the winning bid. This VAST document is passed to RTAS (when to fire which impression, etc.).

Impression pixels: a bid is not sufficient to know whether the ad platform should be paid. The video player, at the right time, fires the impression pixels, which indicate to the advertisement system that the advertisement was actually consumed. When and where to insert the impression pixels may be determined by the advertiser or may also be assisted by the ad platform using a shim.

The BRX servers receive impressions, parse them and generate log files based on them. The impressions are stateless. Correlation of log files is an important aspect. For example, the system may receive impressions for which there were no bids. This may indicate, e.g., some type of fraud or other error occurring in the system. Or there may be accidental duplication of impressions. Conversely, there might be a bid without an impression (e.g., the user turned off the video player), which may mean no billing.

Each module may include an archiver process. The files are rotated every 15 minutes. The archiver process uploads the file to the cloud. Each file may be in the 100 to 200 Mbyte range.
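The two auction types described above can be sketched as follows. This is a simplified model in which each bid is a (bidder, price) pair and, in the second-price case, the price is treated as the bidder's maximum bid; the penny increment follows the text.

```python
# Sketch of the first-price and second-price auctions run over line items.
def first_price(bids):
    """Winner is the highest bidder and pays their own bid."""
    return max(bids, key=lambda b: b[1])

def second_price(bids):
    """Winner is the highest bidder but pays a penny above the best
    competing (runner-up) bid."""
    ordered = sorted(bids, key=lambda b: b[1], reverse=True)
    winner, runner_up = ordered[0], ordered[1]
    return winner[0], round(runner_up[1] + 0.01, 2)
```

In the full system, each MHE would run such an auction over its share of line items, and RTAS would then combine the top two prices from all MHEs into the final bid price.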
The archiver process sends a message to the checkin box 220 to indicate that it has finished its archiving work. The archiver process also sends a message to the barrier boxes; the message lists the file uploaded. In some implementations, the message is sent directly to the checkin box only if no files exist on disk (meaning no upload to the cloud happened). This usually happens when the box is idle and not in rotation/live traffic. Otherwise, the message is sent to the barrier dispatcher box. When the dispatcher is done enumerating/mapping the IDs in that file, a message is sent to checkin with the host information that sent the message.

The barrier boxes create a bunch of outgoing messages that split up the work, e.g., for one line item go here, for another line item go somewhere else. For each line item, a specific instruction may be provided about what needs to be done; e.g., one instruction may be to compute all impressions for a particular line item. One task performed may be de-duping of the messages. As an example, fifteen ad server boxes may each receive responses from each ad server site. Thousands of messages may be de-duped to remove identical duplicate entries. A check-in is performed for each box to see if it has checked in. Once all messages have been de-duped and all machines have checked in, a start message is fired (barrier process). De-duping may only touch metadata, not the log files themselves. For example, at a given time, 10,000 line items may be used in the system. Messages may be of the type "process this line item for this interval," "handle all impressions for this video," etc.

BRX log modules receive messages from the barrier process boxes. A load balancer may provide load balancing for tasks propagated from the barrier process to the BRX log. Every 15 minutes, e.g., the BRX logs may implement 10 different queries on the 100s of Megabytes of data. The BRX log generates a CSV or SQL file of results (e.g., 1 to 150 line items).
A line item may have one or more creatives associated with it. The log files may be generated per 15 minutes per line item. In the network, 100 to 200 million responses may be received. BRX logs themselves access the cloud service to receive the ad delivery data files. The cloud infrastructure may be used to ensure wide geographic availability of files, with backup copies available in the cloud. The BRX log may generate files at the rate of processing 100 GB per day. The results from computations may be small (a few hundred megabytes), but producing them requires processing a large amount of data on a tight timing schedule (e.g., once every 15 minutes, or the period of rotation of files). In one exemplary aspect, a method of operating a video advertisement (ad) system is disclosed. The method includes controlling an ad server configured to receive a plurality of ad requests from a plurality of viewer devices, provide a plurality of ad responses to the plurality of viewer devices, generate an ad delivery data file that includes information about delivery of ads to the plurality of viewer devices, and copy the ad delivery data file to a distributed computing cloud. The method also includes controlling an ad data infrastructure mechanism to copy the cloud-based ad delivery data file into a local memory and process, using the plurality of ad responses, the copied ad delivery data file to generate a first billing data comprising information about ads that were placed. The method further includes controlling the ad data infrastructure mechanism to receive a configuration file from the ad server, verify that all items in the configuration file from the ad server were used in the generation of the first billing data, and remove non-verifiable items from the first billing data to generate a final billing data.
FIG. 5 is a flowchart representation of a process 500 of operating a digital media advertisement system. At 502, a plurality of files are received from a plurality of ad servers, each file including a plurality of line items, wherein each line item corresponds to an ad delivery instance. At 504, the plurality of files are passed through a pipeline of multiple processing stages separated by intervening load balancers, wherein each processing stage receives its input data by reading from a cloud service and each processing stage writes its output data to the cloud service. The processing tasks are sharded across multiple hardware platforms in each processing stage, the sharding based on a logical partitioning of the corresponding input data.
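The sharding just described can be illustrated with a small sketch: each line item is mapped to a hardware shard by a stable hash of its logical partition key, so every processing stage routes the same line item to the same shard. The function names and the use of SHA-1 as the stable hash are assumptions for illustration, not details from the source.

```python
# Illustrative sketch of sharding by logical partition of the input data.
# A stable digest is used instead of Python's salted hash() so the
# line-item -> shard mapping is identical across processes and stages.

import hashlib

def shard_for(line_item_id, num_shards):
    """Map a logical partition key to a shard index, deterministically."""
    digest = hashlib.sha1(str(line_item_id).encode()).hexdigest()
    return int(digest, 16) % num_shards

def partition(line_items, num_shards):
    """Group line items into per-shard work lists for one pipeline stage."""
    shards = {i: [] for i in range(num_shards)}
    for item in line_items:
        shards[shard_for(item, num_shards)].append(item)
    return shards
```

Because the mapping depends only on the key and the shard count, any stage (or a load balancer in front of it) can recompute it independently without coordination.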
In some implementations, results of the processing of the plurality of files are generated a headroom interval (e.g., 5 minutes) before expiration of a target time interval (e.g., 15 minutes) after the plurality of files are received from the plurality of ad servers. A measure of how much earlier the results are ready (e.g., 5 minutes earlier than the 15-minute budget) is used to gauge load on the system and thus to control the resources allocated to the pipeline of multiple stages.
FIG. 6 is a flowchart representation of a process 600 of computing operational parameters of a video advertisement delivery system using a distributed computing cloud. The process 600 includes transferring a plurality of data files from a plurality of geographically distributed advertisement servers to a first storage resource in the distributed computing cloud (602), providing a script-based program to the distributed computing cloud (604), executing, using resources from the distributed computing cloud, the script-based program to perform analysis of the plurality of data files (606), and storing results of the analysis on a second storage resource, wherein the results include at least one operational parameter of the video advertisement delivery system (608). The operational parameters may include one or more of geographic data, segment data and unique impressions data.
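The headroom measurement described above can be sketched as a simple controller: the time to spare inside the target interval acts as an inverse load signal, and the worker count for the pipeline is nudged up or down accordingly. The thresholds and the one-worker-at-a-time scaling step are assumed values for illustration, not from the source.

```python
# Illustrative headroom-based resource controller. If the pipeline
# finishes well inside its budget (e.g., 5 minutes to spare out of a
# 15-minute interval), the system is lightly loaded and capacity can be
# trimmed; shrinking headroom signals rising load. Thresholds assumed.

def headroom_minutes(target_interval_min, elapsed_min):
    """Time to spare between finishing the work and the interval expiring."""
    return target_interval_min - elapsed_min

def adjust_workers(current_workers, headroom_min, low_water=2, high_water=7):
    """Scale the stage's worker count from the observed headroom."""
    if headroom_min < low_water:    # nearly missed the budget: add capacity
        return current_workers + 1
    if headroom_min > high_water:   # lots of slack: release capacity
        return max(1, current_workers - 1)
    return current_workers
```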
In some implementations, a system for computing operational parameters of a video advertisement delivery system using a distributed computing cloud includes a distributed computing cloud, a module that transfers a plurality of data files from a plurality of geographically distributed advertisement servers to a first storage resource in the distributed computing cloud, a script module that provides a script-based program (e.g., a Pig script) to the distributed computing cloud, a computer that executes, using resources from the distributed computing cloud, the script-based program to perform analysis of the plurality of data files, and a storage module (e.g., HDFS) that stores results of the analysis on a second storage resource, wherein the results include at least one operational parameter of the video advertisement delivery system. The disclosed and other embodiments and the functional operations and modules described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
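As a toy stand-in for the script-based analysis step above (a real deployment would express this as, e.g., a Pig script running over files in the cloud store), the sketch below scans ad delivery records and derives the operational parameters named earlier: per-geography counts, per-segment counts, and unique impressions. The record field names are assumptions for illustration.

```python
# Toy analysis over ad delivery records, producing the operational
# parameters named in the text: geographic data, segment data, and
# unique impressions. Field names ("viewer", "geo", "segment") assumed.

from collections import Counter

def analyze(records):
    """records: list of dicts like {"viewer": ..., "geo": ..., "segment": ...}."""
    records = list(records)  # allow any iterable; we scan it three times
    geo_counts = Counter(r["geo"] for r in records)
    segment_counts = Counter(r["segment"] for r in records)
    unique_impressions = len({r["viewer"] for r in records})
    return {
        "geo": dict(geo_counts),
        "segments": dict(segment_counts),
        "unique_impressions": unique_impressions,
    }
```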
The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus. A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. While this document contains many specifics, these should not be construed as limitations on the scope of an invention that is claimed or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. 
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variation of a sub-combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Only a few examples and implementations are disclosed. Variations, modifications, and enhancements to the described examples and implementations and other implementations can be made based on what is disclosed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are included to provide further understanding and are incorporated in and constitute a part of this specification, illustrate disclosed embodiments and together with the description serve to explain the principles of the disclosed embodiments. In the drawings:
FIG. 1 illustrates a high level architecture for a video advertisement system.
FIG. 2 is an architectural block diagram of a video advertising data processing platform.
FIG. 3 is a block diagram of a batch report generation system.
FIG. 4 is a block diagram representation of a portion of a video delivery system.
FIG. 5 is a flowchart representation of a pipelined data computing process using cloud based resources.
FIG. 6 is a flowchart representation of a process of computing operational parameters of a video advertisement delivery system using distributed computing cloud.
1. Field of the Invention
The present invention relates to an article of manufacture for forming leads to interconnect semiconductor die with the next level of electronics packaging and a process for manufacturing the same.
2. Background of the Invention
The term interconnects herein refers to the electrical connections between a semiconductor die and the next level of microelectronics packaging, such as lead frames, chip carriers, ceramic substrates, printed circuit boards, etc. Historically this first-level interconnect has been achieved using very fine gold or aluminum wire. Other methods of first-level interconnection which have been used somewhat successfully include processes known as Controlled Collapse Solder Bump, Flipchips, Beam Leads, and "TAB" (Tape for Automated Bonding). This invention expands the field for successful utilization of TAB. TAB in general consists of a very thin copper foil processed to produce a pattern of copper leads or traces which can be used in the same manner as gold or aluminum wires to form interconnections between the semiconductor die and the next level of microelectronic packaging. A semiconductor die typically has interconnect pads exposed on its surface. These pads, called I/O (input/output) pads, are typically termination conductive pads of the aluminum traces which carry the electrical signals internal to the semiconductor die. The surface of the semiconductor die is protected with a very thin passivation layer made for example of silicon dioxide, silicon nitride, or other inert dielectric materials, with windows or vias exposing the I/O pads. On these semiconductor I/O pads, the gold, aluminum, or copper leads are bonded to form an "innerlead bond". The gold, aluminum, or copper leads are also bonded to the next level of microelectronic packaging and this is termed "the outerlead" bond, completing the electrical interconnection.
TAB has been an attractive alternative to gold and aluminum wire bonding in that it consists of a continuous tape of patterns of copper leads, each pattern corresponding to the pattern of I/O pads on the surface of the semiconductor die. This continuous tape has sprocket hole patterns along its edge, much like a motion picture film strip, allowing an automated transport of the tape across the bonding location where the semiconductor die is accurately positioned, and in one step all the copper leads are "innerlead" bonded to the semiconductor die. The semiconductor die at that point is lifted from the bonding location and becomes part of the continuous tape. Many times it is desirable to electrically test the semiconductor die after the innerlead bonds are made but before the die is permanently attached to its next level of packaging. Such testing is possible using a type of TAB tape on which the conductor pattern consists of electrically isolated copper leads with large test pad areas for each lead and a dielectric film under the conductor pattern for support during bonding and testing. After the innerlead bonds are made, the test pads can be contacted to electrically test the die and the innerlead bonds before the outerlead bonds are made. After testing, the continuous tape with the "innerlead" bonded semiconductor die is then fed across a second bonding location where the copper leads, typically in one operation, are excised from the continuous tape, formed if desired, and "outerlead" bonded to the next level of packaging. Prior art TAB tapes having dielectric support have been manufactured so that the dielectric support has been formed in an essentially continuous film bonded to the metal leads with the interface between the metal leads and dielectric support being essentially continuous and planar. 
These TAB tapes have been made by bonding a tape of dielectric support with an adhesive to a tape of metal lead patterns with holes punched in the dielectric for sprockets and for allowing access to the leads for bonding. Alternatively, the dielectric support has been formed on the metal leads by casting a continuous liquid dielectric film over a planar metal tape and then photolithographically etching the lead pattern into the metal tape. Yet another prior art method involves using an electroless copper deposition technique to apply a very thin layer of copper to a planar dielectric film tape which is then photolithographically processed so that the desired lead pattern can be pattern plated, or the entire surface of one side of the dielectric is electroplated to the desired thickness and then photolithographically processed so that the desired lead pattern can be etched into the electroplated metal, typically copper metal. In each of the last three processes, a photolithographic process is required so that certain areas of the dielectric film can be etched so as to open either sprocket holes and/or areas where the copper metal is accessible through the dielectric. Advantages of TAB, besides providing the capability of testing the innerlead bonds, include greater bond strengths, greater lead strengths, improved thermal dissipation and higher conductivity because TAB leads generally have greater cross-sectional area and bonding area than do fine wire leads for comparable bonding environments. Further, the manufacture of TAB leads allows greater control over the patterns of the leads providing greater design flexibility in the semiconductor die since the TAB leads can bond to I/O pads which are smaller and closer together than has been possible using wire bonding techniques. 
However, when TAB is used, because I/O pads on the semiconductor die are somewhat recessed below the surface of the passivation layer, it is necessary to provide a conductive mechanical stand-off, called a "bump", between the I/O pad on the die and the lead of the TAB tape. Without this bump, there is a danger of cracking the passivation layer on the die and destroying its functionality when the lead is bonded to the pad, and/or a danger of the leads contacting the exposed edge of the die and thereby creating electrical shorts. These bumps can be processed onto the die before the dies are separated out of the wafer form. This typically involves various layers of thin film metal depositions over the entire wafer to provide adhesion to the surface of the wafers, a thermal diffusion barrier, and a plateable surface. Then, using a photolithographic process, electroplated bumps are formed, typically of gold, over the I/O pads. A final step involves the removal of the thin film metal deposition layers that are exposed everywhere but under the electroplated bumps. These electroplated bumps can now be innerlead bonded to flat, or planar, TAB copper leads. If the bump is of gold plating, the copper leads of the TAB tape are typically plated with a very thin layer of tin or gold, to allow the formation of a gold-tin eutectic bond or a gold-gold thermocompression bond. Other metallurgies are also used. However, dies with "bumps" formed on the pads are available only to those companies that have control of the wafer fabrication, which is not typically the case. Therefore, alternative bumping techniques are needed to process the bumps as part of the metal leads of the TAB tape. Such techniques include selectively etching the metal to form the bumps on the innerlead tips of the copper leads or photolithographically processing the copper leads to allow the electroplating of bumps onto the innerlead tips of the copper leads. 
These techniques are entirely practical and are being utilized in commercial applications. However, these bumping processes have not been successfully applied to TAB tape using dielectric support for electrical isolation and thus testing of the leads after innerlead bonding. This is due in great part to the difficulties in processing either a selectively etched or electroplated bump on the innerlead tips of the metal leads while there is a continuous film of dielectric support material adjacent to one side of a continuous metal film. In sum, prior art technology provides a utilization of non-testable TAB tape where a bumped semiconductor die is innerlead bonded with a non-bumped planar all copper TAB tape, or a non-bumped semiconductor die is innerlead bonded with a bumped all copper TAB tape. Testable TAB can be used on a bumped semiconductor die that is innerlead bonded with a non-bumped planar TAB tape which has copper leads supported on a dielectric film. To expand the field of successful utilization of testable TAB tape, there is a need for a process for manufacturing a TAB tape which overcomes the difficulties of having a continuous dielectric film mated to a continuous metal film along a planar interface and a further need for a testable bumped TAB tape.
Regardless of how long we need to wait after Election Day until we know the results, they will have “enormous implications on everything, including long-term care,” American Health Care Association / National Center for Assisted-Living President and CEO Mark Parkinson told those listening to LTC Properties’ third-quarter earnings call on Friday. He was a guest speaker at the event. Parkinson knows a thing or two about government, having served in the Kansas House of Representatives and Senate and as governor of the Sunflower State before assuming his role with AHCA/NCAL, which represents 4,000 assisted living buildings and 10,500 skilled nursing facilities. And he has walked in the shoes of providers, having built, developed, owned and operated assisted living communities and skilled nursing facilities with his wife, Stacy, before taking on those government roles. And he brings the perspective of having been a member of both of the country’s major political parties, too. So what are the enormous implications Parkinson sees?
Aid
One involves coronavirus-related relief for assisted living and skilled nursing operators. Although the outcome of the election won’t affect whether another stimulus bill will be passed and signed into law — Parkinson said he agrees with conventional wisdom, which suggests “a 100% chance” that the future has in store more relief — the election could affect the timing and amount of aid, he said. “If Republicans retain one part of the apparatus, whether it’s the presidency or the Senate, the stimulus bill is very likely to be of the size that was being discussed when the discussions fell apart this week, somewhere around $1.8 trillion to $2.2 trillion,” Parkinson said Friday.
“And of that, significant additional funds will be added to the CARES Act funding that will then be available for both skilled nursing and assisted living.” If Democrats win the presidency and control over both chambers of Congress, however, then Congress may wait until a new president takes office in January to pass a new stimulus bill, he said. But the amount in the legislation will be higher, Parkinson predicted. “Whenever there is a stimulus bill, if the Democrats take over, it will be larger than the $1.8 trillion to $2.2 trillion. It’ll be more like the $3 trillion bill that they passed back in August, and that may just be the beginning,” he said. “They are clearly wanting to spend significant amounts of money to stimulate the economy [and] to fight COVID.”
Liability protections
Coronavirus-related liability protection is another area that could be affected by the election results, Parkinson said, adding that two schools of thought exist on the subject. “As long as the Republicans maintain one part of the federal apparatus, whether it’s the presidency or the Senate … I think that the Republicans will continue to maintain that any future stimulus bill needs to have the kind of liability protection that all healthcare providers need to function going forward.” On the other hand, he said, “If there is a Democratic sweep next Tuesday, there is the possibility that this liability discussion will be set aside,” although “the other school of thought is that if the Democrats sweep everything, they have an understanding that for the economy to move forward, there’s got to be some kind of liability protection.” If Democrats win big in the election, Parkinson said, it could be that a smaller stimulus bill will be passed during the lame duck session and will contain liability protections, followed by the passage of a larger stimulus bill in January or February.
Federal regulation
Another big question for assisted living, regardless of who wins the presidency or Senate and House races, Parkinson said, is whether the industry’s request for federal help with coronavirus-related expenses will lead to federal regulation. “It’s a very good question, and it’s a question that I think the boards of all of the assisted living associations asked themselves prior to making the decision to lobby for the funds,” he said. “Ultimately, the decision was made that the need for the funds outweighed any risk of potential additional regulation.” Although Democrats generally are more interested in regulation than are Republicans, Parkinson said, even if Democrats sweep national contests, “full-blown regulation of assisted living that looks anywhere near like what skilled nursing is regulated under is unlikely. I don’t think that that will occur.” Instead, he said, discussions most likely would focus on topics such as whether states should require assisted living communities to have stockpiles of personal protective equipment or infection control programs.
Testing and vaccination
What else does Parkinson think the future holds? He predicts “robust testing” and “a clear understanding in long-term care buildings who has COVID and who doesn’t” over the next few months. Test accuracy and pricing already are improving, and long-term care is being prioritized for the tests, Parkinson said. “And then, hopefully, we will get into the vaccine phase, and we’ll get people vaccinated,” he said.
If a COVID-19 vaccine is approved by the end of the year, Parkinson said, “there is a possibility that all of our residents and all of our staff will be vaccinated in January and in February and it will create this wonderful time that I’m certainly looking forward to, where we would be able to say to people that the safest place in the country for an older person right now is in one of the long-term care facilities that are out there.” But “as all of this is going, we’ll continue to receive the funding that we need from the state and federal level to keep our heads above water,” he said.
https://www.mcknightsseniorliving.com/home/columns/editors-columns/parkinson-ponders-post-election-landscape/
The University of California, Davis, puts the highest priority on the safety and security of its students, faculty, staff and surrounding communities. Strong relationships of mutual trust between our police department and the communities we serve are critical to building and sustaining community trust, effective policing and safe communities. Part of that trust is transparency and sharing of our police policy, which we reflect upon, train on, and regularly update to keep pace with the changing laws and needs of our communities. View or download the UC Davis Police Policy Manual for more details. From Chief Farrow, in the policy manual's preface: "The UC Davis Police Department policy manual is a living document that has been developed in partnership with the UC Davis community. This policy manual serves as a guide to all members of the UC Davis Police Department with the goal of providing the highest level of service to the community."
Updates to UC police policies
The University of California created a Presidential Task Force on Universitywide Policing to examine the policies of UC police departments, including those related to investigative practices, use of force, and training. This process is an effort to strengthen the UC departments’ practices and their relationships and interactions with the community. If you would like to get involved locally, please feel free to reach out to the UC Davis Police Department Outreach Team or the UC Davis Police Accountability Board.
https://police.ucdavis.edu/professional-standards/policies
The world-famous actor Sarah Bernhardt (1844–1923) cross-dressed to play the role of Hamlet in 1899. This postcard shows a photomechanical print of Bernhardt in the graveyard scene, giving the celebrated ‘Alas, poor Yorick’ speech. For the role, she used a human skull which had been given to her by the French novelist Victor Hugo. Hamlet played by a woman: what were the critics’ views? Hannah Manktelow discusses Bernhardt’s performance: A canny self-promoter, Bernhardt cultivated her image as a mysterious, exotic outsider. She claimed to sleep in a coffin and encouraged the circulation of outlandish rumours about her eccentric behaviour. In 1899, when Bernhardt was an established theatrical coach, manager and performer, she took the controversial decision to play Hamlet. Her production was an immediate success, touring extensively across Europe and America. In stark contrast to the melancholic interpretation of English tradition, Bernhardt’s Hamlet was youthful, energetic and volatile. She claimed to be more suited to the role than any man, arguing that “a boy of twenty cannot understand the philosophy of Hamlet”, while the older actor “does not look the boy, the light carriage of youth with … mature thought”. The critics, however, were not so sure. Many felt that Bernhardt and the actresses she inspired were fundamentally incapable of understanding male drives and emotions. Max Beerbohm wrote that [c]reative power, the power to conceive ideas and execute them, is an attribute of virility: women are denied it. In so far as they practise art at all, they are aping virility, exceeding their natural sphere. Never does one understand so well the failure of women in art as when one sees them deliberately impersonating men upon the stage. This extract is from Hannah Manktelow’s chapter entitled, ‘“Do you not know I am a woman?”: The Legacy of the First Female Desdemona, 1660’ in Shakespeare in Ten Acts (London: The British Library, 2016), pp. 94–95. 
- Full title: Photomechanical print of Sarah Bernhardt as Hamlet, full-length portrait, standing, facing left, holding and looking at skull
- Created: 1899, London
- Format: Photograph / Image
- Creator: Lafayette [photographer]
- Usage terms: © Library of Congress, Prints and Photographs Division, Washington, D.C.
- Held by: Library of Congress
- Shelfmark: 3g06529u
https://www.bl.uk/collection-items/postcard-of-sarah-bernhardt-as-hamlet-in-1899
Globally, more than 3.2 Gt (billion tonnes) of grains, pulses and oilseeds (hereinafter collectively referred to as grains) are produced annually and stored at many points after harvesting, prior to being delivered to processors and domestic and international consumers. Post-harvest losses range from 2% in North America to 30% in developing countries. When spoilage occurs in an individual storage bin, 100% of the grain can become unfit for human consumers and sometimes even unfit as animal feed. Drs. Jayas and Jian have been working together for over 15 years towards the development of mathematical models as management tools for reducing quantitative and qualitative losses in stored grains. Their major contributions are models of insect movement and detection in stored grains and models of grain drying.
https://csbe-scgab.ca/about/awards/john-ogilvie-research-innovation-award/digvir-jayas-and-fuji-jian
Comments on the natural drink called Kombucha for Rheumatoid Arthritis Do you have a recipe you could share? dawn ? Yes I do. You have to buy one bottle of raw unpasteurized kombucha to grow a scoby. The one pictured is a good one to grow from. Then you brew regular black tea. Use 4 teabags and 7 cups boiling water, 1/2 cup white sugar and 1 cup bottled kombucha. Make your tea using about 2 cups of boiled water, half cup sugar and let steep for 20 mins and then remove teabags and add the rest of the filtered water. Let this cool until room temp and then add your store bought kombucha to the cooled sugar/tea mixture. Cover with a coffee filter and let sit 2 to 4 weeks. It depends on how warm it is as to how fast the scoby will grow. I will add a link to the page with all the details and I will put it on a new post on this page. Ty dawn ...
https://askorreply.com/comments-on-the-natural-drink-called-kombucha-for-rheumatoid-arthritis
Q: Conjugate addition vs Electrophilic addition I was just doing an organic chemistry problem from an online source recently, on the topic of conjugate additions (or 1,4-additions). Below is an image illustrating the problem. I chose the upper route while the solution manual claims that the answer is the lower route. My rationale for choosing the upper route was that perhaps the alkene is not nucleophilic enough to react feasibly in an electrophilic addition with $\ce{HCl}$. This is due to the presence of the electron-withdrawing carbonyl. Thus, it is important that we first get rid of the carbonyl by using the ethyl Grignard. Then, we can easily add the $\ce{HCl}$ for the electrophilic addition. Upon reconsidering my reasoning, perhaps I was too narrow-minded in saying that the $\ce{HCl}$ must add in the fashion of electrophilic addition. Perhaps conjugate addition could also take place. And also, by choosing the upper route, I cannot be assured that my chlorine atom would end up in the desired position, due to considerations of the stability of the carbocation intermediate. Thus, I agree that my choice is incorrect. The correct choice is the lower route, and this does ensure that the $\ce{Cl}$ is in the right place in the final product, as it adds via a conjugate fashion. But how is the writer of the solution so sure that the $\ce{HCl}$ would add in a conjugate fashion, instead of via an electrophilic addition? A: Grignards, unless modified with Cu(I) salts, add 1,2 not 1,4, so the EtMgBr will add to the ketone to give the t-alcohol, leaving the double bond intact. Consider then the HCl step: if it is strong enough to protonate the double bond, leading to Cl addition, then it will protonate the t-alcohol in preference, leading to elimination or t-Cl formation. So the answer is to do the reaction on the double bond first, then the Grignard addition.
Protonation will occur at the carbonyl oxygen, but you have to remember that the double bond is in conjugation with the carbonyl system. This polarises the double bond, leading to Cl− attack at the end of the system. The Grignard is more nucleophilic than basic, so you will get mostly addition to the ketone, though some deprotonation/elimination may occur depending on the reaction conditions.
Marginalized relationships: the impact of social disapproval on romantic relationship commitment. Personality and social psychology bulletin. 32 (1): 40-51; 2006. (English). [Record Source: PubMed] Little research has examined the effects of prejudice and discrimination on people's romantic relationships. The authors explored whether belonging to a socially devalued relationship affects consequential relational phenomena. Within the framework of the Investment Model, the authors (a) tested the association between perceived relationship marginalization and relationship commitment, (b) compared investment levels of individuals involved in marginalized versus nonmarginalized relationships, and (c) explored ways in which couples may compensate for decreased investments to maintain high commitment. Consistent with hypotheses, marginalization was a significant negative predictor of commitment. Moreover, individuals in marginalized relationships invested significantly less than individuals in nonmarginalized relationships. Despite investing less, marginalized relationship partners were significantly more committed than were their nonmarginalized counterparts. Thus, marginalized partners appeared to compensate for their reduced investments, with evidence suggesting that compensation occurs via reduced perception of relationship alternatives rather than via increased perception of relationship satisfaction.
https://www.ethicshare.org/node/552612
# Cosine similarity

In data analysis, cosine similarity is a measure of similarity between two sequences of numbers. For defining it, the sequences are viewed as vectors in an inner product space, and the cosine similarity is defined as the cosine of the angle between them, that is, the dot product of the vectors divided by the product of their lengths. It follows that the cosine similarity does not depend on the magnitudes of the vectors, but only on their angle. The cosine similarity always belongs to the interval [−1, 1]. For example, two proportional vectors have a cosine similarity of 1, two orthogonal vectors have a similarity of 0, and two opposite vectors have a similarity of −1. The cosine similarity is particularly used in positive space, where the outcome is neatly bounded in [0, 1]. For example, in information retrieval and text mining, each word is assigned a different coordinate and a document is represented by the vector of the numbers of occurrences of each word in the document. Cosine similarity then gives a useful measure of how similar two documents are likely to be, in terms of their subject matter, and independently of the length of the documents. The technique is also used to measure cohesion within clusters in the field of data mining. One advantage of cosine similarity is its low complexity, especially for sparse vectors: only the non-zero coordinates need to be considered. Other names for cosine similarity include Orchini similarity and Tucker coefficient of congruence; the Otsuka–Ochiai similarity (see below) is cosine similarity applied to binary data.
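These special cases are easy to check numerically. Below is a minimal Python sketch (the function name is mine, not from the article):

```python
import math

def cosine_similarity(a, b):
    # dot(A, B) / (|A| * |B|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Proportional vectors -> 1, orthogonal -> 0, opposite -> -1.
print(round(cosine_similarity([1, 2], [2, 4]), 9))    # 1.0
print(round(cosine_similarity([1, 0], [0, 1]), 9))    # 0.0
print(round(cosine_similarity([1, 2], [-1, -2]), 9))  # -1.0
```

Because only the angle matters, scaling either argument by a positive constant leaves the result unchanged, which is the magnitude-invariance described above.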
## Definition

The cosine of two non-zero vectors can be derived by using the Euclidean dot product formula:

A · B = ‖A‖ ‖B‖ cos θ

Given two vectors of attributes, A and B, the cosine similarity, cos(θ), is represented using a dot product and magnitude as

cos(θ) = (A · B) / (‖A‖ ‖B‖) = Σ_i A_i B_i / (√(Σ_i A_i²) √(Σ_i B_i²)),

where A_i and B_i are components of vectors A and B respectively. The resulting similarity ranges from −1 meaning exactly opposite, to 1 meaning exactly the same, with 0 indicating orthogonality or decorrelation, while in-between values indicate intermediate similarity or dissimilarity. For text matching, the attribute vectors A and B are usually the term frequency vectors of the documents. Cosine similarity can be seen as a method of normalizing document length during comparison. In the case of information retrieval, the cosine similarity of two documents will range from 0 to 1, since the term frequencies cannot be negative. This remains true when using tf–idf weights. The angle between two term frequency vectors cannot be greater than 90°. If the attribute vectors are normalized by subtracting the vector means (e.g., A − Ā), the measure is called the centered cosine similarity and is equivalent to the Pearson correlation coefficient. For an example of centering: if A = [1, 2]^T, then Ā = [3/2, 3/2]^T, so A − Ā = [−1/2, 1/2]^T. The term cosine distance is commonly used for the complement of cosine similarity in positive space, that is

D_C(A, B) = 1 − S_C(A, B),

where S_C denotes the cosine similarity. It is important to note, however, that the cosine distance is not a proper distance metric: it does not have the triangle inequality property (or, more formally, the Schwarz inequality) and it violates the coincidence axiom.
One way to see this is to note that the cosine distance is half of the squared Euclidean distance of the L2 normalization of the vectors, and squared Euclidean distance does not satisfy the triangle inequality either. To repair the triangle inequality property while maintaining the same ordering, it is necessary to convert to angular distance or Euclidean distance. Alternatively, the triangular inequality that does work for angular distances can be expressed directly in terms of the cosines; see below.

### Angular distance and similarity

The normalized angle, referred to as angular distance, between any two vectors A and B is a formal distance metric and can be calculated from the cosine similarity. The complement of the angular distance metric can then be used to define an angular similarity function bounded between 0 and 1, inclusive.

When the vector elements may be positive or negative:

angular distance = θ / π = arccos(cos(θ)) / π
angular similarity = 1 − angular distance

Or, if the vector elements are always positive:

angular distance = 2θ / π = 2 · arccos(cos(θ)) / π
angular similarity = 1 − angular distance

Unfortunately, computing the inverse cosine (arccos) function is slow, making the use of the angular distance more computationally expensive than using the more common (but not metric) cosine distance above.

### L2-normalized Euclidean distance

Another effective proxy for cosine distance can be obtained by L2 normalisation of the vectors, followed by the application of normal Euclidean distance. Using this technique each term in each vector is first divided by the magnitude of the vector, yielding a vector of unit length. Then, it is clear, the Euclidean distance over the end-points of any two vectors is a proper metric which gives the same ordering as the cosine distance (a monotonic transformation of Euclidean distance; see below) for any comparison of vectors, and furthermore avoids the potentially expensive trigonometric operations required to yield a proper metric.
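Both constructions can be sketched in a few lines of Python (an illustrative snippet; the helper names are my own):

```python
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def angular_distance(a, b):
    # arccos of the similarity, normalized by pi (elements may be +/-).
    return math.acos(max(-1.0, min(1.0, cos_sim(a, b)))) / math.pi

def l2_normalized_euclidean(a, b):
    # Divide each vector by its magnitude, then take ordinary Euclidean distance.
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.sqrt(sum((x / na - y / nb) ** 2 for x, y in zip(a, b)))

# Orthogonal vectors sit at angular distance 0.5 (90 of 180 degrees).
print(angular_distance([1, 0], [0, 1]))  # 0.5

# The L2-normalized Euclidean proxy ranks pairs exactly like cosine distance.
pairs = [([1, 2], [2, 3]), ([1, 2], [-3, 1]), ([1, 0], [0, 1])]
by_cosine = sorted(pairs, key=lambda p: 1 - cos_sim(*p))
by_euclid = sorted(pairs, key=lambda p: l2_normalized_euclidean(*p))
print(by_cosine == by_euclid)  # True
```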
Once the normalisation has occurred, the vector space can be used with the full range of techniques available to any Euclidean space, notably standard dimensionality reduction techniques. This normalised form distance is often used within many deep learning algorithms.

### Otsuka–Ochiai coefficient

In biology, there is a similar concept known as the Otsuka–Ochiai coefficient named after Yanosuke Otsuka (also spelled as Ōtsuka, Ootsuka or Otuka, Japanese: 大塚 弥之助) and Akira Ochiai (Japanese: 落合 明), also known as the Ochiai–Barkman or Ochiai coefficient, which can be represented as:

K = |A ∩ B| / √(|A| × |B|)

Here, A and B are sets, and |A| is the number of elements in A. If sets are represented as bit vectors, the Otsuka–Ochiai coefficient can be seen to be the same as the cosine similarity. In a recent book, the coefficient is misattributed to another Japanese researcher with the family name Otsuka. The confusion arises because in 1957 Akira Ochiai attributed the coefficient only to Otsuka (no first name mentioned) by citing an article by Ikuso Hamai (Japanese: 浜井 生三), who in turn cites the original 1936 article by Yanosuke Otsuka.

## Properties

The most noteworthy property of cosine similarity is that it reflects a relative, rather than absolute, comparison of the individual vector dimensions. For any constant a and vector V, the vectors V and aV are maximally similar. The measure is thus most appropriate for data where frequency is more important than absolute values; notably, term frequency in documents. However, more recent metrics with a grounding in information theory, such as Jensen–Shannon, SED, and triangular divergence, have been shown to have improved semantics in at least some contexts.

Cosine similarity is related to Euclidean distance as follows.
Denote Euclidean distance by the usual ‖A − B‖, and observe that

‖A − B‖² = (A − B) · (A − B) = ‖A‖² + ‖B‖² − 2(A · B)

by expansion. When A and B are normalized to unit length, ‖A‖² = ‖B‖² = 1, so this expression is equal to

2(1 − cos(A, B)).

In short, the cosine distance can be expressed in terms of Euclidean distance as

D_C(A, B) = ‖A − B‖² / 2  when ‖A‖ = ‖B‖ = 1.

The Euclidean distance is called the chord distance (because it is the length of the chord on the unit circle) and it is the Euclidean distance between the vectors which were normalized to unit sum of squared values within them.

Null distribution: For data which can be negative as well as positive, the null distribution for cosine similarity is the distribution of the dot product of two independent random unit vectors. This distribution has a mean of zero and a variance of 1/n (where n is the number of dimensions), and although the distribution is bounded between −1 and +1, as n grows large the distribution is increasingly well-approximated by the normal distribution. For other types of data, such as bitstreams, which only take the values 0 or 1, the null distribution takes a different form and may have a nonzero mean.

## Triangle inequality for cosine similarity

The ordinary triangle inequality for angles (i.e., arc lengths on a unit hypersphere) gives us that

|∠AC − ∠CB| ≤ ∠AB ≤ ∠AC + ∠CB.

Because the cosine function decreases as an angle in [0, π] radians increases, the sense of these inequalities is reversed when we take the cosine of each value:

cos(∠AC + ∠CB) ≤ cos(∠AB) ≤ cos(∠AC − ∠CB).

Using the cosine addition and subtraction formulas, these two inequalities can be written in terms of the original cosines,

cos(∠AC) cos(∠CB) − sin(∠AC) sin(∠CB) ≤ cos(∠AB) ≤ cos(∠AC) cos(∠CB) + sin(∠AC) sin(∠CB).

This form of the triangle inequality can be used to bound the minimum and maximum similarity of two objects A and B if the similarities to a reference object C are already known.
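Both identities can be verified numerically. A small Python check (my own sketch; the upper angle sum is clamped at π so the cosine stays on its decreasing branch):

```python
import math
import random

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

random.seed(0)
a, b, c = (unit([random.gauss(0, 1) for _ in range(8)]) for _ in range(3))

# For unit vectors, cosine distance is half the squared Euclidean distance.
cos_dist = 1 - cos_sim(a, b)
half_sq_euclid = sum((x - y) ** 2 for x, y in zip(a, b)) / 2
assert abs(cos_dist - half_sq_euclid) < 1e-12

# Triangle-inequality bound via a reference object c:
# cos(AC + CB) <= cos(AB) <= cos(AC - CB), with angles on [0, pi].
ac = math.acos(max(-1.0, min(1.0, cos_sim(a, c))))
cb = math.acos(max(-1.0, min(1.0, cos_sim(c, b))))
ab = math.acos(max(-1.0, min(1.0, cos_sim(a, b))))
lo = math.cos(min(ac + cb, math.pi))
hi = math.cos(abs(ac - cb))
assert lo - 1e-9 <= math.cos(ab) <= hi + 1e-9
print("both identities hold")
```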
This is used for example in metric data indexing, but has also been used to accelerate spherical k-means clustering the same way the Euclidean triangle inequality has been used to accelerate regular k-means.

## Soft cosine measure

A soft cosine (or "soft" similarity) between two vectors considers similarities between pairs of features. The traditional cosine similarity considers the vector space model (VSM) features as independent or completely different, while the soft cosine measure proposes considering the similarity of features in VSM, which helps generalize the concept of cosine (and soft cosine) as well as the idea of (soft) similarity.

For example, in the field of natural language processing (NLP) the similarity among features is quite intuitive. Features such as words, n-grams, or syntactic n-grams can be quite similar, though formally they are considered as different features in the VSM. For example, the words "play" and "game" are different words and thus mapped to different points in VSM; yet they are semantically related. In the case of n-grams or syntactic n-grams, Levenshtein distance can be applied (in fact, Levenshtein distance can be applied to words as well).

For calculating soft cosine, the matrix s is used to indicate similarity between features. It can be calculated through Levenshtein distance, WordNet similarity, or other similarity measures. Then we just multiply by this matrix. Given two N-dimension vectors a and b, the soft cosine similarity is calculated as follows:

soft_cosine(a, b) = (Σ_{i,j} s_ij a_i b_j) / (√(Σ_{i,j} s_ij a_i a_j) √(Σ_{i,j} s_ij b_i b_j)),

where s_ij = similarity(feature_i, feature_j). If there is no similarity between features (s_ii = 1, s_ij = 0 for i ≠ j), the given equation is equivalent to the conventional cosine similarity formula. The time complexity of this measure is quadratic, which makes it applicable to real-world tasks. Note that the complexity can be reduced to subquadratic.
An efficient implementation of such soft cosine similarity is included in the Gensim open source library.
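As a concrete illustration of the formula, here is a small Python sketch with an invented 3-feature example; the feature-similarity values are made up, and this is not the Gensim implementation:

```python
import math

def soft_cosine(a, b, s):
    # s[i][j] holds similarity(feature_i, feature_j); the identity matrix
    # recovers the conventional cosine similarity.
    def form(x, y):
        return sum(s[i][j] * x[i] * y[j]
                   for i in range(len(x)) for j in range(len(y)))
    return form(a, b) / (math.sqrt(form(a, a)) * math.sqrt(form(b, b)))

# Two "documents" over features (play, game, sky), as count vectors.
a = [1, 0, 1]
b = [0, 1, 1]

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
related  = [[1.0, 0.8, 0.0],   # "play" and "game" treated as 80% similar
            [0.8, 1.0, 0.0],
            [0.0, 0.0, 1.0]]

print(round(soft_cosine(a, b, identity), 9))  # 0.5 (ordinary cosine)
print(round(soft_cosine(a, b, related), 9))   # 0.9 (play~game overlap counts)
```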
https://en.wikipedia.org/wiki/Angle_between_vectors
Histopathologic evaluation of surgical specimens is a well established technique for disease identification, and has remained relatively unchanged since its clinical introduction. Although it is essential for clinical investigation, histopathologic identification of tissues remains a time consuming and subjective technique, with unsatisfactory levels of inter- and intra-observer discrepancy. A novel approach for histological recognition is to use Fourier Transform Infrared (FT-IR) micro-spectroscopy. This non-destructive optical technique can provide a rapid measurement of sample biochemistry and identify variations that occur between healthy and diseased tissues. The advantage of this method is that it is objective and provides reproducible diagnosis, independent of fatigue, experience and inter-observer variability. Methods We report a method for analysing excised lymph nodes that is based on spectral pathology. In spectral pathology, an unstained (fixed or snap frozen) tissue section is interrogated by a beam of infrared light that samples pixels of 25 μm × 25 μm in size. This beam is rastered over the sample, and up to 100,000 complete infrared spectra are acquired for a given tissue sample. These spectra are subsequently analysed by a diagnostic computer algorithm that is trained by correlating spectral and histopathological features. Results We illustrate the ability of infrared micro-spectral imaging, coupled with completely unsupervised methods of multivariate statistical analysis, to accurately reproduce the histological architecture of axillary lymph nodes. By correlating spectral and histopathological features, a diagnostic algorithm was trained that allowed both accurate and rapid classification of benign and malignant tissues composed within different lymph nodes. 
This approach was successfully applied to both deparaffinised and frozen tissues and indicates that both intra-operative and more conventional surgical specimens can be diagnosed by this technique. Conclusion This paper provides strong evidence that automated diagnosis by means of infrared micro-spectral imaging is possible. Recent investigations within the author's laboratory upon lymph nodes have also revealed that cancers from different primary tumours provide distinctly different spectral signatures. Thus poorly differentiated and hard-to-determine cases of metastatic invasion, such as micrometastases, may additionally be identified by this technique. Finally, we differentiate benign and malignant tissues composed within axillary lymph nodes by completely automated methods of spectral analysis.
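The train-then-classify pipeline described in this abstract can be caricatured in a few lines. The sketch below is purely illustrative: the synthetic "spectra", class labels, and nearest-centroid rule are my own stand-ins, not the authors' algorithm.

```python
import math
import random

random.seed(1)
N_CHANNELS = 50  # pretend absorbance channels of an IR spectrum

def synth_spectrum(peak):
    # Gaussian absorbance band centred at `peak`, plus noise (invented data).
    return [math.exp(-((i - peak) ** 2) / 20.0) + random.gauss(0, 0.05)
            for i in range(N_CHANNELS)]

# Training spectra with known histopathology labels.
train = [(synth_spectrum(15), "benign") for _ in range(20)] + \
        [(synth_spectrum(30), "malignant") for _ in range(20)]

def centroid(label):
    rows = [s for s, l in train if l == label]
    return [sum(col) / len(rows) for col in zip(*rows)]

centroids = {l: centroid(l) for l in ("benign", "malignant")}

def classify(spectrum):
    # Assign the label whose mean training spectrum is closest.
    def dist(c):
        return sum((x - y) ** 2 for x, y in zip(spectrum, c))
    return min(centroids, key=lambda l: dist(centroids[l]))

print(classify(synth_spectrum(15)))  # benign
print(classify(synth_spectrum(30)))  # malignant
```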
https://bmcclinpathol.biomedcentral.com/articles/10.1186/1472-6890-8-8
What's wrong with putting in the minimum amount of effort? What's wrong with cramming for a test the night before, instead of working hard every single day? What's wrong with only working when your boss is around to see what you're doing? What's wrong with cutting corners? What's wrong with any of it, provided the end result is the same? Are the top grades I got in my exams any less valuable than the top grades achieved by the "good" students who always put in lots of effort the whole time? Are the pay rises and promotions I get in the workplace any less valuable than the career achievements of the hard workers, who put in a great deal of effort that goes unnoticed and unrecognised? The things that I didn't bother to do, that nobody really cared about anyway, so nobody even noticed that I didn't do... does it matter that I didn't do those things? At the end of the day, the end results are the same. When I'm reduced to a piece of paper with my qualifications written on it, nobody has any idea how hard I worked - or didn't work - to achieve my qualifications. When I'm reduced to a job title, a salary and a few performance objectives, nobody has any idea whether I'm a productive, energetic, conscientious and diligent employee, or whether I'm a professional slacker. In fact, isn't it better if I'm getting maximum pay for minimum effort? Haven't I somehow won at the game of life, if I've avoided all that pointless unnecessary work, and yet I get all the rewards which were supposed to be reserved for the hard workers? The ones who put in the effort should be proportionately rewarded, right? Life doesn't seem to work like that. Living with a long term mental illness is utterly exhausting, even at the best of times. I don't think you are a slacker, I think you are doing what you have to, to get through.
https://www.manicgrant.com/2019/minimum-effort
We are looking for a Lead Front-End Developer with a passion for building high quality and user-friendly software for our data solutions business serving the life sciences industry. As a Lead Front-End Developer, you will work in the capacity of a tech lead on essential tools & technologies to design & develop our web-based, fully responsive data acquisition platform. As a member of the team, you can have a huge impact on everything from the user experience we deliver for our customers, to the web architecture of our systems, to the culture we build. This is a unique opportunity to play a key role in developing a differentiable user experience with industry-leading visualization for our new platforms. You’ll be leaned on as the subject-matter expert when it comes to all things front end. Our technical challenges involve designing and developing the next generation user experience using leading-edge UI technologies. It also involves influencing design and co-developing the software APIs/microservices and integrating them to deliver an optimal user experience. Familiarity with DevOps concepts and tools to create new web sites, check their health, and deploy to development and production environments, along with a passion for automation. Integrate third party libraries and UI components to create a seamless user experience. Experience taking a lead role developing exceptional UI-centric software systems that have successfully been delivered to customers. Healthcare experience preferred but not required.
https://commercialcareers.syneoshealth.com/jobs/3347718-research-and-insights-lead-developer
Through the struggles of Aboriginal people for recognition and self-determination it has become common sense to understand Australia as made up of both Aboriginal and non-Aboriginal people and things. But in what ways is the Aboriginal/non-Aboriginal distinction being used and understood? In The Difference Identity Makes, 13 Aboriginal and non-Aboriginal academics examine how this distinction structures the work of cultural production and how Aboriginal producers and their works are recognised and valued. The editors introduce this innovative collection of essays with a path-finding argument that 'Indigenous cultural capital' now challenges all Australians to re-position themselves within a revised scale of values. Each chapter looks at one of 5 fields of Australian cultural production:
- sport
- television
- heritage
- visual arts and
- music.
It reveals that in each the Aboriginal/non-Aboriginal distinction has effects that are specific. This brings new depth and richness to our understanding of what 'Indigeneity' can mean in contemporary Australia. In demonstrating the variety of ways that 'the Indigenous' is made visible and valued the essays provide a powerful alternative to the 'deficit' theme that has continued to haunt the representation of Indigeneity. Appeals to teachers and students in Aboriginal studies, cultural studies, media studies, history and anthropology.
https://www.creativespirits.info/resources/books/the-difference-identity-makes
Ninety-year-old linguist Zecharia Sitchin is going for broke, staking his reputation and entire body of work on a proposed DNA test. Needless to say, Sitchin's ideas - like those of another ancient-astronaut author, Erich von Däniken - have been roundly scorned by the scientific community. But now Sitchin is asking that very community to help him with the mystery of Queen Puabi. Puabi's remains were unearthed from a tomb in present-day Iraq during the 1920s and 1930s, roughly the same time frame as the discovery and study of Tutankhamun's tomb in Egypt. Forensic experts at London's Natural History Museum determined that Puabi was about 40 years old when she died, and probably reigned as queen in her own right during the First Dynasty of Ur. Sitchin contends she was something more than a queen - specifically, that she was a "nin," a Sumerian term which he takes to mean "goddess." He suggests that Puabi was an ancient demigod, genetically related to the visitors from Nibiru. What if these aliens tinkered with our DNA to enhance our intelligence - the biblical tree of knowledge of good and evil - but held back the genetic fruit from the tree of eternal life? Does the story of Adam and Eve actually refer to the aliens' tinkering? The way Sitchin sees it, the ancient myths suggest that "whoever created us deliberately held back from us a certain thing - fruit, genes, DNA, whatever - not to give us health, longevity, and the immortality that they had. So what was it?" I'm a little surprised at his all-or-nothing approach, but he's right that, whether he positions himself that way or not, that's how it will be taken in the increasingly blinkered scientific community. I, for one, don't share a number of Sitchin's conclusions, but that doesn't mean we throw out the baby with the bathwater. There's no denying that many of his once-ridiculed theories have been validated by newer scientific discoveries, such as elliptical planetary orbits.
Some of these are discussed in the article and quoted interview. Sitchin is provocative, and whether he's one hundred percent accurate or not is beside the point. He's done what innovators are supposed to do. He's raised very compelling questions and lent insight into the mysteries of ancient civilizations.
https://www.celestialhealing.com/2010/06/zecharia-sitchin-and-search-for-alien.html
1 Timothy 6:16 states that God alone possesses immortality. I am a Christian who believes, according to scripture, that no one is resurrected until the first resurrection. The Bible describes this as the resurrection when believers are brought back to life to reign with Christ. The second resurrection is described as the resurrection when everyone else is resurrected & faces judgement. Please help me to understand this in light of Catholicism. Thank you so much! How do Catholics interpret 1Timothy 6:16 I’m not sure what your exact question is, but I believe the passage is pointing to the truth that God is immortal by his very nature. God is immortal, God gives us eternal life. We are not immortal. On the last day, it’s judgement day for everyone. Heaven or hell. That’s our second judgement. Our first is when we die. Heaven, purgatory or hell. You can and do go from purgatory to heaven. Hell is a one way trip. I think the passage you read is in revelation? God’s the only one who naturally is eternal. We’re all dependent on his grace, even after resurrection, it’s only through God that we can live forever afterwards in glorified bodies. Two resurrections? Where does that come from? Agreed…what? Are you confusing it with the misunderstanding protestant theology of Christ coming again, twice? This is why I was confused by the question. Perhaps the OP can clarify. 1 Timothy 6 “But as for you, man of God, shun all this; pursue righteousness, godliness, faith, love, endurance, gentleness. Fight the good fight of the faith; take hold of the eternal life, to which you were called and for which you made the good confession in the presence of many witnesses. 
In the presence of God, who gives life to all things, and of Christ Jesus, who in his testimony before Pontius Pilate made the good confession, I charge you to keep the commandment without spot or blame until the manifestation of our Lord Jesus Christ, which he will bring about at the right time—he who is the blessed and only Sovereign, the King of kings and Lord of lords. It is he alone who has immortality and dwells in unapproachable light, whom no one has ever seen or can see; to him be honor and eternal dominion. Amen” Jesus, being the Word made flesh, is God. He, as the Son of God, always possessed immortality, and then gained eternal life (by living in the flesh in obedience to the Spirit) on behalf of those who have faith in Him. Yes, he always possessed immortality…but as the second person of the Holy Trinity, he as you stated already and always possessed immortality…he did not need to gain eternal life, he already had it (that is immortality)…living in the flesh and obedience to the Holy Spirit did not give him what he already had…since he was the same essence as the Holy Spirit, he could not go rogue…he was in total synch with the Holy Spirit, and equal and not subordinate to the spirit…it is our obedience in the Trinity that gives us eternal life…and it was his living in the flesh that regained the immortality and eternal life we lost at the fall. As far as God alone possessing immortality, I haven’t found any Catholic commentary on that particular verse. However, it probably means that God alone possesses immortality naturally and that those others who have it receive it as a gift from God. 
As far as the first and second resurrections mentioned in Revelation 20 go, Catholics tend to be amillenialists, understanding the thousand year reign of Christ and his saints mentioned in Revelation 20 to be symbolic of the time between Jesus’ resurrection almost 2000 years ago and Jesus second coming at the end of time when all the dead will be raised for the General Judgment, when Christ reigns in and through his church. If that is the case, then, I think, “the first resurrection” mentioned in Revelation 20 probably refers to the resurrection of the saints mentioned in Matthew 27:52-53: 52 the tombs also were opened, and many bodies of the saints who had fallen asleep were raised, 53 and coming out of the tombs after his resurrection they went into the holy city and appeared to many. By taking on flesh and living in obedience to the Spirit, He gained (merited) our Reconciliation to the Spirit, not His own. But the promise is to those who believe and remain in Him. It can not refer to them. They did not ascend into heaven. They lived again and went into the city and witnessed. If you read Revelation 20:4-6 it speaks of the first resurrection and how the people who partake in that are blessed & will reign with Christ a thousand years. These people are resurrected and rewarded for not taking the mark of the beast or worshiping his image. That has not happened yet. The rest of the dead are not brought back to life until the thousand years are ended. This is all in Scripture. Thank you. It is in scripture. Please see my reply to Todd Easton. The order esuteection is mentioned in Revelation 20:4-6. The second is mentioned in Revelation 20:11-15, which is the great white throne judgement. My point in mentioning all of this is to say how do Catholics reconcile this to their faith? The scripture states that only God holds immortality now. And we are not risen to life until the resurrection. Why would we be judged twice? Or resurrected twice? 
Thank you all for your comments. I am not asking this to be smug or rude. I truly want to understand the catholic view on all of this. I was actually surprised to hear some of you ask me where I heard of this, when it is in our Holy Bible. Thank you again sincerely! Sorry about the typo. “First resurrection” is obviously what was meant to be written. Lol Two judgments should not be confused with two resurrections…the Catholic teaching is that our souls are judged at our moment of death (should that come before the return of Christ), and based on this “Particular Judgement” our souls are destined for salvation (immediately, or through purgatory), but the “Final Judgement” at the return of Christ will see the one and only resurrection, when we will be given a new body and it will be joined with our souls, if we died before the return. The Catechism of the Catholic Church, paragraph 676, seems to condemn any form of millennialism that has Christ reigning on the earth in person with his resurrected saints before the end of the world. This is a small part of a very good (old) Catholic commentary on the 20th chapter of the Apocalypse. I added the link, below, in case you wanted to read the whole thing: HAYDOCK CATHOLIC BIBLE COMMENTARY APOCALYPSE 20 “Ver. 2. And bound him for a thousand years. I shall give the reader an abridgment of what S. Augustin has left us on this chapter, in his 20th book de Civ. Dei. From the 5th to the 16th chap. (t. vii. p. 578 et seq.) he treats upon these difficulties: What is meant by the first and second resurrection; by the binding and chaining up of the devil; by the thousand years that the saints reign with Christ; by the first and second death; by Gog and Magog, &c. As to the first resurrection, c. vi. he takes notice on the 5th verse, that resurrection in the Gospels, and in S.
Paul, is applied not only to the body but also to the soul; and the second resurrection, which is to come, is that of the bodies: that there is also a death of the soul, which is by sin; and that the second death is that of soul and body by eternal damnation: that both bad and good shall rise again in their bodies. On those words, (v. 6) Blessed is he that hath part in the first resurrection; in these the second death hath no power. Such, saith he, (c. ix.) as have risen from sin, and have remained in that resurrection of the soul, shall never be liable to the second death, which is damnation.” God alone is eternal by His nature. He grants us immortality because He loves us. There is only one resurrection, but there will be two judgments. Each of us, when we die, must face the Particular Judgment. There we learn our fate: Heaven, Hell, or Purgatory followed by Heaven. At the end of the age all will face the General Judgment, when every soul will witness the judgment of every other soul. Every one of us will see the justice and mercy of God with regard to every other one of us. One Resurrection, two Judgments. Are you questioning Jesus being resurrected, or Our Lady? Part of Revelation was written in code about the past, and some was written about the future. Catholics do not believe people on earth will be taken up and spared the antichrist. The end is actually the end. No one knows the end. People will be living as though there will be a tomorrow. Do not be anxious about the end. The end is a single event in which there will be a new heaven and new earth. Everyone will be judged, though this does not change the judgment made on the day they actually died. The end is where our bodies come back to us. An actual end, not stages of ending. It’s all in the Bible. We do believe in the resurrection of the dead. The rapture is a recent phenomenon, assembled from various passages picked from various books of the Bible.
https://forums.catholic.com/t/how-do-catholics-interpret-1timothy-6-16/499282
Are you the type of person who believes that miracles are written in the stars above? You must have come across people you hold close to your heart, and some with whom it doesn’t “make sense”. You’d be surprised to know that this match is a work of the stars shining bright in the night sky! Constellations have been used for ages to make matches, and some also reveal friendships and their compatibility. Read on to get to know more about these stars, and how they shape experiences throughout one’s life! Constellations for Friendship: 5 Stories Cygnus Also known as the Swan, Cygnus is one of the most popular constellations, owing to the fact that it contains the first known black hole, Cygnus X-1. It is easily visible in the night sky and resembles its name. It was catalogued in the 2nd century CE by the Greek astronomer Ptolemy. Due to its magnificent size and its particular location, Cygnus is easy to find in the night sky. Its brightest stars form the asterism known as the Northern Cross, and it is most prominent during mid-August and September. In Greek mythology, Cygnus is a representation of friendship and long-lost love. Cygnus was a companion to Phaethon, the child of Apollo. Phaethon tried to drive Apollo’s chariot (the Sun) across the sky one day. He failed to keep a grip and so was shot out of the sky by Zeus before he could cause any more harm to the Earth. Phaethon fell into a river, and Cygnus plunged into the water again and again, like a swan, to track down his companion. Zeus was so moved by Cygnus’s faithfulness and fellowship that he transformed him into a swan and set his image into the night sky. Although not classified as a Zodiac sign, Cygnus is known in the realm of friendships for loyalty and the will to go out of one’s way to be with the people one loves, until nature finds a way to unite them for eternity. Aries Aries appears in the shape of a ram’s horns, in keeping with its name.
Falling under the Zodiac category of constellations, Aries was known to the Greeks and Babylonians in ancient times; it was officially recognized in 1922 by the International Astronomical Union. Aries is not one of the constellations easily viewed. It is best seen during winter darkness, particularly around 9 pm on a December night. It is found in the Northern Hemisphere, with Taurus to the east and Pisces to the west. In the eyes of the Greeks, Aries was viewed as a ram, symbolizing hope, strength, and sacrifice. The ram was offered as a sacrifice to Zeus, the leader of the gods. The ram’s fleece was viewed as sacred and placed in a temple, and it later became famous in the story of Jason and the Argonauts, a tale of revenge and deception. Arians tend to be straightforward and are therefore often seen as hard to befriend. But once that initial barrier is crossed, an Aries will prove to be a friend who has your back and doesn’t talk behind it. To keep matters in check, it’s good to have an Aries with you for better or worse. Sagittarius The word Sagittarius means ‘Archer’ in Latin, and in keeping with this translation, the constellation itself is symbolized by a bow drawn by a centaur (half man, half horse). The constellation was recorded by people of the Babylonian era, approximately during the 11th century BC. Sagittarius, when viewed from the Southern Hemisphere, is relatively easy to spot. The Archer points its bow squarely towards our beloved Milky Way on a clear night in August or September. Of the 32 stars composing the constellation, the brightest form a distinctive image of a teapot in the sky, making it easy to spot from afar. According to Greek mythology, the centaurs were the children of Ixion and Nephele, a cloud nymph. They were seen as cowboys of outer space, who used their lassos to control the cattle up in the sky. They can be considered a representation of order and structure.
During the shortest day of the year (the December solstice, around 21st December), the sun gleams precisely in front of this constellation. People with this star sign tend to be easygoing and good at making friends. They have extensive and diverse social circles; where one clicks, the friendship kicks off. Their outgoing personality coupled with their selflessness makes Sagittarians great friends and long-term companions. Gemini Placed in the Northern Hemisphere, Gemini is one of the oldest constellations. It was catalogued in the 2nd century by Ptolemy, the Greek astronomer. It is the 30th largest constellation and is also the third Zodiac sign. Its visibility peaks around mid-December each year. Gemini is symbolized by the twins Pollux and Castor. According to Greek mythology, one was a great fighter and the other a talented horseman, and the two were inseparable. However, after the death of Castor, Pollux begged Zeus to bring his twin back to life. Zeus agreed, on the condition that they spend half their life on Earth and the other half in the sky. Since then, the two have always been seen together. It is believed that sailors who see the two stars together during a journey experience glad tidings, while seeing just one brings misfortune to the observer. In terms of personality, Geminis are able to make close connections owing to their flexibility and their will to socialize. Auriga In Latin, the word Auriga means “The Charioteer”. The constellation is named so because the collection of its stars forms the helmet of a charioteer. Catalogued in the 2nd century CE by Ptolemy, Auriga is one of the oldest constellations. It is also the 21st largest constellation in the sky. Auriga is found in the Northern Hemisphere, specifically in the first quadrant. Auriga represents the Lame God who designed a chariot to travel around the world. Symbolically, Auriga is often depicted holding a female goat and her kids, together with the wheels of a chariot.
Although not a Zodiac sign, Auriga represents commitment and determination, coupled with the desire to explore the unknown despite the complications that exist all around. Constellations continue to be one of the most magical wonders of the universe we live in. Admired for their grandeur, they are often consulted for making life decisions and much more. However, gaining sufficient knowledge about them makes one realize that the real enchantment lies in the unknown. Using that, shape a beautiful future, with or without a little help from the stars above. You might also like: - What Are Constellations Used For?
https://usvao.org/constellations-for-friendship/
Full papers may be submitted to the ATR Manuscript Central site by October 1, 2018. Instructions for authors may be found here. Informal inquiries may be made to Cameron Logan ([email protected]) or Janina Gosseye ([email protected]). This issue of ATR (23, no. 2) will be published in August 2019. Abstract: A special issue of Architectural Theory Review edited by Cameron Logan and Janina Gosseye In 1961, Elias Canetti published Crowds and Power. In this book, Canetti suggests that in essence there are two categories of crowds: the open and the closed crowd. The “open” crowd is a natural crowd: it gathers spontaneously, it exists as long as it grows, and it disintegrates as soon as it stops growing. On the other side of the spectrum is the “closed” crowd. This type renounces growth and emphasizes permanence. It has a boundary and creates a space for itself, which it fills. Canetti writes: “The building is waiting for them [the crowd]; it exists for their sake and, so long as it is there, they will be able to meet in the same manner. The space is theirs, even during the ebb, and in its emptiness it reminds them of the flood.” Although crowd historiography experienced a heyday in the 1950s and 1960s, concomitant research in architectural history, theory, urban planning and urban design remained largely absent. This is surprising given that the concept of the crowd is intricately bound to these disciplines. The organisation of space can both shape and obstruct the formation of crowds. The most celebrated examples of planning in the grand manner – think of Haussmann’s Paris – are often interpreted as projections of state or imperial power, designed to suppress or control crowd formation. Yet architecture and urban design are not always utilised in the service of state power, and some public projects have a radical or democratic intent.
Lina Bo Bardi’s MASP in São Paulo, for instance, is suspended from two massive portal frames, deliberately creating a large open area at street level to facilitate crowd formation along the Avenida Paulista, a longstanding site of protest. Certain building types are specifically designed for the assembly of “closed” crowds. Their design not only determines the size and organisation of the crowd, as well as its flows and rhythms, but their spaces often also reveal (intangible) aspects of the organisation of society. Mapped over time, these building types can express evolving concepts of community and citizenship; they can offer insight into changing customs and mores, and they can reveal structures of inclusion and exclusion. The Roman amphitheatres, for instance, were a staple of the ancient world; in the Middle Ages, cathedrals were the site and scene of great assemblies of people; and in recent times shopping centres, sport stadia and concert halls have accommodated modern crowds – from political rallies and riots, to sporting events, flash mobs and raves. Yet beyond a few emblematic twentieth century projects, such as Albert Speer’s Zeppelinfeld in Nuremberg and Mies van der Rohe’s unbuilt Chicago Convention Centre, scholars of the built environment have only rarely touched upon the subject of architecture and the crowd. For this special issue of ATR, we invite submissions that investigate the relationship between architecture, urban design and the formation of crowds in two main ways: 1) through realised projects; and 2) by considering the way in which crowds have been depicted in architecture through various modes and media, including photomontage, drawings, computer generated imagery, etc.
Submissions may address one or more of the following themes: crowds, architecture and urban identity; crowds, architecture and security; representation of crowds, citizenship and social identity in architecture and urban design; social exclusion and inclusion in the architecture of mass gatherings, especially the racialised and gendered visions of the collective; atmosphere and environment in crowded buildings and places; architecture and its relationship to collective effervescence, intersubjectivity and collective memory; architecture, urban design and mechanisms of crowd dispersal.
https://umrausser.hypotheses.org/7251
In physics, an electronvolt (symbol eV, also written electron-volt and electron volt) is the amount of kinetic energy gained by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. When used as a unit of energy, the numerical value of 1 eV in joules (symbol J) is equivalent to the numerical value of the charge of an electron in coulombs (symbol C). Under the 2019 redefinition of the SI base units, this sets 1 eV equal to 1.602176634×10−19 J.
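As a quick numerical illustration (not part of the definition above), the exact elementary-charge value makes eV-to-joule conversion a one-line calculation; the function names here are our own, not from any standard API:

```python
# Elementary charge in coulombs: exact under the 2019 SI redefinition,
# so 1 eV is exactly 1.602176634e-19 J.
E_CHARGE_C = 1.602176634e-19

def ev_to_joules(energy_ev: float) -> float:
    """Convert an energy from electronvolts to joules."""
    return energy_ev * E_CHARGE_C

def joules_to_ev(energy_j: float) -> float:
    """Convert an energy from joules to electronvolts."""
    return energy_j / E_CHARGE_C

print(ev_to_joules(1.0))                  # 1 eV expressed in joules
print(joules_to_ev(ev_to_joules(13.6)))   # round-trips back to ~13.6 eV
```

The forward conversion is exact; the round trip is exact only up to floating-point error, which is why a tolerance is appropriate when comparing.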
https://dbpedia.org/describe/?url=http%3A%2F%2Fdbpedia.org%2Fresource%2FElectronvolt
There are only some 1500 Southern Ground Hornbills left in South Africa; the Joburg Zoo runs a rearing programme to care for the endangered birds, teaching the fledglings social skills so they interact well with the older birds, and eventually breed. AIMING to ensure the survival of the Southern Ground Hornbill, Joburg Zoo staff successfully hand-reared three birds from the Kruger National Park and the Associated Private Nature Reserves (APNR). The Southern Ground Hornbill (Bucorvus leadbeateri), also referred to as a thunder, or rain, bird, is a flagship species for the savannah biome, along with cheetah, white rhino and several vulture species. Savannahs are open grasslands dotted with trees. South Africa classifies the birds as endangered, with numbers outside of formally protected areas still on the decline. Currently an estimated 1500 Ground Hornbills are left in South Africa, of which half are safe within the protected areas of the greater Kruger National Park. This year, Nelson, who hatched on 5 December, Mangake, on 8 December, and Tshukudu, on 13 December, have been nurtured to learn social skills so they can recognise and interact with their own species when they are old enough to breed. The Zoo collaborates with other organisations, including The Mabula Ground Hornbill Project, the Endangered Wildlife Trust, and the Percy Fitzpatrick Institute, through the APNR Ground Hornbill Project. Raising a Southern Ground Hornbill Southern Ground Hornbills lay two eggs but only rear one chick. This means the other chick is abandoned and needs to be rescued after it has hatched. Fieldworkers safely remove the abandoned chicks from the nests after three or four days. Annually, Lara Jordan – birds curator at the Zoo – and her team painstakingly tend to these second chicks and initially feed them every two hours. The first 21 days are crucial to ensure the chicks' survival. 
"Over the years we have tried various rearing techniques in an attempt to produce birds that are wild enough to be released. Techniques that have been found effective in limiting human imprinting on the birds include mimicking humming noises to encourage the fledglings to open their mouths and feed, as well as limiting the birds' view of humans by keeping them secluded," said Jordan. The chicks are socialised with the adult birds from outside the enclosure from as young as two weeks old. This continues up until they fledge at around 90 days old, when they are placed in the enclosure with the adults to learn disciplinary behaviours. "Socialising the younger birds with the group of already reared birds is crucial to get them to breed at a later stage, as well as ensuring we are producing birds that are considered wilder or less imprinted. "Successfully reared birds are sent into captive populations as part of a programme aligned to the African Association of Zoos and Aquaria. However, each year we try to move closer to producing a releasable bird that is not habituated," she said. Southern Ground Hornbills live in breeding groups of two to nine birds, with one alpha male and one breeding female per group; the rest of the group are considered helpers. The birds' declining numbers have been attributed to the loss of habitat to croplands; bush encroachment; overgrazing and plantations; loss of nesting trees; and secondary poisoning and electrocution.
http://jhbcityparks.com/index.php/news-mainmenu-56/1063-southern-ground-hornbill-safe-at-joburg-zoo
Week 6 Articles Topic: Lesson Planning I read the article “Personalized Vocabulary Learning in the Middle School Classroom” by Robert Chesbro. This article discusses the importance of students making connections with vocabulary terms. Vocabulary is a huge component of lesson planning. In the first section, the author describes how students typically learn new vocabulary words by finding definitions and using the words in a sentence. The teacher allows students to use textbooks and dictionaries to define the terms, but that wastes instruction time. Chesbro (2016) states that teachers should give the students the definitions and expand on the terms. When students are defining and writing new vocabulary, they are simply memorizing the terms. By relying on memorization alone, students will not know how to apply the vocabulary word. Students need to make connections with vocabulary terms in order to retain them. Chesbro (2016) mentioned the vocabulary worksheet he created for students to use when learning new vocabulary terms. The worksheet is titled “In a Word, In a Symbol”. In a chart, students are to write the term, its definition, a one-word summary, a symbol, and a sentence or two explaining a weird personal connection they have with the vocabulary term. Chesbro (2016) concludes that his approach to vocabulary terms makes learning more personalized for students, rather than mere memorization. Using this chart, students will be more engaged when learning new vocabulary terms. Reference Chesbro, Robert. (2016). Personalized vocabulary learning in the middle school classroom. Science Scope, April/May, 35-38. Topic: Metric Measurement I read the article “Science 101: Why Do We Need Standard Units?” by Bill Robertson.
The article begins with how the author, as a graduate student, had a tour guide job at the National Bureau of Standards, now renamed the National Institute of Standards and Technology. For people on the tours, he would show a movie about standards. Robertson (2014) notes that the movie explained the importance of having standard units using firefighters. A fire had broken out in Baltimore and burned for 30 hours. Fire hydrants outside of the city of Baltimore did not match the firefighters’ hoses, making it a struggle to put the fire out. If all of the fire hydrants had matched the water hoses, the fire would have been put out quickly. Having standard units allows items such as water hoses and fire hydrants to be perfect matches no matter the location. In science, it is important to have exact measurements of items. Exact measurements are needed so scientists can compare their findings with each other. This concept of exact measurements is what students need to know when performing science experiments. Robertson (2014) mentions an activity that can be done in the classroom about exact measurements using paper clips. Students are given different sized paper clips and asked to measure their desks. The goal is for students to realize that due to the different sized paper clips, their answers will not be the same. It is important to use standard units because the units are the same regardless of location, which allows findings and data to be compared. Reference Robertson, Bill. (2014). Science 101: Why do we need standard units? Science and Children, December, 62-64.
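Robertson's paper-clip activity can be captured in a few lines. This sketch is ours, not from the article, and the desk length and clip sizes are made-up numbers; the point is that counts in nonstandard units disagree while the standard-unit length stays fixed:

```python
# Hypothetical desk length and paper-clip sizes for the activity described
# above; the values are illustrative, not from Robertson's article.
DESK_CM = 60.0
CLIP_SIZES_CM = {"small clip": 2.5, "jumbo clip": 5.0}

def clips_needed(desk_cm: float, clip_cm: float) -> float:
    """How many clips of a given size span the desk."""
    return desk_cm / clip_cm

for name, size in CLIP_SIZES_CM.items():
    print(f"{name}: {clips_needed(DESK_CM, size):.0f} clips "
          f"(but the desk is still {DESK_CM} cm)")
```

Two students measuring the same desk report 24 versus 12 "clips", yet both would report 60 cm: exactly the disagreement the classroom activity is meant to surface.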
https://learningcenter.nsta.org/discuss/default.aspx?tid=yT1Xje/HqYo_E
By way of this book, Norman Schneidewind has officially bridged the gap between the two disparate fields. Filled with many real-world examples drawn from industry and government, Systems and Software Engineering with Applications provides a new perspective for systems and software engineers to consider when developing optimal solutions. This unique approach to looking at the big picture when addressing system and software reliability can benefit students, practitioners, and researchers. Excel spreadsheets noted in the book are available on CD-ROM for an interactive learning experience. Read Systems and Software Engineering with Applications and learn how to: - Quantitatively analyze the performance, reliability, maintainability, and availability of software in relation to the total system - Understand the availability of software in relation to the total system - Use standards as part of the solution - Evaluate and mitigate the risk of deploying software-based systems - Apply models dealing with the optimization of systems through quantitative examples provided to help you understand and interpret model results Some of the areas the book focuses on include: - Systems and software models, methods, tools, and standards - Quantitative methods to ensure reliability - Software reliability and metrics tools - Integrating testing with reliability - Cyber security prediction models - Ergonomics and safety in the workplace - Scheduling and cost control in systems and software
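As a taste of the kind of quantitative reliability modeling the book covers, here is a minimal sketch of one standard software reliability growth model, the Goel-Okumoto NHPP model. This code is not drawn from the book, and the parameter values (a = total expected faults, b = fault detection rate) are illustrative only:

```python
import math

# Goel-Okumoto model: expected cumulative faults found by time t is
#   m(t) = a * (1 - exp(-b * t))
# Parameters a and b are hypothetical values, not data from any project.

def expected_faults(t: float, a: float = 100.0, b: float = 0.05) -> float:
    """Expected number of faults detected by testing time t."""
    return a * (1.0 - math.exp(-b * t))

def reliability(t: float, x: float, a: float = 100.0, b: float = 0.05) -> float:
    """Probability of no failure in the interval (t, t + x] under the model."""
    return math.exp(-(expected_faults(t + x, a, b) - expected_faults(t, a, b)))

for week in (0, 10, 20, 40):
    print(f"after {week} weeks of testing: "
          f"{expected_faults(week):.1f} faults found on average")
```

The model captures the intuition that fault discovery saturates over time, so the reliability over a fixed future interval improves the longer testing has already run.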
https://book.pdfchm.net/systems-and-software-engineering-with-applications/9780738158525/
Developers are increasingly building large software in the form of highly configurable systems, systems with features that can be toggled on and off. The major risk for highly configurable systems is that some bugs, called configuration-dependent faults, only cause failures when certain features are combined, being invisible otherwise. My talk will first discuss the techniques we currently have to combat configuration-dependent faults and show that they all exploit a common idea, which we term feature locality. I will then present some newly discovered forms of feature locality and explain how they are helping us better prevent, find, mitigate, and repair configuration-dependent faults. Refreshments will be served at 4:15 p.m. in the Computer Science Commons (Noyce 3817). Mr. Garvin's talk, "Configuration-dependent faults and feature locality," will follow at 4:30 p.m. in Noyce 3821. Everyone is welcome to attend. At noon on Friday, April 30, in Noyce 3821, Myra Cohen of the Department of Computer Science and Engineering at the University of Nebraska at Lincoln will speak on the role of combinatorics in the design of test suites for software: Software systems today are orders of magnitude larger and more complex than their recent ancestors. Instead of building single systems, we now build families of systems. User interfaces are graphical and programs event-driven. The software/hardware interfaces we once kept distinct have become blurred. As glitches in these large-scale systems continue to make newspaper headlines, developing reliable and affordable software presents an increasing number of challenges. In this talk we examine advances in software testing that focus on the difficulty caused by one simple but ubiquitous concept -- system configurability.
Configurable systems include software such as web browsers and office applications, families of products customized by businesses for different market segments, and systems that dynamically reconfigure themselves on the fly. We show how theory from combinatorial mathematics, combined with heuristic search algorithms, can help us to test these systems more efficiently and effectively. Pizza and soda will be served shortly before noon. Professor Cohen's talk, "Combinatorics, heuristic search, and software testing: Theory meets practice," will begin promptly thereafter. Everyone is welcome to attend! This Friday we will consider one of the famous pieces of writing on software engineering, by Frederick P. Brooks, a primary architect of OS/360. Brooks, Frederick P., Jr. (April 1987). "No Silver Bullet - Essence and Accidents of Software Engineering". IEEE Computer 20 (4): 10-19. Grinnell College's CS Table is a weekly gathering of folks on campus (students, faculty, staff, alums, etc.) to talk about issues relating to computer science. CS Table meets each Friday at noon in JRC 224A, the Day Public Dining Room (PDR) in the Joe Rosenfeld '25 Center (JRC). All are welcome, although computer science students and faculty are particularly encouraged to attend. The Spring 2010 theme of CS Table is Software Design. Contact Professor Rebelsky for further information or for a printed copy of the reading.
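The combinatorial idea behind the talks above -- covering arrays for pairwise testing of feature combinations -- can be sketched in a few lines of Python. This is our own illustrative code, not material from either talk; the feature names are invented, and a simple greedy heuristic stands in for the more sophisticated search algorithms the abstract mentions:

```python
from itertools import combinations, product

# Rather than testing all 2**n on/off combinations of n features, build a
# small set of configurations that still exercises every PAIR of feature
# settings, since configuration-dependent faults are typically triggered
# by a small number of interacting features. Feature names are invented.
FEATURES = ["cache", "ssl", "ipv6", "gzip"]

def uncovered_pairs(configs, n):
    """All ((i, a), (j, b)) setting pairs not yet covered by any config."""
    needed = {((i, a), (j, b))
              for i, j in combinations(range(n), 2)
              for a in (0, 1) for b in (0, 1)}
    for cfg in configs:
        for i, j in combinations(range(n), 2):
            needed.discard(((i, cfg[i]), (j, cfg[j])))
    return needed

def greedy_pairwise(n):
    """Greedily build a pairwise-covering set of binary configurations."""
    configs = []
    while uncovered_pairs(configs, n):
        # choose the candidate that leaves the fewest pairs uncovered
        best = max(product((0, 1), repeat=n),
                   key=lambda c: -len(uncovered_pairs(configs + [c], n)))
        configs.append(best)
    return configs

suite = greedy_pairwise(len(FEATURES))
print(f"{len(suite)} configurations cover every feature pair "
      f"(vs {2 ** len(FEATURES)} exhaustive combinations)")
```

Even this naive greedy construction needs only a handful of configurations for four binary features, while exhaustive testing would need sixteen; the gap widens dramatically as the number of features grows.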
http://drupal.cs.grinnell.edu/drupal6/taxonomy/term/177
Five Constitutional Amendments You Didn’t Know Existed If you woke up feeling doubly patriotic today, we’ve got an explanation. It’s both Constitution Day and Citizenship Day. The dual date comes from a chain of observances that were codified into their current arrangement by Congress in 2004. Citizenship Day celebrates being an American, including the act of immigrants obtaining citizenship, while Constitution Day recognizes the importance of the document that’s the bedrock of our country. In that spirit, we decided to do our duty as citizens by recognizing a few of the overlooked Amendments to the U.S. Constitution. Sure, everybody knows what’s in I and II, but what about XII, for example, and why is that important? Put on your tri-cornered hat and read about your rights, citizens. 1. The Third: No Quartering Given It’s possible that the First and Second Amendments get the most mentions in daily news and conversation. However, the Third is extremely important and definitely a product of its time. It expressly forbids soldiers from being quartered in private residences without the owner’s consent during peacetime, and allows it in wartime only in a manner prescribed by law. This is a direct rejection of a common practice during British rule, when colonists would frequently be forced to share their homes with British troops. Ratified on December 15, 1791. 2. The Ninth: It’s Got You Covered The Ninth Amendment is a pretty clever legal instrument. While the other amendments in the original Bill of Rights are “enumerated” because they are directly written into the document, the Ninth spells out that you as a citizen have other “unenumerated rights,” even if they aren’t spelled out. Various Supreme Court cases have decided that these rights include things like the right to travel, the right to keep medical records private, the right to privacy regarding one’s marriage, the right to vote, and the right to make one’s own health care decisions. Ratified on December 15, 1791.
3. The Twelfth: No Office for Second Place In our first presidential elections, the candidate who finished first became president, and the second-place finisher became vice-president. This amendment throws that out, establishing that presidential and vice-presidential candidates run together and are elected together. The first time this happened in the U.S. was in 1804, when Thomas Jefferson was elected with his running mate, George Clinton (funny, we always pictured him in Parliament). Ratified on June 15, 1804. 4. The Sixteenth: Along with Death, the Other Certain Thing In public debate, people frequently ask where income tax began. The concept is ancient, but it was enshrined into American law with this amendment. It allows for a federal income tax that is not apportioned among the states; the monies levied go to the federal level, rather than the state. It’s pretty straightforward, yet many people don’t realize the amendment is there. Ratified on February 3, 1913. 5. The Twenty-Seventh: Congress Can’t Give Itself an Immediate Raise The most recent amendment prevents any pay raise voted on by Congress from taking effect until after the next election. This prevents, in theory, Congress voting for a raise for itself and then immediately benefitting. A senator or representative up for re-election would have to run, win, and be sworn back in before taking advantage of the raise. It’s a minor technicality, really, but it does allay some fears about corruption in the process. Ratified on May 7, 1992. Citizenship Test Answers These are the answers that U.S. Citizenship and Immigration Services will accept for the ten questions we chose. - What does the Constitution do? - sets up the government - defines the government - protects basic rights of Americans - What do we call the first ten amendments to the Constitution? - the Bill of Rights - How many amendments does the Constitution have? - 27 - What is the economic system in the United States?
- capitalist economy - market economy - Who makes federal laws? - Congress - Senate and House (of Representatives) - (U.S. or national) legislature - The House of Representatives has how many voting members? - 435 - We elect a U.S. Senator for how many years? - 6 - Under our Constitution, some powers belong to the states. What is one power of the states? - provide schooling and education - provide protection (police) - provide safety (fire departments) - give a driver’s license - approve zoning and land use - What happened at the Constitutional Convention? - The Constitution was written - The Founding Fathers wrote the Constitution - Name one U.S. territory.
https://www.saturdayeveningpost.com/sep-keyword/constitution/
A new study shows that when women are aware of both negative and positive stereotypes related to performance, they identify more closely with the positive stereotype, according to Science Daily. The research by cognitive scientists at Indiana University pertains specifically to women and math ability, but has broad implications for other groups affected by “stereotype threat.” While studies — including this one — have shown that women perform worse on mathematical tasks if made aware of the stereotype that women are weaker at math than men, this is the first study to examine the influence of concurrent and competing stereotypes. The study also demonstrates how negative stereotypes encroach on working memory, leaving less brainpower for the mathematical task at hand. Positive stereotypes had no such effect, however, and even when coupled with the negative stereotype, erased its drain on working memory. “This research shows that because people are members of multiple social groups that often have contradictory performance stereotypes (for example, Asian females in the domain of math), making them aware of both a positive group stereotype and a negative stereotype eliminates the threat and underperformance that is usually seen when they dwell only on their membership in a negatively stereotyped group,” said Professor Robert Rydell, a lead author of the study. Read more: http://www.sciencedaily.com/releases/2009/05/090504094300.htm
https://phennd.org/update/new-article-stereotyping-and-educational-performance/
Trongate’s ‘Street Level Photoworks’ presents a stunning selection of black and white photographs. The exhibition is a celebration of the Jewish community that lives and thrives in various locations throughout Scotland. It is a fascinating insight into this particular heritage, as the Scottish Jewish community is substantially smaller than most cultural communities situated within Glasgow and other Scottish locations. It is a culture that has been present in Scotland since the 1700s and was the largest non-Christian community. Even though the community has prospered, with many admirable achievements and careers in science, medicine, the creative arts, agriculture, manufacturing and even whisky distilling, it is a culture that has remained on the quieter scale in comparison to other cultural heritages present in Scotland. The project focuses on a visual representation that explores the most prominent aspects of the Scottish Jewish community’s lifestyle and culture. What is so visually striking about the exhibition is that it explores how the Jewish Scots have managed to hold onto the defining characteristics of their heritage while still entwining it with the Scottish culture that they have embraced so comfortably. There are shots of children at a synagogue in Glasgow being taught the traditional practices of the religion, which then move over to images of them attending football matches in Glasgow. It also features an image of “Kosher” haggis being created in a kitchen at “Mark’s Deli” in Glasgow, which then gets exported worldwide to Scottish-Jewish populations to be consumed for Burns Night celebrations. These are strong examples of how the community absorbs its two cultures in equal measure. Michael Mail is the creator and organiser of this exhibition.
He wanted to give viewers the chance to be involved in this exploration of his heritage, which has blended in quietly over the past centuries. The photographer Judah Passow proved to be the ideal candidate for this visual style of storytelling. “I was looking for a way to recognise and celebrate the story of the remarkable, yet little known Scottish Jewish Community – my community. When I came across Judah Passow’s photography, I immediately realised that he had the skill, sensitivity and artistry to take on this subject and create a truly memorable piece of work.” – Michael Mail. The images that have been captured lend an almost cinematic quality to the stories being told. One particularly striking shot shows a groom at a Jewish wedding taking part in the “Hora”, the famous chair dance in which the bride and groom are seated in chairs and then bravely hoisted into the air by their guests. The shot is timed perfectly, capturing the groom in mid-flight; you can almost feel the impact of him soaring above the crowd. There are other fantastic landscape shots of Scottish scenery, which provide an effective contrast to the shots taking place within the cities. “This project has been a real voyage of discovery across the spiritual and cultural landscape of Scotland. One of its more remarkable features is the warm, proud Jewish community that has become so tightly woven into the national fabric. I hope people looking at those photographs will see what I saw – a people deeply devoted to their heritage both as Jews and Scots.” – Judah Passow. The project was supported by Creative Scotland and was begun by Judah on Hogmanay, 31st December 2012, in Stonehaven. The project was rounded off with the last set of photographs taken on a weekend pheasant shoot in Hurlford, Ayrshire, in December 2013.
With the vast variety of shots featuring so much movement and vitality, it is clear that a great deal of time and in-depth research has been put into this project. The exhibition is free to enter and also provides merchandise, so visitors have the chance to look over the images after they have left the exhibition. The exhibition commenced on the 12th of February 2015 and will continue to be shown until the 12th of April 2015 at ‘Street Level Photoworks’, 103 Trongate.
https://sixfootgallery.co.uk/2015/04/04/judah-passow-scots-jews-identity-belonging-and-the-future/
Cave-dwelling invertebrates in the Sataplia-Tskaltubo karst cave massif of the Imereti region, western Georgia, have been poorly investigated. Only 34% of the 49 caves have been studied biospeleologically (i.e., subjected to the formal biological study of cave-dwelling organisms). Out of 80 recorded invertebrate species, 15 are endemic to the Sataplia-Tskaltubo karst massif. However, only one species, the ground beetle Inotrechus kurnakovi, has been given a conservation status (CR) in the Georgian Red List. Knowledge about caves and cave-dwelling animals in the Imereti region is very limited among local communities. Consequently, there is increased anthropogenic pressure on at least nine caves within this region, including pollution, quarrying and vandalism. This project aims to investigate 24 karst caves biospeleologically to update the conservation status of the 15 endemic species on the global IUCN Red List, and to educate local communities about the biodiversity and threats facing cave biota. We plan to collect detailed biodiversity data on the invertebrate species inhabiting each target cave. Through the biospeleological investigation of these caves, the project outcomes will include: increased awareness of cave invertebrates and more positive attitudes towards them among local people; an improved status for I. kurnakovi; and updated conservation statuses for some rare cave-dwelling species, according to the global IUCN Red List categories and criteria.
https://www.conservationleadershipprogramme.org/project/conservation-actions-and-invertebrates-investigation-in-sataplia-tskaltubo-karst-caves-georgia/
This webpage provides answers for the Genesis Bible study daily questions on chapter ten. We pray that God will provide you with revelation of His will for your life as you study His Holy Word. Click here to download a worksheet with answers (PDF file) to compare your answers for the lineage of Noah. 1. Once you see the worksheet, print it out and study Genesis Chapter 10 so you can fill in the names for each box with a border. This gives you a picture of the family tree of Noah given to us in Genesis Chapter 10. Hint: all names belonging in the boxes with a border are fathers. 2. Now that you have filled in all the boxes with a border on them, you can see the family tree of Noah. Now highlight or trace the line from Noah to Abram on the chart. This is the earthly lineage of the Savior Jesus Christ. Click here to download a worksheet with the Christ line highlighted in gold (PDF file) to compare your answer. 3. Verses 5, 20, & 31 give us a four-fold division of each family. What are the four categories? NIV: territory, clans, nation, language. KJV: land, family, nation, tongue. 4. How are the descendants of Noah depicted in Ezekiel 38:1-6? Which line is mentioned? They are attacking Israel. The line of Japheth is mentioned. 5. In Ezekiel 38:13 another line is mentioned. Which is it and what are they doing? The line of Ham – they are weakly trying to confront their attackers. 6. Can you determine which of the three families of Noah you probably trace your lineage to? Your answer goes here, because only you can answer this question. Why do we take pride in our ancestry? Many times we hear people say such things as: “I am ¾ Italian, ¼ German, etc.” Why does this mean so much to some folks? Whatever the reason we do these things, they are really silly when you think about it. We all have the same ancestors really, and we should think of ourselves as God thinks of us. God is no respecter of persons (Acts 10:34) and therefore, we should not be either.
Learn More About Pride in this Bible Study 7. How do the four divisions mentioned in verses 5, 20, & 31 appear in Revelation 5:9 and what do we learn about Jesus as He relates to these four categories? The same categories are there in reference to Jesus Christ. His blood purchased men for God from every race and place in the world. We see the beginning of God’s plan to populate the earth in Genesis and the reason why in Revelation. 8. What are these same peoples doing in Revelation 7:9 & 10? They are standing before the throne, in front of the Lamb, Jesus Christ, giving glory to Him.
https://www.free-online-bible-study.com/genesis-10-answers.html
History of Soccer in the USA

Many historians and game followers have argued over the years about the origin of the game of soccer in the United States. It was believed for a considerably long period of time that the game of soccer entered the United States sometime in the 1860s through Ellis Island. However, recent discoveries have hinted that the Scottish, Irish, English and German immigrants entering the United States did so through the port of New Orleans. Considering that these immigrants brought the game with them, it can be safely assumed that the game of soccer entered the United States from the port of New Orleans, and it was in this area that the first of the many games of soccer were held. One of the first association football clubs in the United States was called the Oneida Club. This club, short-lived as it was, is known to have originated the concept of club football. For a long time it was unclear which rules were being followed by the club; however, according to the Encyclopaedia Britannica, the club and its players followed the ‘Boston Game’, in which the players were allowed to travel with the ball in their hands and also to kick it around the field. Officially, the first recorded instance of a football match being played on US soil as per FA rules was the match played between Rutgers and Princeton University on the 6th of November 1869. The match was played with rules allowing only the kicking of the ball and not carrying it in the hand. Each team comprised a total of 25 members. Rutgers won the match with a score of 6 to 4. In the year 1884, the American Football Association was formed. Its aim was mainly to formulate and standardize the rules for American football or soccer.
Its area of influence was initially limited to the regions of New York and New Jersey, but by 1886 the American Football Association had spread its wings to Massachusetts and Pennsylvania as well. The American Football Association was instrumental in organizing the first non-league cup in the history of American soccer. The cup was called the American Cup and was started within a year of the founding of the American Football Association. The cup was a huge hit and ran very well until 1899, when it was suspended due to an internal conflict in the American Football Association, and it remained suspended until it was restarted in 1906. Most of the soccer leagues in the United States used the name football. For example, the following leagues were formed in the earlier days of the history of soccer in the United States:

1. American Football Association (1884)
2. American Amateur Football Association (1893)
3. American League of Professional Football (1894)
4. National Association Foot Ball League (1895)
5. Southern New England Football League (1914)

The term soccer was adopted in order to avoid confusion between association football and American football. The word soccer originated as British slang for the term association. Its widespread use was observed in the late 1910s and early 1920s. In 1911, the American Football Association gained a competitor that called itself the American Amateur Football Association (AAFA). This association spread quickly and formed its own cup in 1912, calling it the American Amateur Football Association Cup. FIFA was the most widely recognized body that dealt with the matters of football. Internal conflicts within the AFA, coupled with the rivalry and competition with the AAFA, led both parties to register with FIFA in 1913 in order to settle the matter of a nationwide body that would regulate soccer in the United States.
The AFA tried to gain advantage by claiming its older status and the success of the American Cup, and it would have succeeded but for the fact that many members of the AFA joined hands with the AAFA, thus giving the American Amateur Football Association the upper hand. By the onset of the Great Depression in 1929, many football leagues and clubs had come into existence in the United States. The United States Soccer Federation and the United States Football Association were some of the many. Of these, the one that was widely popular was the American Soccer League, which is making a comeback this year (2014). It was responsible for making football a craze so big that it was ranked as the second most popular sport in the United States after baseball. However, even the American Soccer League could not survive the Depression, and this led to the end of organized football in the United States for a very long period of time. The early 1960s saw the re-emergence of the sport, with college soccer picking up speed in many regions. In 1967, the two main professional soccer leagues came into being, called the National Professional Soccer League and the United Soccer Association. These ended up merging to come out stronger as one organization called the North American Soccer League in 1968. This started the trend of American soccer, and it caught on like wildfire thereafter. In the 1970s and 1980s, there was an increased turnout for the game in colleges nationwide. Thereafter came the era of increased funding for women’s soccer with the passing of Title IX, which mandates equal funding for women’s sports as for men’s sporting activities. The 1990s saw an increase in the following of the game, largely due to the fact that the 1994 FIFA World Cup was held for the first time in the United States. To date, more than 24 million Americans are known to play soccer.
Many top-notch clubs from around Europe have invested both time and money in pre-season games in the United States. These games are large crowd magnets in terms of fan following, contributing to a great extent to the sport as well as to the general economy, which benefits in many ways.
http://azsoccer.net/history-of-soccer-in-the-usa/
Best resources for computer and information science research.

Core Resources
- Faulkner Advisory for IT Studies: Comprehensive, web-based library covering critical issues, emerging trends, products and services driving the IT industry. Also includes a wide range of tutorials on best practices and standards, programming techniques, and more.
- OneSearch: Combined search engine; searches most of the Edgewood databases & catalogs (does NOT include Faulkner Advisory for IT Studies). From the Advanced Search page you can limit results to "Computer Science" or "Information Technology."
- Computers & Applied Sciences Complete: Covers the research and development spectrum of the computing and applied sciences disciplines. Provided by Badgerlink.net.

Open Access Articles
- CoRR - Computing Research Repository: Search, browse and download papers from the online repository of arXiv.org (part of Cornell University).
- CiteSeerX: A scientific literature digital library and search engine that focuses primarily on the literature in computer and information science.

Additional Resources
- Business Source Complete: Covers business aspects of computer hardware and software.
- Science Reference Center: Research database. Coverage includes computer and technology sciences.
- Google Scholar: See help with Edgewood full-text to find out if we have access.
- Kanopy: Includes computer science films and documentaries. From the main page, click Subjects at the top, then Sciences, and then Computer Science & Technology.

Useful Websites
- Association for Computing Machinery: International scientific and educational organization dedicated to advancing the arts, sciences, and applications of information technology.
- Interaction-Design.org: Free website featuring educational materials in the areas of human-centered aspects of technology: interaction design, information architecture, usability, user experience, human-computer interaction, social media and much more.
- World Wide Web Consortium (W3C): An international community where members work together to develop Web standards. Includes developer resources, discussion forums, and other useful resources.

Subject Librarian: Jonathan Bloy. Last Updated: Jul 22, 2021.
https://library.edgewood.edu/computer-science
(SACRAMENTO, Calif.) — UC Davis researchers have taken a significant step forward in the search for ways to reduce heart attack and stroke risk. Published in the July 5 issue of Circulation, their study shows that a protein known as nonmuscle myosin light chain kinase — or nmMLCK — causes cells that normally seal the inner surface of blood vessels to contract, creating gaps that allow fats and cell debris to leak through tissue barriers and form plaques inside arterial walls. Eventually, the plaques harden and narrow blood vessels, leading to atherosclerosis and greatly increasing the likelihood of coronary and neurovascular events. “It is well known that the layer of cells forming the interface between circulating blood and vascular tissues — the endothelium — is transformed during atherosclerosis,” said Sarah Yuan, a UC Davis physician, professor of surgery and senior author of the study. “But the specific processes that make this change happen aren’t well understood. Our findings clarify that nmMLCK compromises the natural barrier function of endothelial cells, leaving arteries susceptible to injury.” Yuan and UC Davis assistant project scientist Chongxiu Sun previously discovered that nmMLCK increased the permeability of blood vessel walls in response to inflammation. That initial finding led them to suspect that this protein could also play a role in plaque formation. To find out if this was the case, Yuan and Sun “knocked out” the gene that produces nmMLCK in mice, and then fed them diets high in fat and cholesterol. They fed the same diet to mice with no genetic alterations. After 12 weeks, the knock-out mice developed aortic lesions less than half the size of lesions in mice with an unaltered nmMLCK gene. The team also measured fat-carrying lipid and monocyte levels in blood and aortas and found that these key plaque-forming agents penetrate the endothelial barrier more easily in mice with intact nmMLCK genes.
The knock-out mice had far less lipid content and fewer macrophage deposits (created when monocytes migrate through the endothelium) in their arterial walls. “Eliminating nmMLCK significantly reduced the severity of arterial damage,” said Sun, lead author of the current study. “The protein turns cells that are normally very protective into heart disease facilitators.” Sun and Yuan are planning to further characterize the contributions of nmMLCK to atherosclerosis by determining the unique structure of the protein. This research will give scientists the tools they need to develop drugs that block nmMLCK. They are particularly interested in a molecular pathway called Src (pronounced “sarc”) that is activated by nmMLCK. Src is known to be involved in other diseases, including osteoporosis and certain cancers. “We need to fully profile this protein and its interactions with other molecules to identify the best possible way to block its function,” said Yuan, whose research focuses on the cellular and molecular regulation of cardiovascular functions. “Understanding exactly how it works is critical to providing new treatment options for patients.” This study was supported by grants from the National Institutes of Health awarded to Sarah Yuan and Mack Wu, UC Davis associate professor of surgery and study co-author. About the UC Davis Division of Cardiovascular Medicine: State-of-the-art prevention and treatment programs and caring, experienced health-care teams distinguish UC Davis as a unique resource for cardiovascular care in this region. Basic scientists, clinical researchers and physicians throughout UC Davis also are working together to better understand the foundations of, improve treatments for and eventually cure cardiovascular disease — the number one cause of disease-related death in the United States. For more information, visit www.ucdmc.ucdavis.edu/heart.
https://www.healthcanal.com/medical-breakthroughs/19086-uc-davis-researchers-identify-new-drug-target-for-atherosclerosis.html
--- abstract: 'We consider the scattering problem for the nonlinear Schrödinger equation with a potential in two space dimensions. Appropriate resolvent estimates are proved and applied to estimate the operator $A(s)$ appearing in commutator relations. The equivalence between the operators $\left(-\Delta_{V}\right)^{\frac{s}{2}}$ and $\left(-\Delta \right)^{\frac{s}{2}}$ in the $L^{2}$ norm sense for $0\leq s <1$ is investigated by using free resolvent estimates and Gaussian estimates for the heat kernel of the Schrödinger operator $-\Delta_{V}$. Our main result guarantees the global existence of solutions and time decay of the solutions assuming the initial data have small weighted Sobolev norms. Moreover, the global solutions obtained in the main result scatter.' address: - 'Department of Mathematics, University of Pisa, Largo Bruno Pontecorvo 5, Pisa, 56127, Italy' - 'Department of Mathematics, College of Science, Yanbian University, No. 977 Gongyuan Road, Yanji City, Jilin Province, 133002, China' author: - Vladimir Georgiev - Chunhua Li title: On the scattering problem for the nonlinear Schrödinger equation with a potential in 2D --- Introduction and main results ============================= We consider the following nonlinear Schrödinger equation $$\begin{aligned} \left\{ \begin{array}{l} i \partial_t u + \frac{1}{2}(\Delta-V) u = \lambda |u|^{p-1}u, \\ u(1, x) = u_{0} (x) \end{array} \right. \label{NLS}\end{aligned}$$ in $(t,x) \in \mathbb{R}\times \mathbb{R}^{2}$, where $\Delta$ is the 2-dimensional Laplacian, $u=u(t,x)$ is a complex-valued unknown function, $t\geq 1$, $p>2, \lambda \in \mathbb{C}\backslash \{0\} $, $\text{Im}\lambda \leq 0$, and $V(x)$ is a real-valued measurable function defined on $\mathbb{R}^{2}$. In this paper we assume that the time-independent potential $V(x)$ satisfies the following three hypotheses.
[(H1) ]{} The real-valued potential $V(x)$ is of the $C^1$ class on $\mathbb{R}^{2}$ and satisfies the decay estimate $\left|V(x)\right|+\left|x\cdot \nabla V(x)\right|\leq \frac{c}{<x>^\beta}$, where $c>0$ and $\beta>3$; [(H2) ]{} The potential $V(x)$ is non-negative; [(H3) ]{} Zero is a regular point. We notice that the operator $\Delta_{V}= \Delta -V(x)$ is a self-adjoint operator by the assumption [(H1)]{}. The assumption [(H2)]{} and the spectral theorem guarantee that the spectrum of $-\Delta_{V}$ is contained in $[0,\infty).$ The short range decay assumption [(H1)]{} implies that $-\Delta_{V}$ has no positive eigenvalues due to Agmon’s result in [@Agmon2]. Combining this fact, the assumption [(H3)]{} and Theorem 6.1 in [@Agmon], we see that the spectrum of $-\Delta_{V}$ in $[0,\infty)$ is absolutely continuous (as was also deduced in [@Sch]). The assumption [(H3)]{} is not always necessary. We can see in Appendix II that stronger decay of the potential $V(x)$ with $\beta > 10$ in [(H1)]{} can guarantee that zero is a regular point provided $V \geq 0$ by Theorem 6.2 in [@JN01]. In another situation, [(H3) ]{} and appropriate resolvent estimates are obtained by Theorem 8.2 and Remark 9.2 in [@Mo18] under the additional assumption $\partial_r (r V(x)) \leq 0.$ The importance of self-adjointness of quantum Hamiltonians has been recognized since the work of von Neumann around 1930 (see [@simon17]). After the Gross-Pitaevskii equation was presented in the 1960s, many crucial problems in quantum mechanics can be reduced to the study of (\[NLS\]). However, there has been little research on the asymptotic behavior of solutions to the nonlinear Schrödinger equation (\[NLS\]) (see [@CGV2014], [@GV2012], [@Li2017], [@Mi07]). In the case of $V(x)\equiv0$, it is well known that $p=1+\frac{2}{n}$ can be regarded as a borderline between the short range and long range interactions for the equation (\[NLS\]) (see [@MS91], [@Str81], [@str74] and [@Ba1984]).
The existence of modified wave operators of the cubic nonlinear Schrödinger equation (\[NLS\]) with $V(x)\equiv0$ and $\lambda \in \mathbb{R}\setminus\{0\}$ in $\mathbb{R}$ was first studied by Ozawa in [@Ozawa]. In the case of the space dimension $n=1, 2, 3$, Hayashi and Naumkin showed the completeness of scattering operators and the decay estimates of the critical nonlinear Schrödinger equation (\[NLS\]) with $V(x)\equiv0$ and $\lambda \in \mathbb{R}\setminus\{0\}$ in [@HN98]. The initial value problem for the critical nonlinear Schrödinger equation (\[NLS\]) with $V(x)\equiv0$ and $\lambda \in \mathbb{R}\setminus\{0\}$ in space dimensions $n \geq 4$ was considered by Hayashi, Li and Naumkin in [@HLN18]. They obtained two-sided sharp time decay estimates of solutions in the uniform norm. There has been some research on decay estimates of solutions to the subcritical nonlinear Schrödinger equation (\[NLS\]) with $V(x)\equiv0$ and $\text{Im}{\lambda}<0$ for arbitrarily large initial data (see e.g. [@JJL] and [@KS]). Segawa, Sunagawa and Yasuda considered a sharp lower bound for the lifespan of small solutions to the subcritical Schrödinger equation (\[NLS\]) with $V(x)\equiv0$ and $\text{Im}{\lambda}>0$ in the space dimensions $n=1,2,3$ in [@SaSuYa]. For systems of nonlinear Schrödinger equations, the existence of modified wave operators for a quadratic system in $\mathbb{R}^2$ was studied in [@HLN16], and the initial value problem for a cubic derivative system in $\mathbb{R}$ was investigated in [@LS16]. When $V(x)\not\equiv 0$, the existence of wave operators for three dimensional Schrödinger operators with singular potentials was proved by Georgiev and Ivanov in [@GI2005]. Georgiev and Velichkov studied decay estimates for the nonlinear Schrödinger equation (\[NLS\]) with $p>\frac{5}{3}$ in $\mathbb{R}^{3}$ in [@GV2012].
In [@CGV2014], Cuccagna, Georgiev and Visciglia considered decay and scattering of small solutions to the nonlinear Schrödinger equation (\[NLS\]) with $p>3$ in $\mathbb{R}$. Li and Zhao proved decay and scattering of solutions for the nonlinear Schrödinger equation (\[NLS\]) with $1+\frac{2}{n}<p\leq 1+\frac{4}{n-2}$ in [@Li2017], when the space dimension $n\geq 3$. $L^p$-boundedness of wave operators for two dimensional Schrödinger operators was first studied by Yajima in [@Ya99]. In [@Mi07] Mizumachi studied the asymptotic stability of a small solitary wave to the nonlinear Schrödinger equation (\[NLS\]) with $p\geq 3$ in $\mathbb{R}^{2}$. As far as we know, the time decay and scattering problem for the supercritical nonlinear Schrödinger equation (\[NLS\]) with $p>2$ in $\mathbb{R}^{2}$ has not been studied. In this paper, our aim is to study the time decay and scattering problem for (\[NLS\]) with $V(x)$ under the assumptions $\rm {(H1) }-\rm {(H3) }$ for $p>2$. We now introduce some notation. $L^{p}(\mathbb{R}^{n})$ denotes the usual Lebesgue space on $\mathbb{R}^{n}$ for $1\leq p \leq \infty$. For $m,s\in \Bbb{R}$, the weighted Sobolev space $ H^{m,s}\left( \Bbb{R}^{n}\right) $ is defined by $$H^{m,s}\left( \Bbb{R}^{n}\right) =\left\{ f\in \mathcal{S}^{\prime} \left( \Bbb{R}^{n}\right) ;\left\| f\right\| _{H^{m,s}\left( \Bbb{R}^{n} \right) }=\left\|(1+|x|^{2})^{\frac{s}{2}} (I-\Delta)^{\frac{m}{2}} f \right\| _{L^{2} \left( \Bbb{R}^{n}\right) }<\infty \right\} . \notag$$ We write $H^{m,0}\left( \mathbb{R}^{n}\right) =H^{m}\left( \mathbb{R}^{n}\right)$ for simplicity.
For $s \geq 0,$ the homogeneous Sobolev spaces are denoted by $$\dot{H}^{s,0}\left( \Bbb{R}^{n}\right) =\left\{ f\in \mathcal{S}^{\prime} \left( \Bbb{R}^{n}\right) ;\left\| f\right\| _{\dot{H}^{s,0}\left( \Bbb{R}^{n} \right) }=\left\|(-\Delta)^{\frac{s}{2}} f \right\| _{L^{2} \left( \Bbb{R}^{n}\right) }<\infty \right\} \notag$$ and $$\dot{H}^{0,s}\left( \Bbb{R}^{n}\right) =\left\{ f\in \mathcal{S}^{\prime} \left( \Bbb{R}^{n}\right) ;\left\| f\right\| _{\dot{H}^{0,s}\left( \Bbb{R}^{n} \right) }=\left\||x|^{s} f \right\| _{L^{2} \left( \Bbb{R}^{n}\right) }<\infty \right\}. \notag$$ For $1 \leq p \leq \infty$ and $s>0$, we denote the space $L^{p,s}$ with the norm $$\begin{aligned} \|f\|_{L^{p,s}}=\|<x>^{s}f\|_{L^{p}\left(\mathbb{R}^{2}\right)}.\notag\end{aligned}$$ We define the dilation operator by $$\left(D_{t}\phi \right)(x)=\frac{1}{(it)^{\frac{n}{2}}}\phi \left(\frac{x}{t}\right) \notag$$ for $t \neq 0$ and define $M(t)=e^{-\frac{i}{2t} |x|^{2}}$ for $t \neq 0$. Evolution operator $U(t)$ is written as $$U(t)=M(-t)D_{t}\mathcal{F}M(-t), \notag$$ where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and its inverse respectively. The standard generator of Galilei transformations is given as $$J(t)=U(t)xU(-t)=x+it\nabla, \notag$$ which is also represented as $$J(t)=M(-t)it \nabla M(t) \notag$$ for $t \neq 0$. Fractional power of $J(t)$ is defined as $$|J|^{a}(t)=U(t)|x|^{a}U(-t), a>0, \notag$$ which is also represented as (see [@HO1988]) $$|J|^{a}(t)=M(-t)(-t^{2}\Delta)^{\frac{a}{2}}M(t) \notag$$ for $t \neq 0$. Moreover we have commutation relations with $|J|^{a}$ and $L=i\partial_{t}+\frac{1}{2}\Delta$ such that $[L,|J|^{a}]=0$. In what follows, we denote several positive constants by the same letter $C$, which may vary from one line to another. If there exists some constant $C>0$ such that $A \leq CF$, we denote this fact by $``A \lesssim F"$. Similarly, $``A \sim F"$ means $``A \lesssim F"$ and $``F \lesssim A"$. 
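The two factored representations of $J(t)$ introduced above can be checked directly; the following short derivation is our consistency check, not part of the paper's argument:

```latex
\begin{aligned}
M(-t)\, it\nabla\, \bigl(M(t) f\bigr)
&= e^{\frac{i}{2t}|x|^{2}}\, it\left( -\frac{ix}{t}\, e^{-\frac{i}{2t}|x|^{2}} f
   + e^{-\frac{i}{2t}|x|^{2}}\, \nabla f \right) \\
&= x f + it\nabla f = J(t) f ,
\end{aligned}
```

using $\nabla M(t) = -\frac{ix}{t}\,M(t)$ and $it\cdot\bigl(-\frac{ix}{t}\bigr) = x$, which recovers $J(t)=x+it\nabla=M(-t)\,it\nabla\, M(t)$ for $t \neq 0$.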
Let $A$ be a linear operator from Banach space $X$ to Banach space $Y$. We denote the operator norm of $A$ by $\|A\|_{X\rightarrow Y}$. Our main theorem is stated as follows: \[main theorem\] Assume that $V(x)$ satisfies $\rm {(H1) }-\rm {(H3) }$. Let $p>2$. Then there exist constants $\epsilon_{0}>0$ and $C_{0}>0$ such that for any $\epsilon \in (0,\epsilon_{0})$ and $\|u_{0}\|_{H^{\alpha}(\mathbb{R}^2)\cap \dot{H}^{0,\alpha}(\mathbb{R}^2)} \leq \epsilon,$ where $1<\alpha<2$, the solution $u$ to (\[NLS\]) satisfies the time decay estimates $$\begin{aligned} \|u\|_{L^{\infty}\left(\mathbb{R}^{2}\right)}\leq C_{0}t^{-1} \epsilon \label{timedecay}\end{aligned}$$ for $t\geq1$. Moreover there exists $u_{+}\in L^{2}(\mathbb{R}^{2})$ such that $$\begin{aligned} \lim_{t\rightarrow \infty}\|u(t)-e^{i\frac{t}{2}\Delta_V}u_{+}\|_{L^{2}\left(\mathbb{R}^{2}\right)}=0. \label{scattering}\end{aligned}$$ To prove Theorem \[main theorem\], we introduce the operators $|J_{V}|^{s}$ and $A(s)$ derived from some commutation relations. The properties of operators $|J_{V}|^{s}$ and $A(s)$ are shown in Section \[Operators\]. We present Strichartz estimates by Proposition \[proposition 2.2\] and Proposition \[theorem 2.2\] in Section \[Strichartz Estimates\]. We have $$\begin{aligned} \label{c2} \|A(s)u\|_{L^{q}(\mathbb{R}^{2})} \lesssim \|u\|_{L^{q^{\prime}}(\mathbb{R}^{2})}\end{aligned}$$ for $1\leq q\leq 2$ and $ 1<s<2$ by using resolvent estimates (Lemma \[Lemma Resolvent Estimate 2 \]) in Section \[The estimates of $A(s)$\], where $\frac{1}{q}+\frac{1}{q^{\prime}}=1$. 
Then we show $$\begin{aligned} \label{c1} \left\| \left(-\Delta_{V}\right)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})} \sim \left\| \left(-\Delta\right)^{\frac{s}{2}} f\right\|_{L^{2}(\mathbb{R}^{2})}\end{aligned}$$ for all $0 \leq s< 1$ (see Lemma \[equi lemma\]) and $$\begin{aligned} \label{c3} \left\|\left(-\Delta_{V}\right)^{\frac{s}{2}}f-\left(-\Delta \right)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})}\lesssim \|(-\Delta)^{\frac{s}{2}}f\|_{L^{2}(\mathbb{R}^{2})}^{\frac{\sigma }{s}}\|f\|_{L^{2}(\mathbb{R}^{2})}^{1-\frac{\sigma }{s}},\end{aligned}$$ for all $1 \leq s< 2$ and $0<\sigma<1$ (see (\[n.m0\]) in Lemma \[Lemma Jv\]) in Section \[Equivalence\]. We prove our main theorem by using Strichartz estimates, (\[c2\]) and (\[c3\]) in Section \[Proof\]. In Section \[Appendix\], we give the proofs of the properties of the operators $|J_{V}|^{s}$ and $A(s)$. We show that zero is not a resonance in Section \[resonance\]. Operators $|J_{V}|^{s}$ and $A(s)$ {#Operators} ================================== We will introduce the operators $|J_{V}|^{s}$ and $A(s)$ to consider appropriate Sobolev norms and to study the asymptotic behavior of solutions to the equation (\[NLS\]). Setting $M(t)=e^{-\frac{i}{2t}|x|^{2}},$ we may define $|J_{V}|^{s}(t):=M(-t)\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t)$. We shall use the standard notation $[B, D]=BD-DB$ for the commutator of two operators $B$ and $D$. The key commutator properties of the operator $|J_{V}|^{s}(t)$ are given in the following two propositions. \[proposition 1.1\] Let $A(s):=s\left(-\Delta_{V}\right)^{\frac{s}{2}}+\left[x\cdot\nabla, \left(-\Delta_{V}\right)^{\frac{s}{2}}\right].$ For $s>0,$ we have $$\left[ i\partial_{t}+\frac{1}{2}\Delta_{V},|J_{V}|^{s}(t)\right]=it^{s-1} M(-t)A(s)M(t) \label{1.11}$$ in two space dimensions. We also have \[proposition 1.01\] Let $W:=2V+x\cdot \nabla V$.
For $0<s<2,$ we obtain $$\begin{aligned} A(s)=c(s)\int_{0}^{\infty}\tau^{\frac{s}{2}}\left(\tau-\Delta_{V}\right)^{-1}W \left(\tau-\Delta_{V}\right)^{-1}d\tau \label{4.3.3}\end{aligned}$$ in two space dimensions, where $c(s)^{-1}=\int_{0}^{\infty} \tau^{\frac{s}{2}-1}(\tau+1)^{-1}d\tau$. Proposition \[proposition 1.1\] and Proposition \[proposition 1.01\] are well known from [@CGV2014] for the one-dimensional version of the Schrödinger equation (\[NLS\]) with a potential. For the convenience of readers, we give the proofs of these propositions in Appendix I of this paper. Strichartz Estimates {#Strichartz Estimates} ==================== Strichartz estimates are important tools for investigating the asymptotic behavior of solutions to evolution equations such as Schrödinger equations and wave equations. The well-known homogeneous Strichartz estimate $$\begin{aligned} \left\|e^{\frac{i}{2}t\Delta}f\right\|_{L^{p_{2}}(\mathbb{R};L^{q_{2}}(\mathbb{R}^{n}))}\lesssim\|f\|_{L^{2}(\mathbb{R}^{n})} \notag\end{aligned}$$ and the inhomogeneous Strichartz estimate $$\begin{aligned} \left\|\int_{s<t} e^{\frac{i}{2}(t-s)\Delta}F(s, \cdot)ds\right\|_{L^{p_{2}}(\mathbb{R};L^{q_{2}}(\mathbb{R}^{n}))}\lesssim \|F\|_{L^{p_{1}^{\prime}}(\mathbb{R};L^{q_{1}^{\prime}}(\mathbb{R}^{n}))} \notag\end{aligned}$$ hold for $n\geq 2$, $f \in L^{2}(\mathbb{R}^{n})$, and $F \in L^{p_{1}^{\prime}}(\mathbb{R};L^{q_{1}^{\prime}}(\mathbb{R}^{n}))$ if $\frac{2}{p_{j}}+\frac{n}{q_{j}}=\frac{n}{2}, 2\leq p_{j} \leq \infty,2\leq q_{j} \leq \frac{2n}{n-2}, q_{j} \neq \infty,$ $p_{j}^{\prime},q_{j}^{\prime}$ are the dual exponents of $p_{j}$ and $q_{j}, j=1, 2$ (see e.g. [@KeTao98]). We note that both endpoints $(p_{j},q_{j})=(\infty,2)$ and $(p_{j},q_{j})=(2, \frac{2n}{n-2})$ are included in the case $n \geq 3$, while only the endpoint $(p_{j},q_{j})=(\infty,2)$ is included in the case $n=2$, for $j=1, 2$.
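Returning to Proposition \[proposition 1.01\], the normalizing constant $c(s)$ there has a closed form: the Beta-type integral $\int_{0}^{\infty}\tau^{a-1}(1+\tau)^{-1}d\tau=\pi/\sin(\pi a)$ with $a=s/2$ gives $c(s)=\sin(\pi s/2)/\pi$. The following numerical sketch checks this (the substitution $\tau=e^{x}$, the truncation, and the grid are our own choices).

```python
import math

def c_inverse(s):
    # c(s)^{-1} = \int_0^\infty tau^{s/2 - 1} (1 + tau)^{-1} dtau,
    # computed after the substitution tau = e^x, which turns it into
    # \int_{-inf}^{inf} e^{x s/2} / (1 + e^x) dx  (trapezoidal rule, truncated)
    a = s / 2.0
    n, xmax = 40000, 100.0
    dx = 2.0 * xmax / n
    total = 0.0
    for i in range(n + 1):
        x = -xmax + i * dx
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(a * x) / (1.0 + math.exp(x)) * dx
    return total

for s in [0.5, 1.0, 1.5]:
    exact = math.pi / math.sin(math.pi * s / 2.0)  # closed form
    assert abs(c_inverse(s) - exact) < 1e-6
```

For $s=1$ the integral is exactly $\pi$, consistent with $c(1)=1/\pi$; the same quadrature can be reused for the integral defining $c(s)$ in Section \[Equivalence\].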
In recent years, Strichartz estimates for Schrödinger equations with potentials $V(x)$ have been investigated in a large number of works (see e.g. [@BPSTZ2004], [@BPSTZ2003], [@GV2012], [@AF2008], [@AFVV], [@Mi07], [@Mo18], [@ste]). However, the study of Strichartz estimates for 2d Schrödinger equations is essentially restricted to the cases of smallness of the magnetic potential and electric potential (see [@ste]), smallness of the magnetic potential while the electric potential can be large (see [@AF2008]), very fast decay of the potential together with the assumption that zero is a regular point (see [@Mi07]), or $V \geq 0$ and $\partial_r (rV) \leq 0$ (see [@Mo18]). In [@BPSTZ2003], Strichartz estimates for Schrödinger equations with the inverse-square potential $\frac{a}{|x|^2}$ in two space dimensions were considered by Burq, Planchon, Stalker and Tahvildar-Zadeh, where $a$ is a real number. In [@Mi07], Mizumachi obtained Strichartz estimates from the $L^{\infty}-L^{1}$ estimates in [@Sch]. To state the dispersive estimate in [@Sch], we recall the notion that zero is a regular point, as follows: [(see [@Sch])]{} \[def\] Let $V\not\equiv 0$ and set $U=\mathrm{sign}\,V$, $v=|V|^{\frac{1}{2}}$. Let $P_v$ be the orthogonal projection onto the span of $v$ and set $Q=I-P_v$. Moreover, let $$(G_0 f)(x):=-\frac{1}{2\pi}\int_{\mathbb{R}^2}\log|x-y|f(y)dy.$$ We say that zero is a regular point of the spectrum of $-\Delta_V$, provided $Q(U+vG_0v)Q$ is invertible on $QL^2(\mathbb{R}^2)$. We have \[Sch\] [(Dispersive Estimate in [@Sch])]{} Let $V:\mathbb{R}^2\rightarrow \mathbb{R}$ be a measurable function such that $|V(x)|\leq C(1+|x|)^{-\beta}, \beta>3.$ Assume in addition that zero is a regular point of the spectrum of $-\Delta_V$. Then we have $$\|e^{-\frac{i}{2}t\Delta_V}P_{ac}(H)f\|_{L^{\infty}(\mathbb{R}^2)}\lesssim|t|^{-1}\|f\|_{L^{1}(\mathbb{R}^2)}$$ for all $f \in L^{1}(\mathbb{R}^2)$.
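For orientation, in the free case $V\equiv 0$ the $|t|^{-1}$ decay in Proposition \[Sch\] can be seen explicitly on Gaussian data, for which $e^{\frac{i}{2}t\Delta}$ has a closed form. The sketch below (the Gaussian datum is our own choice) checks $\|u(t)\|_{L^{\infty}}\leq (2\pi t)^{-1}\|u_{0}\|_{L^{1}}$ numerically.

```python
import math

# free 2D Schrödinger evolution of u0(x) = exp(-|x|^2/2):
#   u(t,x) = (1 + it)^{-1} exp(-|x|^2 / (2(1 + it))),
# so sup_x |u(t,x)| = (1 + t^2)^{-1/2}, attained at x = 0,
# while ||u0||_{L^1} = 2*pi.
def sup_norm(t):
    return 1.0 / math.sqrt(1.0 + t * t)

l1_norm_u0 = 2.0 * math.pi

for t in [1.0, 2.0, 10.0, 100.0]:
    # dispersive bound: ||u(t)||_inf <= (2*pi*t)^{-1} * ||u0||_{L^1} = 1/t
    assert sup_norm(t) <= (1.0 / (2.0 * math.pi * t)) * l1_norm_u0
    print(t, sup_norm(t) * t)  # ratio tends to 1 as t grows
```

Since $t/\sqrt{1+t^{2}}\to 1$, the $t^{-1}$ rate is sharp already for Gaussian data; the content of Proposition \[Sch\] is that the perturbation $V$ does not destroy this rate on the absolutely continuous part.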
The requirement that zero is a regular point is the analogue of the usual condition that zero is neither an eigenvalue nor a resonance (generalized eigenvalue) of $-\Delta_V$. Under the assumptions of Proposition \[Sch\], the spectrum of $-\Delta_V$ on $[0, \infty)$ is purely absolutely continuous, and the spectrum on $(-\infty, 0)$ is pure point with at most finitely many eigenvalues of finite multiplicities (see [@Sch]). Moreover, any point on the real line different from zero is not a resonance, due to the results in [@GV2007]. Therefore, the only candidate for a resonance is the origin, and the assumption that zero is regular means that zero is not a resonance either. Next, we need the definition of the admissible couples appearing in the Strichartz estimates. The couple $(p,q)$ of positive numbers $p \geq 2, q \geq2 $ is called Schrödinger admissible if it satisfies $$\begin{aligned} \frac{1}{p}+\frac{1}{q}=\frac{1}{2}, \quad (p,q)\neq(2,\infty). \label{admissible pair}\end{aligned}$$ We have the following homogeneous Strichartz estimate by Proposition \[Sch\], Theorem 6.1 in [@Agmon], and the methods in [@KeTao98]. We omit the proof. [\[proposition 2.2\]]{} [(Homogeneous Strichartz Estimate)]{} Let $(p,q)$ be a Schrödinger admissible pair. If $\rm{(H1)} - \rm{(H3)}$ are satisfied, then the estimate $$\begin{aligned} \left\|e^{\frac{i}{2}t\Delta_{V}}f\right\|_{L^{p}(\mathbb{R};L^{q}(\mathbb{R}^{2}))}\lesssim\|f\|_{L^{2}(\mathbb{R}^{2})}\end{aligned}$$ holds for all $f \in L^{2}(\mathbb{R}^{2})$. By using Proposition \[proposition 2.2\] and the Christ–Kiselev lemma (Lemma A.1 in [@BM16]), we have the following result. We skip the proof here. [\[theorem 2.2\]]{} [(Inhomogeneous Strichartz Estimate)]{} Let $a, b \in \mathbb{R}$ and let $(p_{j},q_{j})$ be Schrödinger admissible pairs for $j=1, 2$. Assume that $V(x)$ satisfies the hypotheses $\rm{(H1)}-\rm{(H3)}$.
Then we have $$\begin{aligned} \left\|\int_{a}^{t}e^{\frac{i}{2}(t-s)\Delta_{V}}F(s,\cdot)ds\right\|_{L^{p_{2}}( [a, b];L^{q_{2}}(\mathbb{R}^{2}))}\lesssim\|F\|_{L^{p_{1}^{\prime}}([a, b];L^{q_{1}^{\prime}}(\mathbb{R}^{2}))}, \label{3.1}\\ \forall F\in L_{loc}^{1}([a, b],L^{2}(\mathbb{R}^{2}))\cap L^{p_{1}^{\prime}}([a, b],L^{q_{1}^{\prime}}(\mathbb{R}^{2})), \notag\end{aligned}$$ where $p_{j}^{\prime},q_{j}^{\prime}$ are the dual exponents of $p_{j}$ and $q_{j}, j=1, 2$. The estimates of $A(s)$ {#The estimates of $A(s)$} ======================= To derive estimates of $A(s)$, we use free resolvent estimates, following the approach of [@Li2017]. [(Free Resolvent Estimates)]{} \[Lemma Resolvent Estimate 2 \] i) : For any $ 1< q <\infty, \ \ 0 < s_0 \leq 1,$ one can find $C = C(q,s_0) > 0$ so that for any $\tau >0$ we have $$\label{ReEs2} \left\|\left(\tau-\Delta\right)^{-1}f\right\|_{L^{q}(\mathbb{R}^{2})} \leq C \tau^{-s_0} \|f\|_{L^{k}(\mathbb{R}^{2})}, \ \ \frac{1}{k}=\frac{1}{q}+1-s_0;$$ ii) : For any $$1< q <\infty, \ \ 0 < s_0 \leq 1, \ \ a > 2(1-s_0),$$ one can find $C=C(q,s_0,a)>0$ so that for any $\tau >0$ we have $$\label{ReEs21} \left\|\left(\tau-\Delta\right)^{-1}<x>^{-a}f\right\|_{L^{q}(\mathbb{R}^{2})} \leq C \tau^{-s_0} \|f\|_{L^{q}(\mathbb{R}^{2})};$$ iii) : For any $$1< q <\infty, \ \ 0 < s_0 \leq 1, \ \ a > 2(1-s_0),$$ one can find $C=C(q,s_0,a)>0$ so that for any $\tau >0$ we have $$\label{ReEs22} \left\|<x>^{-a}\left(\tau-\Delta\right)^{-1}f\right\|_{L^{q}(\mathbb{R}^{2})} \leq C \tau^{-s_0} \|f\|_{L^{q}(\mathbb{R}^{2})}.$$ To prove (\[ReEs2\]) we take advantage of the fact that the Green function $$G(x-y;\tau) = \left(\tau-\Delta\right)^{-1}(x-y)$$ of the operator $\left(\tau-\Delta\right)^{-1}$ can be computed explicitly, indeed we have $$G(x;\tau) = (2\pi)^{-2}\int_{\mathbb{R}^2} e^{-\mathrm{i} x\xi} \frac{d\xi}{\tau+|\xi|^2} = (2\pi)^{-1} K_{0}(\sqrt{\tau}|x|),$$ where $K_0(r)$ is the modified Bessel function of order $0.$ We have the 
following estimates of $K_0(r),$ $$\label{eq.be1} |K_0(r)| \lesssim \left\{ \begin{array}{ll} 1+|\ln r |, & \hbox{if $0 < r \leq 1$;} \\ e^{-r}/ \sqrt{r}, & \hbox{if $r>1.$} \end{array} \right.$$ This estimate implies $K_0(|x|) \in L^m(\mathbb{R}^2)$ for any $m \in [1,\infty).$ In this way we deduce $$\int_{\mathbb{R}^2} \left| G(x;\tau) \right|^m d x = (2\pi)^{-m} \int_{\mathbb{R}^2} \left| K_{0}(\sqrt{\tau}|x|) \right|^m d x$$ $$\hspace{3.2cm} = \frac{(2\pi)^{-m}}{\tau} \int_{\mathbb{R}^2} \left| K_{0}(|y|) \right|^m d y = \frac{c_1}{\tau},$$ where $y=\sqrt{\tau}x$, and we can write $$\label{eq.be2} \left\|K_0(\sqrt{\tau} \ \cdot)\right\|_{L^{m}(\mathbb{R}^{2})} = \frac{c_2}{\tau^{1/m}}, \ \forall m \in [1,\infty).$$ Applying the Young inequality $$\left\|\left(\tau-\Delta\right)^{-1}f\right\|_{L^{q}(\mathbb{R}^{2})} = \left\|K_0(\sqrt{\tau}\ \cdot \ )*f\right\|_{L^{q}(\mathbb{R}^{2})}$$ $$\hspace{4.5cm} \leq \left\|K_0(\sqrt{\tau}\ \cdot \ )\right\|_{L^{m}(\mathbb{R}^{2})}\|f\|_{L^{k}(\mathbb{R}^{2})},$$ where $1+1/q=1/m+1/k;$ combining this with (\[eq.be2\]) and choosing $s_0 = 1/m$, we deduce (\[ReEs2\]). So the assertion i) is verified. To get (\[ReEs21\]), we apply the estimate $$\left\|\left(\tau-\Delta\right)^{-1}<x>^{-a}f\right\|_{L^{q}(\mathbb{R}^{2})} \lesssim \tau^{-s_0} \|<x>^{-a} f\|_{L^{k}(\mathbb{R}^{2})}, \ 1+1/q=s_0+1/k,$$ and the Hölder inequality $$\|<x>^{-a} f\|_{L^{k}(\mathbb{R}^{2})} \lesssim \| f\|_{L^{q}(\mathbb{R}^{2})}, \quad a> 2 \left( \frac{1}{k}- \frac{1}q \right),$$ and we arrive at (\[ReEs21\]). Finally, (\[ReEs22\]) follows from (\[ReEs21\]) by a duality argument. This completes the proof.
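The bounds on $K_0$ can be checked numerically from the integral representation $K_0(r)=\int_0^\infty e^{-r\cosh t}\,dt$. The sketch below (the quadrature step, truncation, and the explicit constants are our own choices) verifies the logarithmic bound on $(0,1]$ and the exponential-decay bound for $r>1$ at sample points.

```python
import math

def k0(r):
    # modified Bessel function K_0 via K_0(r) = \int_0^\infty exp(-r cosh t) dt,
    # approximated by the trapezoidal rule on [0, 30]
    n, tmax = 6000, 30.0
    dt = tmax / n
    total = 0.5 * math.exp(-r)  # endpoint t = 0, where cosh(0) = 1
    for i in range(1, n + 1):
        total += math.exp(-r * math.cosh(i * dt))
    return total * dt

# sanity check against the tabulated value K_0(1) ≈ 0.421024
assert abs(k0(1.0) - 0.421024) < 1e-4

# |K_0(r)| <= 1 + |ln r| on (0, 1]  (the constant 1 happens to suffice)
for r in [0.01, 0.1, 0.5, 1.0]:
    assert k0(r) <= 1.0 + abs(math.log(r))

# |K_0(r)| <= C e^{-r} / sqrt(r) for r > 1, with C = 1.3 (our constant)
for r in [1.5, 2.0, 5.0]:
    assert k0(r) <= 1.3 * math.exp(-r) / math.sqrt(r)
```

The constant $1.3$ is consistent with the asymptotics $K_0(r)\sim\sqrt{\pi/2r}\,e^{-r}$, since $\sqrt{\pi/2}\approx 1.2533$.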
The estimates (\[ReEs2\]), (\[ReEs21\]) are not valid for $q=\infty,$ $s_0=0,$ $k=1,$ but they are valid for $$1 \leq q \leq \infty, \ \ 0 < s_0 \leq 1, \ \ \frac{1}{k}=\frac{1}{q}+1-s_0.$$ In particular, they are true for $q=k=1,s_0=1.$ Further, (\[ReEs22\]) holds when $1 < q \leq \infty$ as in Lemma \[Lemma Resolvent Estimate 2 \], but also in the following case $$q=1, \ 0 < s_0 \leq 1, \ a >2(1-s_0).$$ In Proposition \[proposition 1.01\], for $0<s<2$ we have $$\begin{aligned} A(s)=c(s)\int_{0}^{\infty}\tau^{\frac{s}{2}}\left(\tau-\Delta_{V}\right)^{-1}W \left(\tau-\Delta_{V}\right)^{-1}d\tau,\end{aligned}$$ where $W=2V+x\cdot \nabla V$. Let $G(t,x,y) = e^{t\Delta_V }(x,y) $ be the heat kernel of the Schrödinger operator $-\Delta_{V},$ i.e. it solves $$\begin{aligned} \left\{ \begin{array}{l} \partial_{t}G=(\Delta-V) G, \notag \\ G(0, x,y) = \delta (x-y), \notag \end{array} \right.\end{aligned}$$ where $y\in \mathbb{R}^{2}$. Similarly, $$e^{t\Delta}(x, y)= c_3 t^{-1} \exp \left\{-\frac{ |x-y|^{2}}{4t}\right\}$$ is the heat kernel of $-\Delta,$ so that $$e^{ \alpha t \Delta}(x, y) = c_4 t^{-1} \exp \left\{-\frac{|x-y|^{2}}{4\alpha t}\right\}, \ \ \forall \alpha >0.$$ Since we consider the case $V(x) \geq 0,$ one can use the Feynman–Kac formula and the results in [@Si05] to deduce the heat kernel estimate $$\begin{aligned} \label{eq.hk1} 0 \leq e^{t \Delta_V}(x,y) \lesssim t^{-1} \exp \left\{-\frac{|x-y|^{2}}{4\beta t}\right\},\end{aligned}$$ where $\beta>0$. Using (\[eq.hk1\]), we get the following estimate. \[lemma 4.1\] Assume that the hypotheses $\rm {(H1) }$ and $\rm {(H2) }$ are satisfied. Then there exists a positive $\beta$ such that $$\begin{aligned} 0 \leq e^{t\Delta_V }(x,y) \lesssim e^{\beta t \Delta}(x, y).
\label{4.13}\end{aligned}$$ Without the assumption $V \geq 0$, one can use the estimates from [@AM1998] and deduce only the estimate $$\begin{aligned} \left|e^{t\Delta_V}(x,y)\right| \lesssim e^{\gamma t \Delta}(x, y) e^{\omega t},\end{aligned}$$ where $\gamma>0$ and $\omega$ depends on $V(x)$. This is not sufficient for our goal of controlling the solution to (\[NLS\]) with a potential $V(x)$. Further, we get the following lemma. \[Lemma A(s)\] Let $V(x)$ satisfy $\rm{(H1)}$ and $\rm{(H2)}$. We have $$\begin{aligned} \left\|A(s)f\right\|_{L^{q}(\mathbb{R}^{2})} \lesssim \left\|f\right\|_{L^{q^{\prime}}(\mathbb{R}^{2})}\label{A(s)1}\end{aligned}$$ for $1\leq q \leq 2$ and $1< s<2$, where $\frac{1}{q}+\frac{1}{q^{\prime}}=1$. By $(\tau+a)^{-1}=\int_{0}^{\infty}e^{-(a+\tau)t}dt,$ we obtain $$\begin{aligned} \left(\tau-\Delta_{V}\right)^{-1}f_\pm&=&\int_{0}^{\infty}e^{-\left(\tau-\Delta_{V}\right)t}f_\pm dt \notag \\ &=&\int_{0}^{\infty}e^{-\tau t}e^{t\Delta_{V}}f_\pm dt. \label{4.23}\end{aligned}$$ Since $e^{t\Delta_V}(x, y)$ is the heat kernel of $-\Delta_V$, we have $$\begin{aligned} e^{t\Delta_{V}}f_\pm(x)=\int_{\mathbb{R}^{2}} e^{t\Delta_{V}}(x,y)f_\pm(y)dy. \label{add2}\end{aligned}$$ By the estimate (\[4.13\]) in Lemma \[lemma 4.1\] and (\[add2\]), from (\[4.23\]) there exists a positive $\beta$ such that $$\begin{aligned} \left| \left(\tau-\Delta_{V}\right)^{-1}f_\pm \right| & \lesssim &\int_{0}^{\infty}e^{-\tau t} e^{\beta t \Delta}f_\pm dt \notag \\ &\lesssim & \left(\frac{\tau}{\beta}-\Delta\right)^{-1}f_\pm.
\label{4.25}\end{aligned}$$ Given any $q \in [1,2]$, we can apply Proposition \[proposition 1.01\], (\[4.25\]), and the Hölder inequality to get $$\begin{aligned} \left\|A(s)f\right\|_{L^{q}(\mathbb{R}^{2})}&\lesssim& \int_{0}^{\infty}\tau^{\frac{s}{2}} \left\|\left(\tau-\Delta_V \right)^{-1}W \left(\tau-\Delta_{V}\right)^{-1}f\right\|_{L^{q}(\mathbb{R}^{2})}d\tau \notag\\ &\lesssim &\left(\|f_+\|_{L^{q^{\prime}}(\mathbb{R}^{2})}+\|f_-\|_{L^{q^{\prime}}(\mathbb{R}^{2})}\right)\|W\|_{L^{\omega(q)}(\mathbb{R}^{2})} \notag \\ && \times \int_{1}^{\infty}\tau^{\frac{s}{2}} \left\|\left(\frac{\tau}{\beta}-\Delta\right)^{-1}\right\|_{L^{q}(\mathbb{R}^{2})\rightarrow L^{q}(\mathbb{R}^{2}) } \notag \\ &&\qquad \times \left\|\left(\frac{\tau}{\beta}-\Delta\right)^{-1}\right\|_{L^{q^{\prime}}(\mathbb{R} ^{2})\rightarrow L^{q^{\prime}}(\mathbb{R}^{2})}d\tau \notag \\ &&+ \left(\|f_+\|_{L^{q^{\prime}}(\mathbb{R}^{2})}+\|f_-\|_{L^{q^{\prime}}(\mathbb{R}^{2})}\right)\|<x>^{2a}W\|_{L^{\omega(q)}(\mathbb{R}^{2})} \notag \\ && \times \int_{0}^{1}\tau^{\frac{s}{2}} \left\|\left(\frac{\tau}{\beta}-\Delta\right)^{-1}<x>^{-a}\right\|_{L^{q}(\mathbb{R}^{2})\rightarrow L^{q}(\mathbb{R}^{2}) } \notag \\ &&\qquad \times \left\|<x>^{-a}\left(\frac{\tau}{\beta}-\Delta\right)^{-1}\right\|_{L^{q^{\prime}}(\mathbb{R} ^{2})\rightarrow L^{q^{\prime}}(\mathbb{R}^{2})}d\tau \notag, \end{aligned}$$ where $ \omega=\omega(q)$ is determined by $\frac{1}{\omega}= \frac{2}{q}-1$ and $a = a(q) $ is an appropriate parameter to be chosen so that we can apply Lemma \[Lemma Resolvent Estimate 2 \] (with $s_0=1$ in (\[ReEs2\]), $s_0=3/4$ in (\[ReEs21\]) and (\[ReEs22\])), i.e. we have to require $$\label{eq.pr1} \frac{1}{2} < a(q) < 1 + \frac{\beta}2 - \frac{2}q,$$ where $\beta>3$ is from our assumption $\mathrm{(H1)}$.
Then we can write $$\begin{aligned} \left\|A(s)f\right\|_{L^{q}(\mathbb{R}^{2})} &\lesssim& \|f\|_{L^{q^{\prime}}(\mathbb{R}^{2})}\|W\|_{L^{\omega}(\mathbb{R}^{2})} \int_{1}^{\infty}\tau^{\frac{s}{2}-2}d\tau \notag \\ &&\qquad+ \|f\|_{L^{q^{\prime}}(\mathbb{R}^{2})}\|<x>^{2a}W\|_{L^{\omega}(\mathbb{R}^{2})} \int_{0}^{1}\tau^{\frac{s}{2}-\frac{3}{2}}d\tau \notag \\ & \lesssim & \|f\|_{L^{q^{\prime}}(\mathbb{R}^{2})}\|W\|_{L^{\omega}(\mathbb{R}^{2})}+ \|f\|_{L^{q^{\prime}}(\mathbb{R}^{2})}\|<x>^{2a}W\|_{L^{\omega}(\mathbb{R}^{2})}. \notag\end{aligned}$$ Note that our choice guarantees that we have $$\begin{aligned} && \left\|<x>^{2a}W\right\|_{L^{\omega}(\mathbb{R}^{2})} \lesssim\|V\|_{L^{\omega,2a}\left(\mathbb{R}^{2}\right)}+\left\|\sum_{j=1}^{2}x_{j}\frac{\partial V}{\partial x_{j}}\right\|_{L^{\omega, 2a}\left(\mathbb{R}^{2}\right)}\leq C. \label{4.31}\end{aligned}$$ So the assertion is proved. \[remark A(s)\] By using a method similar to that of Lemma \[Lemma A(s)\], we also have $$\begin{aligned} \left\|A(s)f\right\|_{L^{q}(\mathbb{R}^{2})}\lesssim \left\|f\right\|_{L^{q}(\mathbb{R}^{2})} \label{add}\end{aligned}$$ for $1\leq q \leq \infty$ and $1<s<2$. Equivalence of $\left(-\Delta_{V}\right)^{\frac{s}{2}}$ and $\left(-\Delta \right)^{\frac{s}{2}}$ in the $L^{2}(\mathbb{R}^{2})$ norm sense {#Equivalence} ======================================================================================================================================= To estimate $\left\| |J_{V}|^{s} (|u|^{p-1}u)\right\|_{ L^{2}(\mathbb{R}^{2})}$, which will be needed below, we study the operator $\left(-\Delta_{V}\right)^{\frac{s}{2}}$ via the heat kernels of the Schrödinger operators $-\Delta_{V}$ and $-\beta \Delta$ on $\mathbb{R}^{2}$, where $\beta>0$. By Lemma \[lemma 4.1\], we obtain the following lemma. \[lemma 4.2\] Assume that the hypotheses $\rm {(H1) }$ and $\rm {(H2) }$ are satisfied.
For $s\geq 0$, we have $$\begin{aligned} \left\| \left(-\Delta\right)^{\frac{s}{2}} f\right\|_{L^{2}(\mathbb{R}^{2})} \lesssim \left\| \left(-\Delta_{V}\right)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})}. \label{4.9}\end{aligned}$$ Obviously (\[4.9\]) holds in the case of $s=0$. We focus our attention on the case $s>0$. We show that $$\begin{aligned} \left\| \left(-\Delta_{V}\right)^{-\frac{s}{2}} \left(-\Delta\right)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})} \lesssim \left\| f \right\|_{L^{2}(\mathbb{R}^{2})} \label{04.10}\end{aligned}$$ for $s>0$.\ Using $$\begin{aligned} a^{-\frac{s}{2}}&=&\frac{1}{\Gamma (\frac{s}{2})}\int_{0}^{\infty}e^{-at}t^{\frac{s}{2}-1}dt\notag\end{aligned}$$ for $s>0$, we have $$\begin{aligned} \left(\left(-\Delta_{V}\right)^{-\frac{s}{2}}g\right)(x)=\frac{1}{\Gamma (\frac{s}{2})}\int_{0}^{\infty}e^{\Delta_{V}t}g(x)t^{\frac{s}{2}-1}dt.\label{04.12}\end{aligned}$$ Since $\rm {(H1) }$ and $\rm {(H2) }$ imply that $-\Delta_{V}$ is a positive self-adjoint operator on $L^{2}(\mathbb{R}^{2})$, for every $t>0$ the operator $e^{\Delta_{V}t}$ has a jointly continuous integral kernel $e^{\Delta_{V}t}(x, y)$. Thus we have $$\begin{aligned} e^{\Delta_{V}t}g(x)=\int_{\mathbb{R}^{2}}e^{\Delta_{V}t}(x, y) g(y)dy.\notag\end{aligned}$$ By the estimate (\[4.13\]) in Lemma \[lemma 4.1\], we have $$\begin{aligned} 0\leq e^{\Delta_{V}t}g(x)&\lesssim& \int_{\mathbb{R}^{2}} e^{\beta t \Delta}(x,y)g(y)dy \notag \\ &=&e^{\beta t \Delta}g(x) \label{add4.7}\end{aligned}$$ for $g\geq0$, where $e^{\beta t \Delta}(x,y)$ is the heat kernel of the Schrödinger operator $-\beta\Delta$, $\beta>0$. Then, from (\[04.12\]) and (\[add4.7\]), we have $$\begin{aligned} \left(-\Delta_{V}\right)^{-\frac{s}{2}}g(x) \lesssim \left(-\Delta \right)^{-\frac{s}{2}}g(x)\notag\end{aligned}$$ for $s>0$ and $g\geq0$.
Thus we have $$\begin{aligned} \left(-\Delta_{V}\right)^{-\frac{s}{2}}\left(-\Delta \right)^{\frac{s}{2}}f(x) \lesssim f(x)\notag\end{aligned}$$ for $s>0$ and $f\geq0$, where $f=\left(-\Delta\right)^{-\frac{s}{2}}g$. Let $B_s=\left(-\Delta_{V}\right)^{-\frac{s}{2}}\left(-\Delta \right)^{\frac{s}{2}}$. Decomposing $f = f_+-f_-$, $g=g_+-g_-$, we can deduce $$\begin{aligned} |<B_sf, g>|&=&|<B_s(f_{+}-f_{-}), g_{+}-g_{-}>|\\ &\leq&|<B_sf_{+}, g_{+}>|+|<B_sf_{+}, g_{-}>|\\ \qquad &+&|<B_sf_{-}, g_{+}>|+|<B_sf_{-}, g_{-}>|\\ &\lesssim& \|f\|_{L^{2}(\mathbb{R}^2)}\|g\|_{L^{2}(\mathbb{R}^2)}\end{aligned}$$ for $s>0$, without requiring $f \geq 0, g \geq 0.$ The inequality $$\|B_sf\|_{L^2} \lesssim\|f\|_{L^{2}(\mathbb{R}^2)}$$ therefore holds for $s>0$, and we have the estimate (\[04.10\]).\ It is difficult to obtain Gaussian estimates for the heat kernel of the Schrödinger operator $-\Delta+V(x)$, especially for $V \leq 0$. There are some sharp Gaussian estimates for the heat kernel of the Schrödinger operator $-\Delta+V(x)$ with $V \geq 0 $ (see e.g. [@AM1998], [@BDS2016] and [@Zhang2001]). In particular, sharp Gaussian estimates for the heat kernel of the Schrödinger operator $-\Delta+V(x)$ with nontrivial $V\geq 0$ fail in $\mathbb{R}^{2}$ and $\mathbb{R}^{1}$ (see [@BDS2016]). \[Lemma Jv0\] Let $V(x)$ satisfy $(\rm{H1})$ and $(\rm{H2})$. For any $0<s<2$ and for any $ q> 2$, we have $$\begin{aligned} \label{n.m00} \left\|\left(-\Delta_{V}\right)^{\frac{s}{2}}f-\left(-\Delta \right)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})}\lesssim \|f\|_{L^{q}(\mathbb{R}^{2})}.\end{aligned}$$ Since $(\tau+a)^{-1}=\int_{0}^{\infty}e^{-(a+\tau)t}dt,$ we have $$\begin{aligned} \left(\tau-\Delta_{V}\right)^{-1}g(x)=\int_{0}^{\infty}e^{-\tau t}e^{\Delta_{V}t}g(x) dt.\end{aligned}$$ Let $e^{t\Delta_V}(x,y)$ be the heat kernel of the Schrödinger operator $-\Delta_{V}$.
Then we have $$\begin{aligned} e^{\Delta_{V}t}g(x)=\int_{\mathbb{R}^{2}}e^{t\Delta_V}(x,y)g(y)dy.\end{aligned}$$ By Lemma \[lemma 4.1\], there exists a positive $\beta$ such that $$\begin{aligned} \label{a0} \left(\tau-\Delta_{V}\right)^{-1}g(x)&\lesssim&\int_{0}^{\infty}e^{-\tau t} e^{\beta \Delta t}g(x) dt \notag \\ &\lesssim&\left(\frac{\tau}{\beta}-\Delta\right)^{-1}g(x)\end{aligned}$$ for $g\geq 0$. Then we have $$\begin{aligned} \|\left(\tau-\Delta_{V}\right)^{-1}g\|_{L^2(\mathbb{R}^{2})}\lesssim\left\|\left(\frac{\tau}{\beta}-\Delta\right)^{-1}g\right\|_{L^2(\mathbb{R}^{2})} \label{a1}\end{aligned}$$ without requiring $g\geq 0$. Now we can use the relation $$\begin{aligned} \left(-\Delta_{V}\right)^{\frac{s}{2}}f=c(s)(-\Delta_{V})\int_{0}^{\infty}\tau^{\frac{s}{2}-1} \left(\tau-\Delta_{V}\right)^{-1}fd\tau \notag\end{aligned}$$ with $$\begin{aligned} c(s)^{-1}=\int_{0}^{\infty}\tau^{\frac{s}{2}-1}(\tau+1)^{-1}d\tau \notag\end{aligned}$$ for $0<s<2$. Therefore, we have $$\begin{aligned} \left(-\Delta_{V}\right)^{\frac{s}{2}}f&=&c(s)(-\Delta_{V})\int_{0}^{\infty}\tau^{\frac{s}{2}-1} \left(\tau-\Delta_{V}\right)^{-1}fd\tau \notag\\ &=&c(s)(-\Delta_{V})\int_{0}^{\infty}\tau^{\frac{s}{2}-1}\left[\left( \tau-\Delta_{V}\right)^{-1}-\left(\tau-\Delta\right)^{-1}\right]fd\tau \notag \\ &&\qquad+c(s)(-\Delta_{V}) \int_{0}^{\infty}\tau^{\frac{s}{2}-1} \left(\tau-\Delta\right)^{-1} f d\tau.\end{aligned}$$ Using the relations $$\begin{aligned} &&\qquad(-\Delta_{V})\left[\left(\tau-\Delta_{V}\right)^{-1}-\left(\tau-\Delta\right)^{-1}\right] \\ &&= -(-\Delta_{V})\left(\tau-\Delta_{V}\right)^{-1}V\left(\tau-\Delta\right)^{-1} \\ &&=-\left(\tau-\Delta_{V}\right) \left(\tau-\Delta_{V}\right)^{-1}V\left(\tau-\Delta\right)^{-1} + \tau \left(\tau-\Delta_{V}\right)^{-1}V\left(\tau-\Delta\right)^{-1}\\ &&= -V\left(\tau-\Delta\right)^{-1} + \tau \left(\tau-\Delta_{V}\right)^{-1}V\left(\tau-\Delta\right)^{-1}, \end{aligned}$$ we find $$\begin{aligned}
\left(-\Delta_{V}\right)^{\frac{s}{2}}f &= &-c(s)\int_{0}^{\infty}\tau^{\frac{s}{2}-1} V\left(\tau-\Delta\right)^{-1} fd\tau + c(s)\int_{0}^{\infty}\tau^{\frac{s}{2}}\left(\tau-\Delta_{V}\right)^{-1} V\left(\tau-\Delta\right)^{-1} fd\tau \notag \\ &&\qquad-c(s)\Delta\int_{0}^{\infty}\tau^{\frac{s}{2}-1} \left(\tau-\Delta\right)^{-1} fd\tau+c(s) \int_{0}^{\infty}\tau^{\frac{s}{2}-1} V\left(\tau-\Delta\right)^{-1} f d\tau \notag\\ &=&\left(-\Delta\right)^{\frac{s}{2}}f+c(s)\int_{0}^{\infty}\tau^{\frac{s}{2}}\left( \tau-\Delta_{V}\right)^{-1}V\left(\tau-\Delta\right)^{-1}fd\tau. \notag\end{aligned}$$ Therefore we obtain $$\begin{aligned} \left(-\Delta_{V}\right)^{\frac{s}{2}}f &=&\left(-\Delta\right)^{\frac{s}{2}}f + c(s)\int_{0}^{\infty}\tau^{\frac{s}{2}}\left( \tau-\Delta_{V}\right)^{-1}V\left(\tau-\Delta\right)^{-1}fd\tau \label{4.180}\end{aligned}$$ for $0<s<2$. By (\[a1\]) and the Hölder inequality with $1/q+1/r=1/2$ , we have $$\begin{aligned} & \quad \quad \int_{0}^{\infty} \left\|\tau^{\frac{s}{2}}\left( \tau-\Delta_{V}\right)^{-1}V\left(\tau-\Delta\right)^{-1}f\right\|_{L^{2}(\mathbb{R}^{2})}d\tau \label{estimate} \\ &\lesssim \|f\|_{L^{q}(\mathbb{R}^{2})}\|V\|_{L^{r}(\mathbb{R}^{2})} \int_{1}^{\infty}\tau^{\frac{s}{2}} \left\|\left(\frac{\tau}{\beta}-\Delta\right)^{-1}\right\|_{L^{2}(\mathbb{R}^{2})\rightarrow L^{2}(\mathbb{R}^{2}) } \left\|\left(\tau-\Delta\right)^{-1}\right\|_{L^{q}(\mathbb{R} ^{2})\rightarrow L^{q}(\mathbb{R}^{2})}d\tau \notag \\ &+ \|f\|_{L^{q}(\mathbb{R}^{2})}\|<x>^{2a}V\|_{L^{r}(\mathbb{R}^{2})} \notag \\ & \times \int_{0}^{1}\tau^{\frac{s}{2}} \left\|\left(\frac{\tau}{\beta}-\Delta\right)^{-1}<x>^{-a}\right\|_{L^{2}(\mathbb{R}^{2})\rightarrow L^{2}(\mathbb{R}^{2}) } \left\|<x>^{-a}\left(\tau-\Delta\right)^{-1}\right\|_{L^{q}(\mathbb{R} ^{2})\rightarrow L^{q}(\mathbb{R}^{2})}d\tau, \notag \end{aligned}$$ here $ a$ is chosen so that $ a > 1-s/2.$ Now taking $$a= 1-\frac{s}{2} + \varepsilon, \ \ s_0 = \frac{1}2 + \frac{s}4 - 
\frac{\varepsilon}4,$$ where $0<\varepsilon<\frac{s}{2}$, we can apply Lemma \[Lemma Resolvent Estimate 2 \], since $$\frac{a}2 + s_0 = 1 + \frac{\varepsilon}{4} > 1,$$ and get $$\begin{aligned} & &\int_{0}^{\infty} \left\|\tau^{\frac{s}{2}}\left( \tau-\Delta_{V}\right)^{-1}V\left(\tau-\Delta\right)^{-1}f\right\|_{L^{2}(\mathbb{R}^{2})}d\tau \\ &\lesssim & \|f\|_{L^{q}(\mathbb{R}^{2})}\|V\|_{L^{r}(\mathbb{R}^{2})} \int_{1}^{\infty}\tau^{\frac{s}{2}-2}d\tau + \|f\|_{L^{q}(\mathbb{R}^{2})}\|<x>^{2a}V\|_{L^{r}(\mathbb{R}^{2})} \int_{0}^{1}\tau^{\frac{s}{2}-2 s_0}d\tau \notag \\ &\lesssim & \|f\|_{L^{q}(\mathbb{R}^{2})}, \notag\end{aligned}$$ since $$\frac{s}2 - 2 s_0 = -1 + \frac{\varepsilon}2 > -1$$ and $$r(3-2a)=r(1+s-2\varepsilon)>2.$$ \[Lemma Jv\] Let $V(x)$ satisfy $(\rm{H1})$ and $(\rm{H2})$. Then we have the estimates a) : for any $0 \leq s < 1$ we have $$\label{eq.n.m11} \left\|\left(-\Delta_{V}\right)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})} \lesssim \left\|\left(-\Delta \right)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})};$$ b) : for any $ 1 \leq s <2,$ and $0 < \sigma < 1,$ we have $$\begin{aligned} \label{n.m0} \left\|\left(-\Delta_{V}\right)^{\frac{s}{2}}f-\left(-\Delta \right)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})}\lesssim \|(-\Delta )^{\frac{s}{2}}f\|_{L^{2}(\mathbb{R}^{2})}^{\frac{\sigma}{s}}\|f\|_{L^{2}(\mathbb{R}^{2})}^{1-\frac{\sigma}{s}}.\end{aligned}$$ We have (\[eq.n.m11\]) by Lemma \[Lemma Jv0\] and the Sobolev embedding $$\begin{aligned} \label{add1} \|f\|_{L^{q}(\mathbb{R}^{2})} \lesssim \|(-\Delta)^{\frac{s}{2}}f\|_{L^2(\mathbb{R}^2)}, \end{aligned}$$ where $\frac{s}{2}= \frac{1}{2}-\frac{1}{q}$ and $q>2$.\ Applying the Sobolev embedding (\[add1\]) with $s$ replaced by $\sigma,$ where $\frac{\sigma}{2}=\frac{1}{2}-\frac{1}{q}$ and $q>2,$ and the interpolation inequality $$\|(-\Delta)^{\frac{\sigma}{2}}f\|_{L^2(\mathbb{R}^2)} \lesssim \|(-\Delta)^{\frac{s}{2}}f\|_{L^2(\mathbb{R}^2)}^\theta
\|f\|^{1-\theta}_{L^2(\mathbb{R}^2)},$$ with $\theta=\frac{\sigma }{s}$, we get (\[n.m0\]) from Lemma \[Lemma Jv0\]. For any $1<s<2$, we have $$\begin{aligned} \label{n.m1} \left\|\left(-\Delta_{V}\right)^{\frac{s}{2}}f-\left(-\Delta \right)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})}\lesssim \|(-\Delta_V)^{\frac{s}{2}}f\|_{L^{2}(\mathbb{R}^{2})}^{\frac{\sigma}{s}}\|f\|_{L^{2}(\mathbb{R}^{2})}^{1-\frac{\sigma}{s}},\end{aligned}$$ due to (\[n.m0\]) and Lemma \[lemma 4.2\]. By Lemma \[lemma 4.2\] and (\[eq.n.m11\]) in Lemma \[Lemma Jv\], we have the following equivalence property directly. \[equi lemma\] Suppose that $\rm{(H1)}$ and $\rm{(H2)}$ are satisfied. For any $0\leq s<1$, we have $$\begin{aligned} \label{equi} \left\|\left(-\Delta_{V}\right)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})}\sim \|(-\Delta )^{\frac{s}{2}}f\|_{L^{2}(\mathbb{R}^{2})}.\end{aligned}$$ To estimate $\left\|(-\Delta_{V})^{\frac{s}{2}}M(t)\left(|u|^{p-1}u\right)\right\|_{ L^{2}(\mathbb{R}^{2})}$ with $1<s<2$, we need an estimate for $\left\|u\right\|_{L^{\infty}(\mathbb{R}^{2})}$. We have the following lemma. \[Lemma 5.10\] Suppose that $\rm{(H1)}$ and $\rm{(H2)}$ are satisfied. Then for any $1<s<2$, we have $$\begin{aligned} \label{n.m10} \left\|f\right\|_{L^{\infty}(\mathbb{R}^{2})} \lesssim \|(-\Delta_V)^{\frac{s}{2}}f\|^{\frac{1}{s}}_{L^{2}(\mathbb{R}^{2})} \left\|f\right\|^{1-\frac{1}{s}}_{L^{2}(\mathbb{R}^{2})}.\end{aligned}$$ By the Hölder inequality, we have $$\begin{aligned} \|\mathcal{F}f\|_{L^{1}(\mathbb{R}^{2})}& \lesssim&\tau \|\mathcal{F}f\|_{L^{2}(|\xi|\leq \tau)} \notag \\ &&\quad + \||\xi|^{s}\mathcal{F}f\|_{L^{2}(|\xi|\geq \tau)}\||\xi|^{-s}\|_{L^{2}(|\xi|\geq \tau)} \label{FV1} \\ & \lesssim& \tau \|f\|_{L^{2}(\mathbb{R}^{2})}+\sqrt{\frac{1}{2(s-1)}} \tau^{1-s}\left\|(-\Delta)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})} \notag\end{aligned}$$ for any $s>1$ and $\tau >0$.
Let $\tau=\left( \sqrt{\frac{1}{2(s-1)}} \|(-\Delta)^{\frac{s}{2}}f\|_{L^{2}(\mathbb{R}^{2})} \right)^{\frac{1}{s}}\|f\|_{L^{2}(\mathbb{R}^{2})}^{-\frac{1}{s}}$. Then we have $$\begin{aligned} \tau \|f\|_{L^{2}(\mathbb{R}^{2})}&=&\sqrt{\frac{1}{2(s-1)}}\tau^{1-s}\left\|(-\Delta)^{\frac{s}{2}}f\right\|_{L^{2}(\mathbb{R}^{2})} \label{FV2}\\ &=& \left(\frac{1}{2(s-1)}\right)^{\frac{1}{2s}}\|(-\Delta)^{\frac{s}{2}}f\|^{\frac{1}{s}}_{L^{2}(\mathbb{R}^{2})} \left\|f\right\|^{1-\frac{1}{s}}_{L^{2}(\mathbb{R}^{2})} . \notag\end{aligned}$$ By (\[FV1\]), (\[FV2\]) and Lemma \[lemma 4.2\], we have our desired result. \[Remark 6.1\] By Lemma \[Lemma 5.10\] and $|J_{V}|^{s}(t)=M(-t)(-t^{2}\Delta_{V})^{\frac{s}{2}}M(t)$, we have $$\begin{aligned} \left\|f(t,\cdot)\right\|_{L^{\infty}(\mathbb{R}^{2})} & \lesssim& \|(-\Delta_{V})^{\frac{s}{2}}M(t)f(t, \cdot)\|^{\frac{1}{s}}_{L^{2}(\mathbb{R}^{2})} \left\|M(t)f(t,\cdot)\right\|^{1-\frac{1}{s}}_{L^{2}(\mathbb{R}^{2})} \label{16.1} \\ & \lesssim& t^{-1} \||J_{V}|^{s}(t)f(t, \cdot)\|^{\frac{1}{s}}_{L^{2}(\mathbb{R}^{2})} \left\|f(t,\cdot)\right\|^{1-\frac{1}{s}}_{L^{2}(\mathbb{R}^{2})} \notag\end{aligned}$$ for $1<s<2$. Proof of Theorem \[main theorem\] {#Proof} ================================= We define the function space $X_{T}$ as follows: $$\begin{aligned} X_{T}=\left\{f\in C\left([1,T];\mathcal{S}^{\prime}\right); |||f|||_{X_{T}} =\left\||J_{V}|^{\alpha}f\right\|_{L^{\infty}\left([1,T];L^{2}(\mathbb{R}^{2})\right)} +\sup_{t\in[1,T]}\|f\|_{L^{2}(\mathbb{R}^{2})}<\infty\notag \right\},\end{aligned}$$ where $T>1$ and $1<\alpha<2$. Since the local existence of solutions to the equation (\[NLS\]) can be obtained by the standard contraction mapping principle, we omit the proof.
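The choice of $\tau$ in the proof of Lemma \[Lemma 5.10\] simply equalizes the two terms on the right-hand side of (\[FV1\]); equalizing is within a factor $2$ of the true minimum of their sum. The following numerical sketch checks this (the values of $A$, $B$, $s$ are arbitrary choices of ours, standing in for $\|f\|_{L^{2}}$, $\sqrt{1/(2(s-1))}\|(-\Delta)^{\frac{s}{2}}f\|_{L^{2}}$).

```python
# minimize g(tau) = A*tau + B*tau^(1-s) over tau > 0 by equalizing the
# two terms, as in the proof of Lemma [Lemma 5.10]; A, B, s are arbitrary
A, B, s = 0.7, 3.2, 1.6

tau_star = (B / A) ** (1.0 / s)  # A*tau = B*tau^(1-s)  <=>  tau = (B/A)^{1/s}
balanced = 2.0 * A ** (1.0 - 1.0 / s) * B ** (1.0 / s)

# the equalizing choice indeed balances the two terms ...
assert abs(A * tau_star - B * tau_star ** (1.0 - s)) < 1e-9

# ... and is within a factor 2 of the minimum of g over a fine grid
grid_min = min(A * tau + B * tau ** (1.0 - s)
               for tau in (0.01 * k for k in range(1, 100000)))
assert grid_min <= balanced + 1e-9 <= 2.0 * grid_min
```

This yields the $\|(-\Delta)^{\frac{s}{2}}f\|^{1/s}\|f\|^{1-1/s}$ shape of the bound, with the exponents $1/s$ and $1-1/s$ coming from solving the balance equation for $\tau$.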
Applying $|J_{V}|^{\alpha}$ to both sides of the equation (\[NLS\]) and using Proposition \[proposition 1.1\], we have $$\begin{aligned} & & \hspace {1cm}\left(i \partial_t + \frac{1}{2}\Delta_V\right)|J_{V}|^{\alpha}u \label{1.15} \\ && \hspace {0.5cm} = it^{\alpha-1}M(-t)A(\alpha)M(t)u+\lambda|J_{V}|^{\alpha}\left( |u|^{p-1}u\right).\notag \end{aligned}$$ Let $ |J_{V}|^{\alpha}u=u_{\alpha}^{\uppercase\expandafter{\romannumeral 1}}+u_{\alpha}^{\uppercase\expandafter{\romannumeral 2}}$. We consider $$\begin{aligned} \left\{ \begin{array}{l} \left(i \partial_t + \frac{1}{2}\Delta_{V}\right) u_{\alpha}^{\uppercase\expandafter{\romannumeral 1}} = \lambda |J_{V}|^{\alpha}\left( |u|^{p-1}u\right), \\ u_{\alpha}^{\uppercase\expandafter{\romannumeral 1}}(1) = |J_{V}|^{\alpha}(1)u_{0}, \end{array} \right. \label{NLS1}\end{aligned}$$ and $$\begin{aligned} \left\{ \begin{array}{l} \left(i \partial_t + \frac{1}{2}\Delta_{V}\right) u_{\alpha}^{\uppercase\expandafter{\romannumeral 2}} = it^{\alpha-1}M(-t)A(\alpha)M(t)u, \\ u_{\alpha}^{\uppercase\expandafter{\romannumeral 2}}(1) = 0 \end{array} \right. \label{NLS2}\end{aligned}$$ for $p>2$ and $t\geq 1$, where $u=u(t,x)$ is a complex-valued unknown function, $x \in \mathbb{R}^{2}$, $\Delta_{V}=\Delta-V(x)$, $|J_{V}|^{\alpha}(t)=M(-t)\left(-t^{2}\Delta_{V}\right)^{\frac{\alpha}{2}}M(t)$, $M(t)=e^{-\frac{i}{2t}|x|^{2}}$ and $A(\alpha)=\alpha\left(-\Delta_{V}\right)^{\frac{\alpha}{2}}+\left[x\cdot\nabla,\left(-\Delta_{V}\right)^{\frac{\alpha}{2}}\right]$. First we consider the integral equation $$\begin{aligned} u_{\alpha}^{\uppercase\expandafter{\romannumeral 1}}=e^{\frac{i}{2}t\Delta_{V}}e^{-\frac{i}{2}\Delta_{V}}|J_{V}|^{\alpha}(1)u_{0} -i\lambda\int_{1}^{t}e^{\frac{i}{2}(t-\tau)\Delta_{V}}|J_{V}|^{\alpha}\left(|u|^{p-1}u\right)(\tau)d\tau \label{NLS3}\end{aligned}$$ associated with (\[NLS1\]).
For simplicity, we let $|J_{V}|^{\alpha}\left(|u|^{p-1}u\right)=F_{\alpha}.$ Then from (\[NLS3\]) we have $$\begin{aligned} u_{\alpha}^{\uppercase\expandafter{\romannumeral 1}}=e^{\frac{i}{2}t\Delta_{V}}e^{-\frac{i}{2}\Delta_{V}}|J_{V}|^{\alpha}(1)u_0-i\lambda\int_{1}^{t}e^{\frac{i}{2}(t-\tau)\Delta_{V}}F_{\alpha}(\tau)d\tau. \label{NLS4}\end{aligned}$$ We also have $$\begin{aligned} u_{\alpha}^{\uppercase\expandafter{\romannumeral 2}}=\int_{1}^{t}e^{\frac{i}{2}(t-\tau)\Delta_{V}}\tau^{\alpha-1}M(-\tau)A(\alpha)M(\tau)u(\tau)d\tau \label{NLS8}\end{aligned}$$ from (\[NLS2\]).\ By Proposition \[proposition 2.2\] and Proposition \[theorem 2.2\], from (\[NLS4\]) we have $$\begin{aligned} \|u_{\alpha}^{\uppercase\expandafter{\romannumeral 1}}\|_{L^{\infty}\left([1,T]; L^{2}(\mathbb{R}^{2})\right)} \lesssim\||J_{V}|^{\alpha}(1)u_0\|_{L^{2}(\mathbb{R}^{2})}+\left\|F_{\alpha}\right\|_{L^{1}\left([1,T]; L^{2}(\mathbb{R}^{2})\right)}, \label{4.7}\end{aligned}$$ where $F_{\alpha}=|J_{V}|^{\alpha}\left(|u|^{p-1}u\right)$, $|J_{V}|^{\alpha}(t)=M(-t)\left(-t^{2}\Delta_{V}\right)^{\frac{\alpha}{2}}M(t)$, and $M(t)=e^{-\frac{i}{2t}|x|^{2}}$.\ By (\[n.m0\]) in Lemma \[Lemma Jv\], and Lemma 3.4 in [@GOV], we obtain $$\begin{aligned} && \quad \left\|F_{\alpha}\right\|_{ L^{2}(\mathbb{R}^{2})} \label{7.10} \\ && =\left\||J_{V}|^{\alpha}\left(|u|^{p-1}u\right)\right\|_{ L^{2}(\mathbb{R}^{2})} \notag \\ && \lesssim \left(\left\| |J|^{\alpha}(|u|^{p-1}u)\right\|_{ L^{2}(\mathbb{R}^{2})} +t^{\alpha-\sigma}\left\| |J|^{\alpha}(|u|^{p-1}u)\right\|_{ L^{2}(\mathbb{R}^{2})}^{\frac{\sigma }{\alpha}} \left\| |u|^{p-1}u \right\|_{ L^{2}(\mathbb{R}^{2})}^{1-\frac{\sigma }{\alpha}} \right) \notag \\ && \lesssim \|u\|^{p-1}_{ L^{\infty}(\mathbb{R}^{2})} \left(\left\| |J|^{\alpha}u\right\|_{ L^{2}(\mathbb{R}^{2})} +t^{\alpha-\sigma} \left\||J|^{\alpha}u\right\|_{ L^{2}(\mathbb{R}^{2})}^{\frac{\sigma }{\alpha}} \left\| u\right\|_{ L^{2}(\mathbb{R}^{2})}^{1-\frac{\sigma }{\alpha}}\right) 
\notag\end{aligned}$$ for $p> 2$, $1<\alpha<2$ and $0<\sigma <1.$ By Lemma \[lemma 4.2\], from (\[7.10\]) we obtain $$\begin{aligned} && \quad \left\|F_{\alpha}\right\|_{ L^{2}(\mathbb{R}^{2})} \label{7.2} \\ && \lesssim \|u\|^{p-1}_{ L^{\infty}(\mathbb{R}^{2})} \left(\left\| |J_{V}|^{\alpha}u\right\|_{ L^{2}(\mathbb{R}^{2})} +t^{\alpha-\sigma}\left\||J_V|^{\alpha}u\right\|_{ L^{2}(\mathbb{R}^{2})}^{\frac{\sigma }{\alpha}} \left\|u\right\|_{ L^{2}(\mathbb{R}^{2})}^{1-\frac{\sigma }{\alpha}} \right) \notag\end{aligned}$$ for $p> 2$, $1<\alpha<2$ and $0<\sigma < 1.$ By (\[16.1\]) in Remark \[Remark 6.1\], from (\[7.2\]) we get $$\begin{aligned} && \quad \left\|F_{\alpha}\right\|_{ L^{2}(\mathbb{R}^{2})} \label{7.3} \\ && \lesssim\left( t^{-1}\||J_{V}|^{\alpha}u\|^{\frac{1}{\alpha}}_{L^{2}(\mathbb{R}^{2})} \left\|u\right\|^{1-\frac{1}{\alpha}}_{L^{2}(\mathbb{R}^{2})} \right)^{p-1} \notag \\ && \quad \times \left(\left\| |J_{V}|^{\alpha}u\right\|_{ L^{2}(\mathbb{R}^{2})} +t^{\alpha-\sigma} \left\||J_V|^{\alpha}u\right\|_{ L^{2}(\mathbb{R}^{2})}^{ \frac{\sigma }{\alpha}} \left\|u\right\|_{ L^{2}(\mathbb{R}^{2})}^{1-\frac{\sigma }{\alpha}} \right). 
\notag\end{aligned}$$ Then we obtain $$\begin{aligned} && \label{i1} \left\|F_{\alpha}\right\|_{L^{1}\left([1,T]; L^{2}(\mathbb{R}^{2})\right)} \\ &&\lesssim \|u_0\|_{L^{2}(\mathbb{R}^{2})}^{\left(1-\frac{1}{\alpha}\right)(p-1)} \||J_{V}|^{\alpha}u\|_{L^{\infty}\left([1,T]; L^{2}(\mathbb{R}^{2})\right)}^{\frac{p-1}{\alpha}+1} \notag \\ && \quad + \|u_0\|_{L^{2}(\mathbb{R}^{2})}^{(p-1)\left(1-\frac{1}{\alpha}\right)+1-\frac{\sigma }{\alpha}}\left\|t^{-(p-1)+\alpha-\sigma}\||J_{V}|^{\alpha}u\|_{L^{2}(\mathbb{R}^{2})}^{\frac{p-1+\sigma}{\alpha}} \right\|_{L^{1}\left([1,T]\right)} \notag\\ && \lesssim \|u_0\|_{H^{\alpha}(\mathbb{R}^{2})}^{\left(1-\frac{1}{\alpha}\right)(p-1)} \||J_{V}|^{\alpha}u\|_{L^{\infty}\left([1,T]; L^{2}(\mathbb{R}^{2})\right)}^{\frac{p-1}{\alpha}+1}\notag \\ && \quad + \|u_0\|_{H^{\alpha}(\mathbb{R}^{2})}^{(p-1)\left(1-\frac{1}{\alpha}\right)+1-\frac{\sigma }{\alpha}} \|t^{-p+1+\alpha-\sigma}\|_{L^{1}\left([1,T]\right)} \left\||J_{V}|^{\alpha}u \right\|_{L^{\infty}\left([1,T];L^{2}(\mathbb{R}^{2})\right)}^{\frac{p-1+\sigma}{\alpha}}, \notag\end{aligned}$$ since we can choose $\alpha$ and $\sigma$ such that $-p+2+\alpha-\sigma<0$ for $p>2$, where $1<\alpha < \frac{3}{2}$ and $\frac{2}{3}<\sigma<1.$ By (\[n.m0\]) in Lemma \[Lemma Jv\] and (\[i1\]), we have $$\begin{aligned} && \quad \|u_{\alpha}^{\uppercase\expandafter{\romannumeral 1}}\|_{L^{\infty}\left([1,T]; L^{2}(\mathbb{R}^{2})\right)} \label{7.11} \\ && \lesssim \left\||J|^{\alpha}(1)u_0\right\|_{L^{2}(\mathbb{R}^{2})} +\left\||J|^{\alpha}(1)u_0\right\|_{L^{2}(\mathbb{R}^{2})}^{\frac{\sigma}{\alpha}}\|u_0\|_{L^{2}(\mathbb{R}^{2})}^{1-\frac{\sigma}{\alpha}} \notag \\ &&\quad +\|u_0\|_{H^{\alpha}(\mathbb{R}^{2})}^{\left(1-\frac{1}{\alpha}\right)(p-1)} \||J_{V}|^{\alpha}u\|_{L^{\infty}\left([1,T]; L^{2}(\mathbb{R}^{2})\right)}^{\frac{p-1}{\alpha}+1}\notag \\ &&
\quad + \|u_0\|_{H^{\alpha}(\mathbb{R}^{2})}^{(p-1)\left(1-\frac{1}{\alpha}\right)+1-\frac{\sigma }{\alpha}} \left\||J_{V}|^{\alpha}u \right\|_{L^{\infty}\left([1,T];L^{2}(\mathbb{R}^{2})\right)}^{\frac{p-1+\sigma}{\alpha}} \notag\end{aligned}$$ for $p>2$, where $1<\alpha < \frac{3}{2}$ and $\frac{2}{3}<\sigma<1$. By Proposition \[theorem 2.2\], we have from (\[NLS8\]) $$\begin{aligned} \qquad \left\|u_{\alpha}^{\uppercase\expandafter{\romannumeral 2}}\right\|_{L^{\infty}\left([1,T];L^{2}(\mathbb{R}^{2})\right)}\lesssim \left\|t^{\alpha-1}M(-t)A(\alpha)M(t)u\right\|_{L^{p_{1}^{\prime}}\left([1,T];L^{q_{1}^{\prime}}(\mathbb{R}^{2})\right)} \label{17.12}\end{aligned}$$ for $1<\alpha<2$, where $M(t)=e^{-\frac{i}{2t}|x|^{2}},$ $A(\alpha)=\alpha\left(-\Delta_{V}\right)^{\frac{\alpha}{2}}+\left[x\cdot\nabla,\left(-\Delta_{V}\right)^{\frac{\alpha}{2}}\right]$, $\frac{1}{p_{1}}+\frac{1}{q_{1}}=\frac{1}{2}$, $\frac{1}{p_{1}}+\frac{1}{p_{1}^{\prime}}=1,$ $\frac{1}{q_{1}}+\frac{1}{q_{1}^{\prime}}=1,$ and $1< q_1^{\prime} < 2.$ Let $1<\alpha<\frac{3}{2}$. 
By Lemma \[Lemma A(s)\] and the Sobolev inequality $$\begin{aligned} \|u\|_{L^{q_1}(\mathbb{R}^{2})}\lesssim \|(-\Delta)^{\frac{\alpha}{2}}u\|_{L^{2}(\mathbb{R}^{2})}^{\theta} \|u\|_{L^{2}(\mathbb{R}^{2})}^{1-\theta} \notag\end{aligned}$$ for $0 < \theta < \frac{1}{\alpha},$ where $q_1=\frac{2}{1-\theta \alpha}$, from (\[17.12\]) we have $$\begin{aligned} && \quad \left\|u_{\alpha}^{\uppercase\expandafter{\romannumeral 2}}\right\|_{L^{\infty}\left([1,T];L^{2}(\mathbb{R}^{2})\right )} \label{17.13} \\ &\lesssim &\left\|t^{\alpha-1}M(t)u\right\|_{L^{p_{1}^{\prime}}\left([1,T];L^{q_{1}}(\mathbb{R}^{2})\right)} \notag \\ &\lesssim &\left\|t^{\alpha-1} \|(-\Delta)^{\frac{\alpha}{2}}M(t)u\|_{L^{2}(\mathbb{R}^{2})}^{\theta} \|M(t)u\|_{L^{2}(\mathbb{R}^{2})}^{1-\theta} \right\|_{L^{p_{1}^{\prime}}([1,T])} \notag \\ &\lesssim &\|u_0\|_{L^{2}(\mathbb{R}^{2})}^{1-\theta} \left\|t^{\alpha-1} \|(-\Delta)^{\frac{\alpha}{2}}M(t)u\|_{L^{2}(\mathbb{R}^{2})}^{\theta} \right\|_{L^{p_{1}^{\prime}}([1,T])} \notag \\ &\lesssim &\|u_0\|_{H^{\alpha}(\mathbb{R}^{2})}^{1-\theta} \left\|t^{\alpha-1-\alpha\theta} \| |J|^{\alpha}u\|_{L^{2}(\mathbb{R}^{2})}^{\theta} \right\|_{L^{p_{1}^{\prime}}([1,T])} \notag \\ &\lesssim &\|u_0\|_{H^{\alpha}(\mathbb{R}^{2})}^{1-\theta} \|t^{\alpha-1-\alpha\theta} \|_{L^{p_{1}^{\prime}}([1,T])} \left\| |J|^{\alpha}u \right\|_{L^{\infty}([1,T]; L^{2}(\mathbb{R}^{2}))}^{\theta} \notag\end{aligned}$$ for $0< \theta < \frac{1}{\alpha}$, where $\frac{1}{p_{1}}+\frac{1}{p_{1}^{\prime}}=1, \frac{1}{q_{1}}+\frac{1}{q_{1}^{\prime}}=1, \frac{1}{p_{1}}+\frac{1}{q_{1}}=\frac{1}{2},$ $q_1=\frac{2}{1-\theta \alpha}$ and $1< q_{1}^{\prime} < 2$. For $1<\alpha<\frac{3}{2},$ we choose $\theta \in \left(\frac{2}{3}, \frac{1}{\alpha}\right)$. 
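For the reader's convenience, the exponent bookkeeping behind this choice can be made explicit: from $q_{1}=\frac{2}{1-\theta \alpha}$ and the admissibility condition $\frac{1}{p_{1}}+\frac{1}{q_{1}}=\frac{1}{2}$ we obtain $$\begin{aligned} \frac{1}{p_{1}}=\frac{1}{2}-\frac{1-\theta\alpha}{2}=\frac{\theta\alpha}{2}, \qquad p_{1}^{\prime}=\frac{2}{2-\theta\alpha}, \qquad q_{1}^{\prime}=\frac{2}{1+\theta\alpha}, \notag\end{aligned}$$ so that the requirement $1<q_{1}^{\prime}<2$ holds automatically for $0<\theta\alpha<1$.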
Then we have $p_1^{\prime}=\frac{2}{2-\alpha \theta}.$ Since $\left(\alpha-1-\alpha \theta\right)p_{1}^{\prime}+1=-\frac{\alpha (3\theta-2)}{2-\alpha \theta}<0$ for $1<\alpha<\frac{3}{2}$, where $\frac{2}{3}<\theta <\frac{1}{\alpha},$ we have $$\begin{aligned} \|t^{\alpha-1-\alpha \theta} \|_{L^{p_{1}^{\prime}}([1,T])} \leq C. \label{i2}\end{aligned}$$ By (\[17.13\]), (\[i2\]) and Lemma \[lemma 4.2\], we have $$\begin{aligned} && \quad \left\|u_{\alpha}^{\uppercase\expandafter{\romannumeral 2}}\right\|_{L^{\infty}\left([1,T];L^{2}(\mathbb{R}^{2})\right )} \label{7.14} \\ &\lesssim&\|u_0\|_{H^{\alpha}(\mathbb{R}^{2})}^{\frac{1}{4}} \left\| |J_{V}|^{\alpha}u \right\|_{L^{\infty}([1,T];L^{2}(\mathbb{R}^{2}))}^{\frac{3}{4}} \notag\end{aligned}$$ for $1<\alpha < \frac{3}{2}.$ Using (\[7.11\]) and (\[7.14\]), we have $$\begin{aligned} \left\||J_{V}|^{\alpha}u\right\|_{L^{\infty}\left([1,T];L^{2}(\mathbb{R}^{2})\right )} &\leq& \left\|u_{\alpha}^{\uppercase\expandafter{\romannumeral 1}}\right\|_{L^{\infty}\left([1,T];L^{2}(\mathbb{R}^{2})\right )} + \left\|u_{\alpha}^{\uppercase\expandafter{\romannumeral 2}}\right\|_{L^{\infty}\left([1,T];L^{2}(\mathbb{R}^{2})\right)} \notag\\ & \lesssim &\left\|u_{0}\right\|_{H^{\alpha}(\mathbb{R}^2)\cap \dot{H}^{0,\alpha}(\mathbb{R}^2)} \notag \\ &&\quad +\|u_{0}\|_{H^{\alpha}(\mathbb{R}^2)\cap \dot{H}^{0,\alpha}(\mathbb{R}^2)}^{\left(1-\frac{1}{\alpha}\right)(p-1)} \||J_{V}|^{\alpha}u\|_{L^{\infty}\left([1,T]; L^{2}(\mathbb{R}^{2})\right)}^{\frac{p-1}{\alpha}+1}\notag \\ && \quad + \|u_{0}\|_{H^{\alpha}(\mathbb{R}^2)\cap \dot{H}^{0,\alpha}(\mathbb{R}^2)}^{(p-1)\left(1-\frac{1}{\alpha}\right)+1-\frac{\sigma }{\alpha}} \left\||J_{V}|^{\alpha}u \right\|_{L^{\infty}\left([1,T];L^{2}(\mathbb{R}^{2})\right)}^{\frac{p-1+\sigma}{\alpha}}\notag\\ &&\quad +\|u_{0}\|_{H^{\alpha}(\mathbb{R}^2)\cap \dot{H}^{0,\alpha}(\mathbb{R}^2)}^{\frac{1}{4}} \left\| |J_{V}|^{\alpha}u 
\right\|_{L^{\infty}([1,T];L^{2}(\mathbb{R}^{2}))}^{\frac{3}{4}}\notag\end{aligned}$$ for $p>2$, where $1<\alpha < \frac{3}{2}$ and $\frac{2}{3}<\sigma<1$. Then there exists a constant $C>0$ such that $$\begin{aligned} \left\||J_{V}|^{\alpha}u\right\|_{L^{\infty}\left([1,T];L^{2}(\mathbb{R}^{2})\right )} \leq C\|u_{0}\|_{H^{\alpha}(\mathbb{R}^2)\cap \dot{H}^{0,\alpha}(\mathbb{R}^2)},\end{aligned}$$ if $\|u_{0}\|_{H^{\alpha}(\mathbb{R}^2)\cap \dot{H}^{0,\alpha}(\mathbb{R}^2)}$ is small enough. By a standard continuity argument and Remark \[Remark 6.1\], we have the time decay estimate (\[timedecay\]) if $\epsilon_{0}$ is small enough. From (\[NLS\]), we have $$\begin{aligned} u(t)=e^{\frac{i}{2}t\Delta_V}\left(e^{-\frac{i}{2}\Delta_V}u_0-i\lambda\int_1^t e^{-\frac{i}{2}\tau\Delta_V}(|u|^{p-1}u)(\tau)d\tau\right).\end{aligned}$$ Let $u_+=e^{-\frac{i}{2}\Delta_{V}}u_0-i\lambda \int_{1}^{\infty}e^{-\frac{i}{2}\tau\Delta_{V}}(|u|^{p-1}u)(\tau)d\tau$. Then we have $$\begin{aligned} u(t)=e^{\frac{i}{2}t\Delta_V}u_+ +i\lambda\int_t^\infty e^{\frac{i}{2}(t-\tau)\Delta_V}(|u|^{p-1}u)(\tau)d\tau.\end{aligned}$$ We obtain the scattering (\[scattering\]) by a standard argument from the time decay estimate (\[timedecay\]); we omit the details here. Appendix I {#Appendix} ========== Let $M(t)=e^{-\frac{i}{2t}|x|^{2}}$ and $[B, D]=BD-DB$. To prove Proposition \[proposition 1.1\] and Proposition \[proposition 1.01\], we consider the following lemmas (see [@CGV2014]). \[lemma 1.1\] We have the following identities: $$[i\partial_{t},M(-t)]=\frac{|x|^{2}}{2t^{2}}M(-t),\label{1.2}$$ and $$[i\partial_{t},M(t)]=-\frac{|x|^{2}}{2t^{2}}M(t).\label{1.3}$$ Since $$\begin{aligned} &&\qquad [i\partial_{t},M(-t)]f \notag \\ & &=i\partial_{t}(M(-t)f)-M(-t)i\partial_{t}f\notag\\ & &= \frac{|x|^{2}}{2t^{2}}M(-t)f, \notag\end{aligned}$$ we obtain the first identity (\[1.2\]).\ By the same method, we obtain the second identity (\[1.3\]).
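The time derivative used in the proof of Lemma \[lemma 1.1\] is elementary: since $M(-t)=e^{\frac{i}{2t}|x|^{2}}$, a direct computation gives $$\begin{aligned} i\left(\partial_{t}M(-t)\right)=i\left(-\frac{i|x|^{2}}{2t^{2}}\right)e^{\frac{i}{2t}|x|^{2}}=\frac{|x|^{2}}{2t^{2}}M(-t), \notag\end{aligned}$$ which is exactly the multiplication operator appearing in (\[1.2\]).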
\[lemma 1.2\] We have $$[ \Delta,M(-t)]=M(-t)\left(\frac{in}{t}-\frac{|x|^{2}}{t^{2}}+2\frac{ix\cdot\nabla}{t}\right),\label{1.4}$$ and $$[ \Delta,M(t)]=M(t)\left(-\frac{in}{t}-\frac{|x|^{2}}{t^{2}}-2\frac{ix\cdot\nabla}{t}\right),\label{1.5}$$ where $n$ denotes the space dimension. By some calculations, we have $$\begin{aligned} &&\qquad [\Delta,M(-t)]f \notag \\ & &=\Delta(M(-t)f)-M(-t)\Delta f \notag\\ & &= M(-t) \Delta f +\Delta(M(-t))f+2\nabla M(-t)\cdot \nabla f -M(-t)\Delta f \notag \\ & &= \Delta(M(-t))f+2\nabla M(-t)\cdot \nabla f \notag \\ & &=M(-t)\left(\frac{in}{t}-\frac{|x|^{2}}{t^{2}}+2\frac{ix\cdot\nabla}{t}\right)f. \notag\end{aligned}$$ Taking complex conjugates, we get the second identity (\[1.5\]). We have the following commutator relations. \[lemma 1.3\] $$\left[ i\partial_{t}+\frac{1}{2}\Delta,M(-t)\right]=\frac{1}{2}M(-t)\left(\frac{in}{t}+2\frac{ix \cdot \nabla}{t}\right),\label{1.6}$$ and $$\left[ i\partial_{t}+\frac{1}{2}\Delta,M(t)\right]=M(t)\left(-\frac{in}{2t}-\frac{|x|^{2}}{t^{2}}-\frac{ix\cdot\nabla}{t}\right),\label{1.7}$$ where $n$ denotes the space dimension. By Lemmas \[lemma 1.1\] and \[lemma 1.2\], we have $$\begin{aligned} &&\qquad \left [ i\partial_{t}+\frac{1}{2}\Delta,M(-t)\right]f \notag \\ & &= [ i\partial_{t},M(-t)]f + \frac{1}{2}[\Delta,M(-t)]f \notag\\ & &= \frac{|x|^{2}}{2t^{2}}M(-t)f + \frac{1}{2}M(-t)\left(\frac{in}{t}-\frac{|x|^{2}}{t^{2}}+2\frac{ix \cdot \nabla}{t}\right)f \notag \\ & &= \frac{1}{2}M(-t)\left(\frac{in}{t}+2\frac{ix \cdot \nabla}{t}\right) f. \notag\end{aligned}$$ By Lemmas \[lemma 1.1\] and \[lemma 1.2\], we also get the commutator relation (\[1.7\]). \[lemma 1.4\] Let $\Delta_{V}=\Delta-V(x)$. For $s \geq 0$, we have $$\left[ i\partial_{t}+\frac{1}{2}\Delta_{V},\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\right]=\frac{is}{t}\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}.
\label{1.8}$$ By the commutator relation $\left[ \left(-\Delta_{V}\right)^{\frac{s}{2}}, \Delta_{V}\right]=0,$ we have $$\begin{aligned} && \qquad \left[ i\partial_{t}+\frac{1}{2}\Delta_{V},\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\right]f \label{1.9} \\ & &= \left[ i\partial_{t},\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\right]f + \frac{1}{2} \left[ \Delta_{V},\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\right]f \notag\\ & &= \left[ i\partial_{t},\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\right]f .\notag\end{aligned}$$ By some simple calculations, we have $$\begin{aligned} \label{1.10} \left[ i\partial_{t},\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\right]f &=& i\partial_{t}\left[\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}f \right]-i\left[\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\right]\partial_{t}f \\ &=&\frac{ is}{t}\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}f. \notag\end{aligned}$$ Combining (\[1.9\]) and (\[1.10\]), we have our desired result. Proof of Proposition \[proposition 1.1\] ---------------------------------------- Since $[B,DE]=[B,D]E+D[B,E]$, then we have $$\begin{aligned} && \label{1.12 }\hspace{1cm} \left[ i\partial_{t}+\frac{1}{2}\Delta_{V}, |J_{V}|^{s}(t)\right]f \\ && \hspace{0.5cm} =\left[ i\partial_{t}+\frac{1}{2}\Delta_{V},M(-t)\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t)\right] f \notag \\ & &\hspace{0.5cm} = \left[ i\partial_{t}+\frac{1}{2}\Delta,M(-t)\right]\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t) f \notag \\ & & \qquad\qquad + M(-t)\left[ i\partial_{t}+\frac{1}{2}\Delta_{V},\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t)\right] f. 
\notag\end{aligned}$$ By Lemmas \[lemma 1.3\], \[lemma 1.4\] and $[B, DE]=[B, D]E+D[B, E]$, we have $$\begin{aligned} && \hspace {1cm}\left[ i\partial_{t}+\frac{1}{2}\Delta,M(-t)\right]\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t) f \label{1.13} \\ && \qquad \qquad + M(-t)\left[ i\partial_{t}+\frac{1}{2}\Delta_{V},\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t)\right] f \notag \\ & & \hspace {0.5cm}= \frac{i}{t}\left|J_{V}\right|^{s}(t)f+\frac{i}{t}M(-t)x\cdot\nabla \left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t) f\notag\\ && \qquad \qquad +M(-t)\left[ i\partial_{t}+\frac{1}{2}\Delta_{V},\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\right] M(t)f \notag \\ & &\qquad \qquad + M(-t)\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\left[ i\partial_{t}+\frac{1}{2}\Delta_{V},M(t)\right] f \notag \\ & & \hspace {0.5cm}= \frac{i}{t}\left|J_{V}\right|^{s}(t)f+\frac{i}{t}M(-t)x\cdot\nabla \left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t) f \notag\\ && \qquad \qquad+M(-t)\frac{is}{t}\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t)f \notag \\ & &\qquad \qquad + M(-t)\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t)\left(-\frac{i}{t}-\frac{|x|^{2}}{t^{2}}-i\frac{x\cdot\nabla}{t}\right) f \notag \\ & & \hspace {0.5cm}= \frac{is}{t}\left|J_{V}\right|^{s}(t)f+\frac{i}{t} M(-t) \left[ x\cdot\nabla, \left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t)\right]f \notag\\ && \qquad \qquad - M(-t)\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\frac{|x|^{2}}{t^{2}}M(t)f. 
\notag\end{aligned}$$ Using $[B, DE]=[B, D]E+D[B, E]$ and $\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}[x\cdot\nabla, M(t)]f=\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}x\cdot\left(\nabla M(t)\right)f$, we have $$\begin{aligned} & & \hspace {1cm}\frac{is}{t}\left|J_{V}\right|^{s}(t)f+\frac{i}{t} M(-t) \left[ x\cdot\nabla, \left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}M(t)\right]f \label{1.14} \\ && \qquad \qquad - M(-t)\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\frac{|x|^{2}}{t^{2}}M(t)f \notag\\ && \hspace {0.5cm}=it^{s-1}M(-t)[s(-\Delta_{V})^{\frac{s}{2}}]M(t)f+\frac{i}{t}M(-t) [x\cdot \nabla, (-t^{2}\Delta_{V})^{\frac{s}{2}}]M(t)f\notag \\ && \qquad \qquad +\frac{i}{t} M(-t)(-t^{2}\Delta_{V})^{\frac{s}{2}}[x\cdot \nabla, M(t)]f - M(-t)\left(-t^{2}\Delta_{V}\right)^{\frac{s}{2}}\frac{|x|^{2}}{t^{2}}M(t)f \notag\\ & & \hspace {0.5cm}=it^{s-1}M(-t) A(s) M(t)f, \notag\end{aligned}$$ where $A(s)=s\left(-\Delta_{V}\right)^{\frac{s}{2}}+\left[x\cdot\nabla,\left(-\Delta_{V}\right)^{\frac{s}{2}}\right]$.\ Combining (\[1.12 \]), (\[1.13\]) and (\[1.14\]), we complete the proof of (\[1.11\]). Proof of Proposition \[proposition 1.01\] ----------------------------------------- Let $S=x\cdot \nabla$. By the formula $$\begin{aligned} \left(-\Delta_{V}\right)^{\frac{s}{2}}f&=&c(s)(-\Delta_{V})\int_{0}^{\infty}\tau^{\frac{s}{2}-1} \left(\tau-\Delta_{V}\right)^{-1}fd\tau \notag\end{aligned}$$ for $0<s<2$, where $c(s)^{-1}=\int_{0}^{\infty} \tau^{\frac{s}{2}-1}(\tau+1)^{-1}d\tau$, we get $$\begin{aligned} A(s)=s\left(-\Delta_{V}\right)^{\frac{s}{2}}+c(s)\int_{0}^{\infty} \tau^{\frac{s}{2}-1}[S,-\Delta_{V}(\tau-\Delta_{V})^{-1}]d\tau. 
\label{2.16}\end{aligned}$$ Using $[B, DE]=[B, D]E+D[B, E],$ we have $$\begin{aligned} &&[S,-\Delta_{V}(\tau-\Delta_{V})^{-1}] \label{2.17}\\ &=&[S,-\Delta_{V}](\tau-\Delta_{V})^{-1}-\Delta_{V}[S,(\tau-\Delta_{V})^{-1}] \notag \\ &=&[S,-\Delta_{V}](\tau-\Delta_{V})^{-1}\notag\\ && \quad+\Delta_{V}(\tau-\Delta_{V})^{-1}[S, -\Delta_{V}](\tau-\Delta_{V})^{-1}. \notag\end{aligned}$$ Since $\Delta_{V}=\Delta-V(x),$ we obtain $$\begin{aligned} &&[S,-\Delta_{V}] \label{2.18} \\ &=&[S,-\Delta]+[S,V] \notag \\ &=&-(x\cdot \nabla)\Delta+\Delta(x\cdot \nabla)+x\cdot \nabla V-V x\cdot \nabla \notag\\ &=&2\Delta+SV \notag \\ &=&2\Delta_{V}+W, \notag\end{aligned}$$ where $W=(S+2)V$. By (\[2.17\]) and (\[2.18\]), we have $$\begin{aligned} &&[S,-\Delta_{V}(\tau-\Delta_{V})^{-1}] \label{2.19}\\ &=&2\Delta_{V}(\tau-\Delta_{V})^{-1}+W(\tau-\Delta_{V})^{-1}\notag \\ &&\quad+\Delta_{V}(\tau-\Delta_{V})^{-1}[S, -\Delta_{V}](\tau-\Delta_{V})^{-1} \notag\\ &=&2\Delta_{V}(\tau-\Delta_{V})^{-1}+W(\tau-\Delta_{V})^{-1}\notag \\ &&\quad+\Delta_{V}(\tau-\Delta_{V})^{-1}(2\Delta_{V}+W)(\tau-\Delta_{V})^{-1} \notag\\ &=&2\tau\Delta_{V}(\tau-\Delta_{V})^{-2}+\tau (\tau-\Delta_{V})^{-1}W(\tau-\Delta_{V})^{-1}. \notag\end{aligned}$$ By (\[2.16\]) and (\[2.19\]), we get $$\begin{aligned} A(s)&=&s\left(-\Delta_{V}\right)^{\frac{s}{2}}+2c(s)\int_{0}^{\infty} \tau^{\frac{s}{2}}\Delta_{V}(\tau-\Delta_{V})^{-2}d\tau \label{2.20} \\ &&+c(s)\int_{0}^{\infty} \tau^{\frac{s}{2}}(\tau-\Delta_{V})^{-1}W(\tau-\Delta_{V})^{-1}d\tau . \notag\end{aligned}$$ Since $$\begin{aligned} s\left(-\Delta_{V}\right)^{\frac{s}{2}}=-2c(s)\int_{0}^{\infty} \tau^{\frac{s}{2}}\Delta_{V}(\tau-\Delta_{V})^{-2}d\tau \notag\end{aligned}$$ by integrating by parts (note that $\partial_{\tau}(\tau-\Delta_{V})^{-1}=-(\tau-\Delta_{V})^{-2}$, and recall the definition of $c(s)$), we have our desired result from (\[2.20\]). Appendix II: zero is not a resonance {#resonance} ==================================== In this section we prove the absence of a resonance at the origin, i.e.
we shall show that the origin is not a resonance point. Recall that, by the definition of resonance used in [@JN01] and by Theorem 6.2 there, zero being a resonance point is characterized by the existence of a solution $$\Psi(x) = c_0 + \Psi_0(x), \ \ c_0 \in \mathbb{C},\ \ \Psi_0 \in L^q(\mathbb{R}^2) \ \text{for some } q \in (2,\infty)$$ to the equation $$\label{eq.A3.1} -\Delta \Psi + V \Psi = 0.$$ Using Lemma 6.4 and the relation (6.94) in [@JN01], assuming $\beta > 10$, we can further deduce $$\Psi_0(x) = O(\langle x \rangle^{-1} ),$$ $$\nabla \Psi_0(x) = O(\langle x \rangle^{-2} ).$$ Rewriting the equation in the form $$-\Delta \Psi_0 + V \Psi = 0,$$ multiplying by $\overline{\Psi}$ and integrating over $|x| \leq R,$ we get $$\int_{|x| \leq R} |\nabla \Psi_0(x)|^2 dx - \overline{c_0}\int_{|x|=R} \partial_r \Psi_0 (x) dS_x + \int_{|x| \leq R} V(x) | \Psi(x)|^2 dx =0,$$ up to the boundary term $\int_{|x|=R}\overline{\Psi_0}\,\partial_r \Psi_0\, dS_x=O(R^{-2})$, which vanishes as $R\to\infty$. The asymptotics of $\Psi_0$ and $\partial_r \Psi_0$ enable us to take the limit $R \to \infty$ and arrive at $$\int_{\mathbb{R}^2} |\nabla \Psi_0(x)|^2 dx + \int_{\mathbb{R}^2} V(x) | \Psi(x)|^2 dx =0,$$ and the assumption $V \geq 0$ implies $\Psi=0.$ **Acknowledgments.** V. Georgiev was supported in part by Project 2017 “Problemi stazionari e di evoluzione nelle equazioni di campo nonlineari” of INDAM, GNAMPA - Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni, by the Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, by the Top Global University Project, Waseda University, by the University of Pisa, Project PRA 2018 49, and by the project “Dinamica di equazioni nonlineari dispersive” of “Fondazione di Sardegna”, 2016. C. Li was partially supported by the Education Department of Jilin Province \[2018\] and NNSFC under Grant Number 11461074. [60]{} S. Agmon, *Spectral properties of Schrödinger operators and scattering theory*, Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, **2** (1975), no. 2, 151–218. S. Agmon, *Lower bounds for solutions of Schrödinger equations*, J.
Analyse Math., **23** (1970), 1–25. P. D’Ancona and L. Fanelli, *Strichartz and smoothing estimates for dispersive equations with magnetic potentials*, Communications in Partial Differential Equations, **33** (2008), 1082–1112. P. D’Ancona, L. Fanelli, L. Vega, and N. Visciglia, *Endpoint Strichartz estimates for the magnetic Schrödinger equation*, Journal of Functional Analysis, **258** (2010), 3227–3240. P. Auscher and A. McIntosh, *Heat kernels of second order complex elliptic operators and applications*, Journal of Functional Analysis, **152** (1998), 22–73. J. E. Barab, *Nonexistence of asymptotically free solutions for a nonlinear Schrödinger equation*, J. Math. Phys., **25** (1984), 3270–3273. K. Bogdan, J. Dziubański, and K. Szczypkowski, *Sharp Gaussian estimates for Schrödinger heat kernels: $L^{p}$ integrability conditions*, 2016, arXiv: 1511.07167v3. J.-M. Bouclet and H. Mizutani, *Uniform resolvent and Strichartz estimates for Schrödinger equations with critical singularities,* Transactions of the American Mathematical Society, **370** (2018), 7293–7333. N. Burq, F. Planchon, J. G. Stalker and A. S. Tahvildar-Zadeh, *Strichartz estimates for the wave and Schrödinger equations with the inverse-square potential*, Journal of Functional Analysis, **203** (2003), 519–549. N. Burq, F. Planchon, J. G. Stalker and A. S. Tahvildar-Zadeh, *Strichartz estimates for the wave and Schrödinger equations with potentials of critical decay*, Indiana Univ. Math. J., **53** (2004), no. 6, 1665–1680. S. Cuccagna, V. Georgiev and N. Visciglia, *Decay and scattering of small solutions of pure power NLS in $\mathbb{R}$ with $p>3$ and with a potential,* Communications on Pure and Applied Mathematics, **67** (2014), no. 2, 957–981. V. Georgiev and A. Ivanov, *Existence and mapping properties of wave operator for the Schrödinger equation with singular potential*, Proceedings of the American Mathematical Society, **133** (2005), 1993–2003. V. Georgiev and B.
Velichkov, *Decay estimates for the supercritical 3-D Schrödinger equation with rapidly decreasing potential*, Progress in Mathematics, **301** (2012), 145–162. V. Georgiev and N. Visciglia, *About resonances for Schrödinger operators with short range singular perturbation*, Topics in contemporary differential geometry, complex analysis and mathematical physics, World Sci. Publ., Hackensack, NJ, (2007), 74–84. J. Ginibre, T. Ozawa and G. Velo, *On the existence of wave operators for a class of nonlinear Schrödinger equations,* Ann. Inst. H. Poincaré Phys. Théor., **60** (1994), 211–239. N. Hayashi, C. Li, and P. I. Naumkin, *Nonlinear Schrödinger systems in 2d with nondecaying final data*, Journal of Differential Equations, **260** (2016), Issue 2, 1472–1495. N. Hayashi, C. Li, and P. I. Naumkin, *Critical nonlinear Schrödinger equations in higher space dimensions*, J. Math. Soc. Japan, **70** (2018), no. 4, 1475–1492. N. Hayashi and P. Naumkin, *Asymptotics for large time of solutions to the nonlinear Schrödinger and Hartree equations*, Amer. J. Math., **120** (1998), no. 2, 369–389. N. Hayashi and T. Ozawa, *Scattering theory in the weighted $L^{2}(\mathbb{R}^{n})$ spaces for some Schrödinger equations,* Ann. Inst. H. Poincaré Phys. Théor., **48** (1988), 17–37. A. Jensen and G. Nenciu, *A unified approach to resolvent expansions at thresholds,* Rev. Math. Phys., **13** (2001), no. 6, 717–754. G. Jin, Y. Jin and C. Li, *The initial value problem for nonlinear Schrödinger equations with a dissipative nonlinearity in one space dimension*, Journal of Evolution Equations, **16** (2016), no. 4, 983–995. M. Keel and T. Tao, *Endpoint Strichartz estimates,* Amer. J. Math., **120** (1998), 955–980. N. Kita and A. Shimomura, *Large time behavior of solutions to Schrödinger equations with a dissipative nonlinearity for arbitrarily large initial data,* J. Math. Soc. Japan, **61** (2009), no. 1, 39–64. C. Li and H.
Sunagawa, *On Schrödinger systems with cubic dissipative nonlinearities of derivative type,* Nonlinearity, **29** (2016), no. 5, 1537–1563. Z. Li and L. Zhao, *Decay and scattering of solutions to nonlinear Schrödinger equations with regular potentials for nonlinearities of sharp growth,* J. Math. Study, **50** (2017), 277–290. H. P. McKean and J. Shatah, *The nonlinear Schrödinger equation and the nonlinear heat equation reduction to linear form,* Comm. Pure Appl. Math., **44** (1991), no. 8-9, 1067–1080. T. Mizumachi, *Asymptotic stability of small solitons for 2D nonlinear Schrödinger equations with potential,* J. Math. Kyoto Univ. (JMKYAZ), **47** (2007), 599–620. K. Mochizuki, *Spectral and scattering theory for second-order partial differential operators,* Monographs and Research Notes in Mathematics. CRC Press, Boca Raton, FL, 2017. T. Ozawa, *Long range scattering for nonlinear Schrödinger equations in one space dimension,* Comm. Math. Phys., **139** (1991), no. 3, 479–493. Y. Sagawa, H. Sunagawa, and S. Yasuda, *A sharp lower bound for the lifespan of small solutions to the Schrödinger equation with a subcritical power nonlinearity,* Differential and Integral Equations, **31** (2018), no. 9-10, 685–700. W. Schlag, *Dispersive estimates for Schrödinger operators in dimension two,* Comm. Math. Phys., **257** (2005), no. 1, 87–117. B. Simon, *Schrödinger semigroups,* Bull. Amer. Math. Soc. (N.S.), **7** (1982), no. 3, 447–526. B. Simon, *Tosio Kato’s work on non-relativistic quantum mechanics: an outline*, (2017), arXiv: 1710.06999v1. A. Stefanov, *Strichartz estimates for the magnetic Schrödinger equation,* Advances in Mathematics, **210** (2007), 246–303. W. Strauss, *Nonlinear scattering theory. Scattering theory in mathematical physics,* In: Lavita J.A., Marchand JP. (eds) Scattering Theory in Mathematical Physics, NATO Advanced Study Institutes Series (Series C–Mathematical and Physical Sciences), vol. 9, Springer, Dordrecht, 1974. W.
Strauss, *Nonlinear scattering theory at low energy: sequel,* J. Funct. Anal. , **43** (1981), no. 3, 281–293. K. Yajima, *$L^p$-boundedness of wave operators for two dimensional Schrödinger operators*, Commun. Math. Phys., **208** (1999), 125–152. Q. Zhang, *Global bounds of Schrödinger heat kernels with negative potentials*, Journal of Functional Analysis, **182** (2001), 344–370.
When my husband and I lived in Miami Beach just out of college, we would ride our bikes to this pizza place (the name is long forgotten) that had an amazing pizza topped with sauteed spinach, blue cheese, chopped fresh tomatoes and mozzarella. Nothing better than stopping there on the way home for a slice and a cold beer after spending the morning on the beach. When we moved (why did we do that, again?), one of the things we missed the most was this pizza, so we started making it ourselves from memory, tweaking it to suit our tastes and desire to get it in the oven as fast as possible. It's dead simple to make (I mean, you spoon the diced tomatoes directly from the can), and when you use store-bought dough it comes together in no time. We normally saute fresh spinach, but in trying to keep the ingredients to five or fewer, I'm subbing frozen instead. Don't let the simplicity of this recipe fool you - the way the flavors come together gives the finished pizza a complexity and depth of flavor that is seriously amazing! —weekend at bearnaise See what other Food52ers are saying.
https://food52.com/recipes/34599-spinach-and-blue-cheese-pizza-aka-the-best-pizza-in-the-world-ever-for-real
Michael Palmer is the Manager, External Customer Relations for the SUNCORP Group. The Suncorp group encompasses some of the most well known brands in the financial services industry such as AAMI, GIO & Shannon’s. The group has approximately 9 Million customers. The External Customer Relations team works with customers who have proceeded to an External Dispute Resolution scheme such as the AFCA, OAIC, AHRC etc. Michael is passionate about creating the best working environment for complaint handling professionals so they remain positive in a challenging workspace, and where they strive to rebuild trust with frustrated customers. Michael holds a bachelor of laws (hons) and a bachelor arts (Criminology). Prior to joining Suncorp Michael worked at the financial ombudsman service and in the health industry. Michael has been a member of SOCAP for the past 4 and a half years since he joined Suncorp.
http://socap.org.au/speaker/michael-palmer/
Based on my patient’s description of her domestic situation, I do not have reasonable cause to suspect that her husband’s behavior constitutes domestic violence as defined by the state in which I practice rheumatology; the mandatory reporting statute of my state does not apply. Nor do I have an ethical obligation to report this domestic situation. You Might Also Like Explore This IssueApril 2019 My determination of obligation in mandatory (legal) reporting should be grounded entirely on the facts as I understand them and the level of suspicion of domestic violence. The fact that my patient’s husband may become my patient should not influence any decision to appropriately report to the oversight institution if IPV is suspected. Conversely, if he were established as my patient, I would then have an ethical obligation to him as well, and his interests would necessarily enter into my calculus. Most would maintain, however, that a legitimate suspicion of spousal abuse would supersede the physician-patient relationship. In this clinical scenario, my patient’s relationship with her husband is significantly strained, and she is clearly suffering. Under the principle of beneficence, my ethical obligation is to help my patient in whatever way I can. I believe that both my patient and her husband would benefit from psychotherapy, couples counseling and/or treatment by a psychiatrist. There is no easy solution to this situation, but it is my ethical duty to provide access to the resources that would be most beneficial. I make this recommendation to her, and she is relieved to know that avenues for help are available. Conclusions Although this representative clinical scenario does not illustrate an example of domestic violence that must be reported, it does bring to light an important dimension of patient care. 
Physicians seek to advance the well-being of patients through clinical evaluation and management on a daily basis, but it is equally important—and perhaps more challenging—to ensure patient safety at home. It is entirely possible that you will encounter domestic violence or suspected abuse in your practice; given the numbers, many of us likely have already been confronted by a similar situation. Being familiar with the laws concerning mandatory reporting in your state will provide legal guidance to complex encounters. As physicians, we already seamlessly incorporate justice, patient autonomy, beneficence and non-maleficence into our daily medical practice; we should, therefore, view patient safety, including domestic violence, through the lens of these guiding ethical principles. Sarah F. Keller, MD, graduated from the Massachusetts General Hospital Rheumatology Fellowship Training Program in 2016. She practices rheumatology in Crestview Hills, Ky. Marcy B. Bolster, MD, is associate professor of medicine at Harvard Medical School and director of the Rheumatology Fellowship Training Program at Massachusetts General Hospital in Boston.
https://www.the-rheumatologist.org/article/patient-safety-at-home-what-are-our-legal-ethical-responsibilities/3/
Originally published in Model View Culture’s 2016 print edition. In late April of 2016, I attended the Aspiration California Nonprofit Technology Festival in Watsonville, gathering with other technologists, nonprofit workers, organizers and activists to exchange ideas on how to better work for social justice through technology. I led a session about alternative careers in tech after hearing that students from the Everett Program at nearby UC Santa Cruz were keen to hear about possibilities for making a decent living in tech while still being grounded in social and economic justice. Since that’s a pretty good description of my fourteen years in nonprofit technology, I was excited to share my experiences and advice. Our small group included two young Latinx women who, as they approached college graduation, were eager to figure out how to work in tech in ways that didn’t reflect the dominant culture of nearby Silicon Valley. They talked about social and economic justice, immigrant and women’s rights, anti-racism, and how they wanted to work those things into their careers. It was inspiring to talk with young women intent on blazing a different trail, and I was excited to offer some hope for a viable path:
https://contacted.org/2020/10/worker-coops-a-better-way-to-make-a-living-in-tech/
Phonetics is an important part of learning English. Many accent reduction courses involve comprehensive software and training materials that help people reduce their accents and improve confidence and intelligibility in mission-critical jobs. What is meant by phonetics? Merriam-Webster’s definition: phonetics a) the system of speech sounds of a language or group of languages b) the study and systematic classification of the sounds made in spoken utterance c) the practical application of this science to language study. The dictionary description continues: “It deals with their articulation (articulatory phonetics), their acoustic properties (acoustic phonetics), and how they combine to make syllables, words, and sentences (linguistic phonetics). The first phoneticians were Indian scholars (c. 300 BC) who tried to preserve the pronunciation of Sanskrit holy texts.” The individual sounds in a language are called ‘phonemes’. English has about 44 phonemes, although the number varies according to the accent or dialect with which the language is spoken. Phonetics also covers the study and teaching of word stress, sentence stress and intonation. Your ESL teacher is well equipped to work on these aspects of spoken language and therefore to really help you with your pronunciation in the classroom setting. Some classrooms will have phonetics software, or other interactive and written course materials. Phonetics software is so widely available on the internet that your teacher can refer you to phonetics websites offering the exercises that are most effective for an ESL learner. Teachers constantly work on their students’ pronunciation, since one of the most important objectives in an ESL course is to help you communicate in a second language. The teacher must assess whether the student’s pronunciation is adequate for their level and for the tasks required of them once they graduate. Phonetics is always an important aspect of ESL courses.
https://blog.talk.edu/grammar/phonetics/
Patient satisfaction and its impact on healthcare and health outcomes have been studied since the 1950s, when relationships between patients and healthcare providers were first examined.1 These relationships have become extremely complex as the healthcare industry has grown, and there is now legislation in the form of the Affordable Care Act (ACA) requiring that these relationships impact the business of healthcare in the form of reimbursement and consumerism. Operating as a service industry, healthcare has similarities to other firms whose goal is perfecting customer satisfaction and providing superior services or products. Yet Fenton and associates2 describe how high patient satisfaction is associated with higher mortality, the opposite of what healthcare aims to achieve. The purpose of this paper is to discuss how there has been a shift from the primary goal of medically treating patients to now treating them as consumers, and how patient satisfaction is changing the structure of healthcare. Historical Background In the early stages of examining patient satisfaction, the relationships between providers and patients were the main focus. It was found that patients experienced a lack of empathy, low levels of friendliness, and dissatisfaction with health services.1 Since the initial investigations, there have been numerous studies looking to determine what affects patient satisfaction, eventually leading to the current mentality that patients are “consumers of healthcare,” and that the healthcare industry should therefore shift towards a model of consumerism. Since the 1950s, several socioeconomic factors have become a reality and have changed patient-centered care.
Factors such as rising patient expectations, demand for greater transparency, and demand for immediate access to imaging and pharmacotherapy have impacted the direction of healthcare as an industry.3 Senić and Marinković1 found that the last service encounter experienced by a patient is usually the encounter on which they will rate their overall experience. In 2002, the Baldrige National Quality Program began presenting awards to healthcare organizations. The Baldrige National Quality Program works to “identify and recognize role-model businesses, establish criteria for evaluating improvement efforts, and disseminate and share best practices.”4 These awards signify that a healthcare organization serves as a role model in its field based upon overall success. The award measures the success of patient care outcomes and processes, patient satisfaction, workforce satisfaction, and financial market performance, all of which lead to a successful organization.4 While patient satisfaction and patient-centered care have always been important, the passage of the Affordable Care Act by the federal government in 2010 set the stage for value-based purchasing: a system in which payments to healthcare organizations will be impacted by patient satisfaction scores.5 Patient satisfaction is measured in many ways, but the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) scores are the most influential, as they make up 30% of the overall performance score for value-based purchasing. The paradigm suggests that improving patient-centered care and patient satisfaction will lead to better health outcomes.6 This has had a significant impact on the structure of healthcare and has led to changes in the priorities, goals, and objectives of many healthcare organizations. Impact of Patient Satisfaction on Healthcare Structure The implementation of the Affordable Care Act imposed major changes on the healthcare industry.
As medical coverage is extended to patients under the ACA, the number of patients entering the healthcare system is expected to reach 32 million, making it difficult for the current system to accommodate these new healthcare consumers.7 As patients make their way through the complex medical system, they will have a multitude of opportunities to complete surveys that will impact various organizations’ HCAHPS scores. They will rate categories including nurse communication, physician communication, responsiveness, pain management, medication communication, cleanliness, discharge information, overall rating, and likelihood to recommend. These scores based on patient experience will ultimately be tied to the reimbursement of physicians and healthcare organizations under a pay-for-performance model.8 With value-based purchasing becoming so important, multiple studies have examined the effects it would have on healthcare organizations in regard to financial impact and business models. Cliff5 found that by improving the patient experience, institutions can experience positive financial results. It was found that hospitals that rank among the best in inpatient satisfaction are also some of the most profitable and financially sound institutions. As with most businesses, anything that positively influences finances and payments is a strong motivator for success. With patient satisfaction being so closely tied to reimbursement and financial rewards, healthcare organizations have been motivated to improve the patient experience. Using HCAHPS categories as a basis for areas in which to improve, the healthcare industry has seen a shift in priorities, goals, and objectives. With managers being educated in business, healthcare has seen a trend towards providing tangible services and goods aimed at improving the patient experience while, at times, losing sight of a primary objective of medicine, which is to positively impact a patient’s health and well-being.
This can lead to increased healthcare costs that do not directly impact a patient’s health outcome. Fenton et al.2 describe the negative effects of pursuing high patient satisfaction scores. The implications of linking physician reimbursement to patient satisfaction have led to a change in the practice of medicine. In attempts to satisfy patients, physicians have begun to order unnecessary testing (laboratory and imaging studies) simply to avoid negative impacts on reimbursement. This type of practice can quickly lead to an increase in overall healthcare costs, which is a major factor in the healthcare crisis currently being experienced in the United States. Greater patient satisfaction has been associated with increased utilization of healthcare resources and ultimately an increase in healthcare spending.5 Healthcare executives, in conjunction with physicians, must use resources effectively and efficiently to positively impact the patient experience while also containing costs.9 Physicians undergo extensive schooling and post-graduate training so that they will know what to investigate, how to investigate it, and when it should be investigated. By succumbing to patient requests for unnecessary testing, physicians compromise their ideals solely out of concern over decreased reimbursement from dissatisfied patients. Healthcare as a Service Industry Service industries serve in the economy to provide services rather than tangible goods. These industries have become extremely focused on customer satisfaction to strengthen their places in competitive markets. Good customer service is extremely subjective, as each individual has their own idea of what is acceptable customer service. Service industries aim to provide individualized services, yet healthcare seems to be lagging by using generalizations from patient satisfaction surveys to provide for its customers.10 Healthcare is included in the service industry.
Therefore, it is being treated as any other organization and is not seen as unique. As the United States sees a shift in healthcare from treating patients to treating consumers, we are seeing an excessive amount of resources being used for advertising and marketing.11 The market for healthcare providers is vast; therefore, competition is increasing. When deciding where to receive elective healthcare, patients tend to overlook a hospital’s clinical outcomes and focus on the “window dressings” of free parking, food quality, guest internet access, and other amenities. These amenities aim to earn repeat business and recommendations from current patients. It is unfair to compare healthcare services, such as life-saving emergency care, surgery, chemotherapy, etc., to other services such as haircuts, online video streaming, and package delivery. Consumers of healthcare typically do not seek services because they are having a good day. Very few, if any, patients begin their day of receiving healthcare services in a good mood. They are preparing themselves for long waits, the potential for receiving bad news, receiving medications that may make them sick, having to endure a needle-stick to have laboratory work performed, and other unpleasant situations. There are few services where consumers do not expect positive outcomes. Experiences like these make healthcare unique when it comes to its inclusion in the service industry of economics.
As consumerism is empowered in healthcare, the system is being placed under increasing pressure to conform to customer satisfaction practices, leading to shifts in the goals of providing patient care.12 Healthcare providers are under significant stress with increasing demands from customers, payors, and government regulations; therefore, they feel that customer service is just one more thing added to their job requirements.13 Needham14 describes how patients, now considered consumers of healthcare, will begin to expect from healthcare what they expect from other service industries, such as value, convenience, and respect. Listening to customers for continuous feedback on their experiences is important for improving service quality, in both healthcare and traditional service industries.15 Healthcare leaders and management must have a strong foundation on which to improve patient satisfaction. This foundation should consist of empowering positive values and supporting change initiatives aimed at providing high-quality service.16 Patient Perceptions and Attitudes Messina and associates17 describe how patients present to hospitals and clinics with their own agendas and expectations regarding service and care. This is true of consumers in many industries, and in some ways, healthcare is no different. Meeting service expectations and setting standards of behavior play a role in healthcare, but must be modified in certain situations.18 Many patients bring their own expectations to provider encounters. This includes demanding certain unnecessary testing, prescriptions, or other services. Patients tend to be more satisfied when physicians fulfill their expectations, regardless of whether the services are necessary.2 This form of practice has led to inappropriate medication usage, increased risk of adverse reactions to unnecessary interventions, and increased healthcare expenditures, all under the fear of decreased reimbursement.
However, providing patient-centered visits in which the provider has time to discuss the patient’s concerns could improve patient satisfaction while remaining judicious in the use of resources. This requires longer patient-physician encounters, which are proving harder to accommodate in today’s healthcare system, as the nation is currently experiencing a physician shortage. Satisfied patients are more likely to be compliant with their medical care plan, ultimately leading to improved outcomes and more efficient utilization of healthcare resources.19 With regard to the noncompliant patient, one can see the issues with tying reimbursement to patient satisfaction. Fontenot6 describes how a physician who tries to improve a patient’s health by empowering them to take personal responsibility for their own care, when that is not what the patient wants to hear, will ultimately end up with a dissatisfied patient. These noncompliant patients’ attitudes toward their healthcare are poor, and yet it is the healthcare provider that is financially penalized. Patients seeking healthcare services have varying backgrounds, ranging from excellent overall health to extremely poor health with multiple chronic illnesses that require significant resources. The severity of patient illness impacts their perceptions of healthcare and the importance of its various aspects. Otani et al20 describe how patients with serious illnesses see patient-physician interactions and physician care as most important. As one might expect, patients who require frequent visits, procedures, and encounters with the healthcare system will have more opportunities to complete patient satisfaction surveys. As these patients are not regarded as being in good health, they are more likely to receive bad news, incur more healthcare debt, and require more resources, all leading to a higher possibility of dissatisfaction with their experiences as well as with the healthcare industry as a whole.
Satisfaction of Healthcare Employees The healthcare industry would not be able to operate without cooperation between healthcare executives/administration, physicians, nurses, ancillary staff, and ultimately patients. The attitude of members of the healthcare team impacts sixty percent of patient experiences, as well as patient perception of quality care and service.12 Ensuring employee satisfaction will likely indirectly increase patient satisfaction. Physicians play a significant role in the healthcare system, especially in the care of patients with poor overall health. With the aging of the “Baby Boomer” generation, chronically ill patients are becoming the norm. It should come as no surprise that empowering physicians and focusing on their satisfaction should be a top priority for healthcare management. From the beginning of their studies, and often before, physicians aim to provide a satisfying experience for their patients. This sentiment is often missed when discussing patient satisfaction. Handel21 detailed how influencing physicians by homing in on their pride, professionalism, and natural problem-solving abilities can provide a positive impact on patient satisfaction. Medical education has become increasingly focused on patient communication due to the impact it has on the patient experience and, ultimately, patient satisfaction. Ossoff and Thomason11 found that physicians’ bedside manner, and the way in which they interact with patients, continues to be one of the most important factors in achieving high patient satisfaction scores. Improving bedside manner involves improving how physicians listen to a patient, deliver information or bad news, allow patients and families to participate in medical decision making, and the respect they show towards patients. These traits can be applied to nursing and ancillary staff as well.
By influencing all members of the healthcare team to positively impact the patient experience, patient satisfaction is likely to improve. Shannon21 describes the impact of physician well-being on the patient experience. More satisfied physicians tend to have higher patient satisfaction scores; however, physician dissatisfaction and “burnout” are on the rise nationwide. A recent survey of currently practicing physicians demonstrated that nearly half of those surveyed would not choose medicine again as a career. Another worrisome statistic is that nearly 30 percent of practicing physicians are considering leaving the profession within the next two years due to “burnout.” The factors leading to physician dissatisfaction are complex, but some of the most common issues faced by practicing physicians involve healthcare reform, some of which have been exacerbated by the passage of the Affordable Care Act. Physicians are concerned that they will face reduced compensation and autonomy, along with worsening time constraints and increased pressure to complete administrative tasks, all due to greater access to healthcare among patients who were previously not in the market for healthcare services.21 As value-based purchasing becomes more prominent in the healthcare industry, it is imperative to focus on satisfied, engaged employees. Discussion As the information in this paper demonstrates, patient satisfaction is complex. There is no clear definition of patient satisfaction, and the idea is highly subjective. Implementing a standardized approach is not likely to be effective, as patients are looking for individualized care. Patient satisfaction is no longer focused simply on the patient experience; it has now become closely linked to reimbursement for physicians and healthcare organizations.
At present, many healthcare organizations are managed by business-minded individuals, as opposed to medically trained personnel, leading to healthcare being run as a traditional service industry. Recognizing that the healthcare industry is unique in its role of providing services is of utmost importance. Tailoring to the unique aspects of healthcare will allow for overall satisfaction, from patients to employees. The healthcare system must not lose sight of its primary objective: providing world-class patient care to improve patient health and well-being in a safe manner. Tailoring patient care and amenities to provide positive patient experiences should also be considered important, but should not overshadow the importance of improving health outcomes. Unfortunately, higher patient satisfaction scores have been associated with higher mortality. Limitations of the Research The research included in this paper is not without limitations. Patient satisfaction is highly subjective and easily influenced. There are many confounding variables when discussing patient satisfaction, increasing the complexity of research studies. A majority of the research involves patient surveys and questionnaires, which reflect patients’ personal views that are then applied to a generalized population. Human nature gives patients a tendency to begin an experience with a predetermined expectation of how satisfying their encounter will be. Also, patient satisfaction surveys are conducted after each patient encounter, with a majority of visits involving patients with chronic medical conditions who are in poor health and often do not have good outcomes. The majority of the research studies referenced in this paper pertain to healthcare in the United States of America. It would be inappropriate to generalize these findings to healthcare around the world, as healthcare systems vary from country to country.
Healthcare is changing on a daily basis; therefore, some data can quickly become outdated, increasing the need for ongoing research in patient satisfaction. Future Research Future research should be aimed at the comparison between healthcare systems and traditional service organizations. Also, as healthcare reform continues to be a major political and economic topic, the current healthcare system will likely change. As value-based purchasing is a relatively new concept, extensive research will be needed on its true impact on patient satisfaction and overall health outcomes. With a person’s health being a true determinant of satisfaction with life, anything that can potentially have a positive impact on a person’s health should be investigated thoroughly. Summary In summary, patient satisfaction has overtaken the structure of the healthcare system. There has been a paradigm shift in healthcare from providing excellent medical care to providing services and/or goods focused on improving patient satisfaction. This model of healthcare has the potential to have a negative impact on the healthcare system, as evidenced by decreased physician satisfaction and increased burnout. The nation is already facing a physician shortage, and the concern for the future is that taking the focus away from providing medical care in order to make customers happy will lead to more physicians, nurses, and ancillary staff leaving the healthcare industry. The role of patient satisfaction in the healthcare system is not straightforward; it is quite complex. With the implementation of the Affordable Care Act and value-based purchasing, the structure of healthcare is in for a change. We are seeing the financial impact of patient satisfaction on healthcare organizations, which is driving the business of medicine to change, as financial stability is a strong motivator of business.
With patients being considered as consumers, healthcare is seeing a trend towards increased competition, and therefore we are seeing increased expenses for marketing and advertising while healthcare organizations are seeking a competitive advantage. Healthcare reform is a quite popular and complicated topic that is under the control of governmental agencies, nearly all of which are headed by non-medical personnel. This is likely contributing to a shift in medicine towards a business model focused on improving patient satisfaction. As healthcare reform evolves, there are likely to be significant changes to the healthcare experience. References - Senić V, Marinković V. Patient care, satisfaction and service quality in health care. International Journal of Consumer Studies. 2012;37(3):312-319. doi:10.1111/j.1470-6431.2012.01132. - Fenton JJ, Jerant AF, Bertakis KD, Franks P. The Cost of Satisfaction: A National Study of Patient Satisfaction, Health Care Utilization, Expenditures, and Mortality. Arch Intern Med. 2012;172(5):405–411. doi:10.1001/archinternmed.2011.1662 - Cliff B. The evolution of patient-centered care. Journal of Healthcare Management. 2012;57(2):86-88. - Griffith JR. Understanding high-reliability organizations: are Baldrige recipients models? Journal of Healthcare Management. 2015;60(1). - Cliff B. Excellence in patient satisfaction within a patient-centered culture. Journal of Healthcare Management. 2012;57(3):157-159. - Fontenot SF. Will patients’ happiness lead to better health? The ACA and reimbursements. Physician Leadership Journal. 2014;1(1):28-31. - McCaughey D, Erwin CO, DelliFraine JL. Improving capacity management in the emergency department: A review of the literature, 2000-2012. Journal of Healthcare Management. 2015;60(1):63-75. - Stanowski AC, Simpson K, White A. Pay for performance: are hospitals becoming more efficient in improving their patient experience? Journal of Healthcare Management. 2015;60(4):268-286. 
- McCaughey D, Stalley S, Williams E. Examining the effect of EVS spending on HCAHPS scores: A value optimization matrix for expense management. Journal of Healthcare Management. 2013;58(5):320-334. - Powers B, Navathe AS, Jain S. How to Deliver Patient-Centered Care: Learn from Service Industries. Harvard Business Review. April 2013. https://hbr.org/2013/04/how-to-deliver-patient-centere.html. - Ossoff RH, Thomason CD. The role of the physician in patient satisfaction. Journal of Health Care Compliance. 2012;14(1):57-72. - Lanser May E. Diagnosing the patient experience. Healthcare Executive. 2015;30(4):20-30. - Mayer T. Leadership for great customer service. Getting the “why” right before mastering the “how”. Healthcare Executive. 2010;25(3). - Needham BR. The truth about patient experience: what we can learn from other industries, and how three P’s can improve health outcomes, strengthen brands, and delight customers. Journal of Healthcare Management. 2012;57(4):255-263. - Kennedy DM, Caselli RJ, Berry LL. A roadmap for improving healthcare service quality. Journal of Healthcare Management. 2011;56(6):385-400. - Arbab Kash B, Spaulding A, Johnson CE, Gamm L. Success factors for strategic change initiatives: A qualitative study of healthcare administrators’ perspectives. Journal of Healthcare Management. 2014;59(1):65-81. - Messina DJ, Scotti DJ, Ganey R, Zipp GP. The relationship between patient satisfaction and inpatient admissions across teaching and nonteaching hospitals. Journal of Healthcare Management. 2009;54(3):177-190. - Scott G. The six elements of customer service: achieving a sustained, organization-wide commitment to excellence improves customer and employee satisfaction. Healthcare Executive. 2013;28(1):64-67. - Otani K, Ye S, Chumbler NR, Judy Z, Herrmann PA, Kurz RS. The impact of self-rated health status on patient satisfaction integration process. Journal of Healthcare Management. 2015;60(3):205-218. - Otani K, Waterman B, Dunagan WC. 
Patient satisfaction: how patient health conditions influence their satisfaction. Journal of Healthcare Management. 2012;57(4):276-292. - Handel DA, Delorio N, Yackel TR. The five P’S to influence physicians. Physician Leadership Journal. 2014;1(2):24-26 - Shannon D. Physician well-being: a powerful way to improve the patient experience. Physician Executive. 2013;39(4):6-12.
https://www.acoep-rso.org/the-fast-track/the-business-of-healthcare-how-patient-satisfaction-plays-a-role/
Statistical control based on covariance analysis procedures The statistical method of analysis of covariance (ANCOVA) is a blend of the methods of analysis of variance and regression analysis. As a method of indirect, statistical control, it is used whenever the researcher in an intergroup experiment lacks the ability to select equivalent groups using the standard primary control schemes, or when applying such schemes is undesirable and may threaten the external validity of the experiment. Because a consequence of abandoning the equivalence of the experimental groups may be a systematic confounding of the independent variable under study with some secondary variable, the values of the dependent variable measured during the experiment must be adjusted on the basis of previously measured values of that secondary variable. In covariance analysis, the dependent variable is called the variate, or criterion, and the secondary variable on whose basis the adjustment is made is called the covariate, or predictor. The design of an experiment using this form of control is given in Table 14.1.

Table 14.1. Quasi-experimental design using indirect, statistical control

| Level 1: Covariate (X) | Level 1: Variate (Y) | … | Level j: Covariate (X) | Level j: Variate (Y) | … | Level k: Covariate (X) | Level k: Variate (Y) |
|---|---|---|---|---|---|---|---|
| X_11 | Y_11 | … | X_1j | Y_1j | … | X_1k | Y_1k |
| … | … | … | … | … | … | … | … |
| X_n1 | Y_n1 | … | X_nj | Y_nj | … | X_nk | Y_nk |

As the table shows, two measurements of the dependent variable are made at each level of the independent variable. One measurement is made before the experimental treatment is delivered; it gives the value of the covariate. The second measurement is made after the experimental treatment; it gives the value of the variate.
Imagine that the researcher's task is to compare the effectiveness of different methods of teaching the same academic discipline, say a foreign language. In the standard case of a true experimental design, it would be necessary to form two homogeneous groups of subjects whose initial level of knowledge of the discipline was approximately equal on average. Suppose, however, that in a real situation the researcher is forced to deal with already formed study groups. For example, a psychologist conducting research in an ordinary secondary school wants to compare success in mastering a foreign language in two parallel classes in which the teaching methods for this subject differ. Clearly, in such a situation the researcher can hardly demand that classes recruited several years before the study be reconstituted. Therefore, unable to equalize the classes, the experimenter assesses each subject's initial command of the discipline before the study begins. This gives the values of the covariate, which will be used as a control condition in assessing the overall effectiveness of the teaching. Clearly, if differences in the mean values of the covariate between the experimental groups are found at the outset, this fact indicates the nonequivalence of the experimental groups themselves. Consequently, any differences in the mean values of the variate observed in the results of the experiment could be attributed either to the effect of the independent variable (the method of teaching the discipline) or to the original nonequivalence of the experimental groups. Covariance analysis therefore computes adjusted values of the variate on the basis of the covariate values, using the logic of simple linear regression.
The adjusted values are then compared according to the rules of analysis of variance already familiar to us. Let us consider this procedure in more detail. For the data within an arbitrarily chosen experimental group, the regression of the variate on the covariate can be given by the following linear equation:

\hat{Y}_{ij} = \bar{Y}_j + b_W (X_{ij} - \bar{X}_j),

where b_W is the coefficient of a simple linear regression pooled within groups. The residual sum of squares for this regression, reflecting the effect of the experimental error, is described by the following relationship:

SS_{error(adj)} = SS_{Y(within)} - \frac{SP_{XY(within)}^2}{SS_{X(within)}}.

This statistic has k(n - 1) - 1 degrees of freedom. Thus, the corrected mean square (variance) for the effect of experimental error, expressing the effect of possible secondary variables, can be expressed by the formula:

MS_{error} = \frac{SS_{error(adj)}}{k(n - 1) - 1}.

The linear regression equation for an arbitrary pair of the criterion Y_{ij} and the covariate X_{ij} of subject i in group j, taken without regard to group membership, looks like this:

\hat{Y}_{ij} = \bar{Y} + b_T (X_{ij} - \bar{X}),

where b_T is the regression coefficient computed over all observations. The residual sum of squares for this regression equation has the following form:

SS_{total(adj)} = SS_{Y(total)} - \frac{SP_{XY(total)}^2}{SS_{X(total)}}.

This expression corresponds to the total variance of the adjusted data. If we have k groups of n observations each, this statistic has kn - 2 degrees of freedom. The variance (mean square) of the effect of the experimental treatment is calculated from the difference of the two residual sums of squares:

MS_{treatment} = \frac{SS_{total(adj)} - SS_{error(adj)}}{k - 1}.

The statistics constructed in this way are used in the standard way to form an F-ratio reflecting the relation between the variance of the experimental treatment and the variance of error:

F = \frac{MS_{treatment}}{MS_{error}},

with k - 1 degrees of freedom for the numerator and k(n - 1) - 1 degrees of freedom for the denominator. The statistical significance of this ratio (its p-level) is estimated from the standard F-distribution. The logic of such an analysis does not differ from the logic of the usual analysis of variance (ANOVA).
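As an illustration, the sums of squares and the resulting F-ratio described above can be computed directly. This is a minimal sketch, not taken from the source; `ancova_f` is a hypothetical helper name, and its input is assumed to be a list of (covariate values, variate values) pairs, one pair per level of the independent variable.

```python
# Sketch of a one-way ANCOVA F-ratio, following the formulas above.

def _ss(vals, mean):
    """Sum of squared deviations from a mean."""
    return sum((v - mean) ** 2 for v in vals)

def _sp(xs, ys, mx, my):
    """Sum of cross-products of deviations."""
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys))

def ancova_f(groups):
    """groups: list of (covariate_list, variate_list), one per level.

    Returns (F, df_numerator, df_denominator).
    """
    all_x = [x for xs, _ in groups for x in xs]
    all_y = [y for _, ys in groups for y in ys]
    n_total, k = len(all_x), len(groups)
    gx, gy = sum(all_x) / n_total, sum(all_y) / n_total

    # Total sums of squares / cross-products (all observations pooled)
    ssx_t, ssy_t = _ss(all_x, gx), _ss(all_y, gy)
    spxy_t = _sp(all_x, all_y, gx, gy)

    # Within-group (error) sums of squares / cross-products
    ssx_w = ssy_w = spxy_w = 0.0
    for xs, ys in groups:
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        ssx_w += _ss(xs, mx)
        ssy_w += _ss(ys, my)
        spxy_w += _sp(xs, ys, mx, my)

    # Covariate-adjusted sums of squares
    ss_err_adj = ssy_w - spxy_w ** 2 / ssx_w    # df = N - k - 1, i.e. k(n-1)-1
    ss_tot_adj = ssy_t - spxy_t ** 2 / ssx_t    # df = N - 2, i.e. kn - 2
    ss_treat_adj = ss_tot_adj - ss_err_adj      # df = k - 1

    df_num, df_den = k - 1, n_total - k - 1
    f = (ss_treat_adj / df_num) / (ss_err_adj / df_den)
    return f, df_num, df_den
```

For example, with two groups of three subjects each, `ancova_f([([1, 2, 3], [2, 4, 5]), ([1, 2, 3], [3, 5, 8])])` yields F = 9.375 with 1 and 3 degrees of freedom; the p-level would then be read from the F-distribution.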
The method of covariance analysis makes it possible to evaluate not only the overall effect of the independent variable, corrected for the values of the covariate, but also any contrasts, both pairwise and multiple. There is also a factorial version of covariance analysis, which allows statistical control in more complex quasi-experimental designs that investigate the effects not of one but of several independent variables.
https://testmyprep.com/subject/psychology/statistical-control-based-on-covariance-analysis
Scientists have recorded the first footage of a giant squid hunting in the wild. The elusive creatures are notoriously difficult to film, as their habitat is thousands of feet under the sea, where it's dark and the crushing pressure of the water requires specialist equipment. While several dead specimens have washed up on shore, the first still images of a living giant squid in the wild weren't recorded until 2004, and video wasn't obtained until 2012. Now marine biologists have, for the first time, captured footage of Architeuthis dux hunting prey in the wild. The footage was captured in 2019, but researchers have only now released their analysis of the creature's behavior. A special platform with a built-in camera captured the elusive sea creature attacking a decoy in the Gulf of Mexico nearly 2,500 feet beneath the surface. The decoy, called E-Jelly, was designed to attract the squid by imitating the bioluminescence given off by a jellyfish in distress. Experts had previously believed the squid waited to ambush its prey, but the video shows it stalking the E-Jelly before going in for the kill. The giant squid, formally known as Architeuthis dux, is one of the most elusive creatures of the inky black deep. It can grow more than 40 feet long, from fin to tentacle, and has eyes the size of basketballs. Its existence has inspired legends of the kraken and other sea monsters for centuries, 'yet our knowledge of the large deep-sea cephalopods that inspired this myth remains limited,' researchers wrote in a new report published in the journal Deep Sea Research Part I: Oceanographic Research Papers. Because the squid's habitat can be more than a half-mile beneath the surface, researchers have had to rely on robotic submersibles to search for it. But the noise and bright lights of a drone can scare off the light-sensitive squid.
So researchers with the National Oceanic and Atmospheric Administration (NOAA) built a 'trap': a platform with a built-in camera to passively draw the squid in. They baited the remote-controlled platform, dubbed the Medusa, with the E-Jelly, a contraption with lights that mimic the bioluminescence a jellyfish emits when it's in danger. Giant squids' eyes focus on shorter-wavelength blue light, so the researchers used red light, which the squid can't see as well, to capture the encounter. The Medusa was deployed in several locations in the Gulf of Mexico and the Exuma Sound in the Bahamas, and the trap worked on smaller types of squid as well. In 2004 and 2005 it attracted two squid, possibly Promachoteuthis sloani, in the Gulf of Mexico and Exuma Sound. Previously only juveniles under four inches long had been observed, but these were adults, with bodies over a foot long. Almost a decade later, in 2013, the camera recorded another squid, a half-foot-long Pholidoteuthis adami. Adami, which is common in the Gulf of Mexico and the Eastern US, can grow more than two feet long—but it's no giant squid. Finally, in June 2019, an Architeuthis dux made an appearance at a depth of approximately 2,490 feet in the Gulf of Mexico off the coast of Mobile, Alabama. It marked the first time a giant squid had been filmed in US waters. Using the E-Jelly as a reference point, researchers estimate the giant squid was at least 13 feet long—likely a juvenile.
At 13 feet long (not including tentacles), the specimen isn't the largest giant squid ever, but it provided key information on the species' predatory habits. The squid swam around the platform for several minutes before going in for the 'kill.' 'It comes right in, shoots its arms out [and] wraps its arms around the E-Jelly,' researcher Nathan Robinson told New Scientist. Some scientists had previously theorized that the giant squid simply ambushed its prey—because it is so big, it wouldn't expend energy to hunt. But the video footage suggests it stalks before attacking—and uses its saucer-like eyes to find meals. 'You feel very alive,' Robinson said. 'There's something instinctual about these animals that captures the imagination of everyone — the wonder that there are these huge animals out there on our planet that we know so little about, and that we've only caught on camera a couple of times.' Robinson and his colleagues are eager to refine their technique to further observe Architeuthis dux and other cephalopods. 'These encounters suggest that unobtrusive camera platforms with luminescent lures are effective tools for attracting and studying large deep-sea squids,' they wrote.
The socialist society envisaged by Karl Marx can only be built on the achievements of capitalism and what has been called its civilising mission. This progress rests on an enormously increased productivity of labour, which has reached such a level that the productive forces of society now realistically promise a society that more and more meets the needs of all its members, with inequality and insecurity vastly reduced and material poverty eliminated. Within capitalism, progress inevitably involves increased exploitation since exploitation of labour is how this society increases productivity. But progress there has undoubtedly been and without it socialism would not be possible. Capitalism has created this possibility but capitalism now stands in its way. When I first became interested in socialist politics in the mid-seventies I used to visit the Communist Party bookshop on High Street in Glasgow. I remember picking up a CP pamphlet extolling the virtues of the ‘socialist’ countries of Eastern Europe and the USSR. It set out the daily calorific intake of the average citizen in a number of these countries with East Germany the top performer. Even at the time this jarred and seemed somewhat disappointing. I was by no means rich. I lived in a tenement with an outside toilet and shared a bedroom with my sister, while my mother slept in the living room. But I never once thought that I was going to suffer from a lack of calories; in fact I barely thought about food and was too busy running around to worry about it. Now of course, in the space of less than a lifetime, a problem in the most developed capitalist societies is not a lack of calories for the average working class person, which I knew from my Scottish granny had been a problem in the past, but too many calories! Reading some material on inequality and its effects, as argued in the book ‘The Spirit Level’, I came across some quotes that illustrated how very different the problem is now. 
Now the stereotypical poor person is overweight or obese, or rather the latter are nearly always working class or poor, while the equivalent rich person is slim and healthy. The capitalist food and drink industry specialises in feeding fatburgers and sugar-filled drinks to the poor while offering exotic sounding pulses, vegetables and bottled water in delicatessens for the discerning middle class. I exaggerate of course; this is a distorted caricature albeit with a grain of truth, but the most important truth is that in many countries, for the vast majority of the population, an adequate food supply is not a problem. Problems with its supply lie elsewhere, including in the exploitation of the humanity and nature that ensures its production. While the productive forces of society more and more are capable of offering increased economic security, freedom from social stress and worry, and a promise of a fulfilling life, capitalism is more and more demanding that this promise can be offered for only some and on more and more unacceptable terms. These terms include zero hour contracts, massive increases in debt, an absence of rights in the workplace and increasing threats to political rights outside it. Working into your seventies is now the prospect for those in their youth and young adulthood. Nevertheless, despite all this, it is unquestionable that progress has been made. Had it not, then on what grounds could we claim that all these impositions and threats are unnecessary? That an alternative is eminently possible? A second aspect of this progress is that because it is capitalist progress it is accompanied by repeated crises, which can lead to sometimes dramatic falls in living standards for some, and constant insecurity and increased exploitation for many others; who are required to work longer and harder and with relatively less remuneration while having less and less security over their employment. 
The financial crisis has come and many think it has also gone, the answer to it being austerity and the bankers going back to business as usual. Severe world-wide recession threatened after 2008, followed by crisis in the Eurozone and crises in developing countries as commodity prices fell. This was only partially offset by continued growth in China, which is now also threatened by a similar credit boom and overcapacity. From being the fastest growing country in the west, the UK is now slowing dramatically, while the Irish State, although it crashed, is now supposedly booming. These booms and busts make crisis appear a constant threat, the boom period demonstrating the legitimacy of capitalism and the bust demonstrating the difficulty of, and for, an alternative. For many these crises are proof that the contradictions of capitalism are insurmountable, are intrinsic to the system and cannot be escaped. Just as progress under capitalism is built upon exploitation, so it is also achieved through crises. It is crises that most violently reorganise production and ensure its further development. Crises therefore not only express the irrationality of capitalism but also its rationality, its ability to achieve further development through destruction. The most common alternative understanding is one that proposes that the system can be cleansed of its most irrational aspects while also ensuring that the growth that characterises capitalism can continue, and even increase. The private greed that disfigures the system can be ameliorated by the state, which can be regarded as the representative of society as a whole and can act on its behalf. Freeing this state from the direct and indirect control of the 1% is therefore the most important task. Marxists question this alternative and point out that inequality is not primarily a feature of market outcomes, of inequality of income, of working conditions, employment, housing and general welfare.
It is a question of utter and complete inequality in the conditions of production that generates income inequality and all the other inequalities that condition the general welfare of the majority of society. What is distributed, and is considered fair distribution, is determined by how the wealth of society is produced in the first place. Marx put it like this – “before distribution can be the distribution of products; it is (1) the distribution of the instruments of production, and (2), which is a further specification of the same relation, the distribution of the members of society among the different kinds of production. (Subsumption of the individuals under specific relations of production). The distribution of products is evidently only a result of this distribution, which is comprised within the process of production itself and determines the structure of production.” If the means by which the wealth in society is produced is not owned in common, by everyone, but by a small number so becoming a separate class, then the distribution of income and wealth that flows from this production will primarily benefit this class. This is why we have massive increases in productivity and material wealth but it is accompanied by increased exploitation and inequality. Why it is accompanied by crises, in which private appropriation of the fruits of production, and of the means of production itself, conflict with the greater and greater cooperation required to make this production possible. Nor do Marxists believe that the state is the true representative of society as a whole. It is not ‘captured’ by the 1%, its functions are determined by the structure of society as a whole, by the fact that the means of production belong to a separate tiny class. The state can adjust, within limits, inequality of income, housing and working conditions but it cannot fundamentally adjust the ownership of production that is the guarantee of general inequality. 
In acting to defend the regular and ordered functioning of society, it must by this fact alone defend society's fundamental structure, lest any radical change threaten its stability or the stability of the state itself. And even if this were not the case, the argument for a socialism based on ownership of production by the state has foundered on the experience of the 'socialist' states in Eastern Europe and the USSR, which, before their collapse, could boast that their system fed their people. Marx's alternative is not based on the state, which is the instrument of capitalist rule, but on the progress that capitalism has created: its development of the productivity of labour and, most importantly, the labour itself that performs this productive work. Marx's alternative is therefore based on the working class and its potential to control society. Crises demonstrate the necessity of an alternative but do not in themselves create that alternative. They can demonstrate what is wrong, but the question is what it is possible to replace this system with. Only if the contradictions which give rise to crises contain within themselves their progressive resolution is it possible for there to be a progressive alternative to capitalism. So, what is the nature of the contradiction that Marx identified that promises that a fundamentally different society is possible?
https://irishmarxism.net/2017/08/11/karl-marxs-alternative-to-capitalism-part-14/?replytocom=15208
What do Europeans expect from nanotechnologies? Where and how would they like them to be used? In terms of Responsible Research and Innovation, the nanotechnology community is called upon to be responsive to the hopes and concerns of citizens, and to take these into account in research- and policy-making. Overall, respondents felt that nanotechnologies will have a positive effect both on “our overall way of life” and on European economies. Impacts of nanotechnologies on the environment and on the safety of European society were viewed with less confidence overall, although positive views were in the majority here too. Regarding the different applications or product areas, respondents were less enthusiastic about products that are used close to one's body, such as food, cosmetics or textiles, with the sole exception of medicine, where the use of nanotechnologies was considered positive by most of the participants. Respondents almost unanimously welcomed application areas that could be directly linked to societal challenges, such as climate change. Public institutions, companies and CSOs were all appraised as important nanotechnology communicators. When it comes to different media, print and TV still outweighed social media and the Internet. While surveys can offer a baseline of quantitative information on public perceptions to be considered in research and policy, this information should be deepened and complemented with qualitative methods: the preferences of citizens can result from a number of different conceptions, hopes and fears. The results thus hopefully function as an invitation to stakeholders: as encouragement to dig deeper into the findings, and to engage the public on their preferences and needs, in order to determine how such views could be taken into account when forming research and policy.
As involving public perceptions is not always as easy as it sounds, NanoDiode carried out additional in-depth stakeholder interviews in six partner countries, setting out to probe this important yet difficult question of responsiveness: What kind of role should public perceptions and opinions have in the use and development of nanotechnologies? Where exactly should the attitudes, hopes and fears present in the general public flow into? Who should take them into account, and how? The last part of this report gives recommendations to stakeholders for fostering responsiveness in research and policy. To improve the chances of responsiveness, the involvement of the public needs to be backed by the leadership of the respective organisations and must take place early enough: in the later stages of the innovation chain, the resources that have already been invested make true consideration of public preferences more difficult. Beyond that, the methods for dialogue and discussion need to be chosen according to the target group and context.
http://www.nanodiode.eu/publication/towards-responsive-research-policy-report-nanodiode-citizens-survey-depth-interviews/
Analysis of images to detect and quantify spatial variations in deformation is important for understanding the health and disposition of, for example, materials, structures, and tissues. A standard approach for such analysis involves estimating displacement fields inferred by comparing images of the sample taken at different times or under different conditions. Displacement field, as used herein, refers to a spatial distribution of displacements of locations within a sample between a first image and a second image. The most broadly used method involves matching image intensities over a grid of regions of a sample before and after the sample is deformed, then differentiating the resulting displacement fields numerically to estimate the tensor of strains that describes the spatial distribution of deformation. Displacement field estimation can be improved dramatically for large deformations through the Lucas-Kanade algorithm that applies and optimizes a warping function to the undeformed image before matching it to a deformed image; this may also be achieved by applying the Lucas-Kanade algorithm in the reverse direction by optimizing a warping function to the deformed image before matching it to the undeformed image. Strain tensors estimated through these optical approaches underlie much of quantitative cell mechanics, using a technique that compares images of a deformable medium contracted by cells to images of the same medium after deactivation or removal of the cells. Similar approaches have been used to study collective cell motion, tissue morphogenesis, and tissue mechanics. More generally, these tools are standard in the non-destructive evaluation of materials, structures, and tissues using optical techniques. However, these methods are subject to large errors when strain is high or localized. Specifically, small inaccuracies in displacement estimation become amplified through the numerical differentiation needed to estimate strain tensors. 
Minor mis-tracking of a single displacement can lead to an artifact that is typically indistinguishable from a region of concentrated strain. Although accuracy can be improved by incorporating into the image matching algorithm a mathematical model that describes how a specific tissue deforms, such techniques cannot be applied to a tissue whose properties are not known a priori.
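To make the error-amplification point concrete, here is a small illustrative sketch (not the method of this disclosure): a displacement field for a uniform 1% stretch is differentiated with central differences, and then a single displacement is mis-tracked by half a pixel. The grid size and the `strain_xx` helper name are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch: numerical differentiation of a displacement field
# amplifies one mis-tracked displacement into a spurious strain hotspot.

def strain_xx(u_x, spacing=1.0):
    """Normal strain e_xx = d(u_x)/dx, estimated by central differences."""
    return np.gradient(u_x, spacing, axis=1)

# Displacement field for a uniform 1% stretch along x on a 10x10 grid:
# u_x(x, y) = 0.01 * x
x = np.arange(10, dtype=float)
u = np.tile(0.01 * x, (10, 1))
e = strain_xx(u)          # ~0.01 everywhere, as expected

# Mis-track a single displacement by half a pixel
u_bad = u.copy()
u_bad[5, 5] += 0.5
e_bad = strain_xx(u_bad)
# Grid points adjacent to (5, 5) now report a strain of roughly 0.26 --
# an artifact indistinguishable from a region of concentrated strain.
```

The half-pixel error never changes the true deformation, yet after differentiation it appears as a localized strain roughly 25 times the background value, which is exactly the failure mode described above.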
Agoraphobic avoidance in patients with psychosis: Severity and response to automated VR therapy in a secondary analysis of a randomised controlled clinical trial. Freeman D., Lambe S., Galal U., Yu L-M., Kabir T., Petit A., Rosebrock L., Dudley R., Chapman K., Morrison A., O'Regan E., Murphy E., Aynsworth C., Jones J., Powling R., Grabey J., Rovira A., Freeman J., Clark DM., Waite F. BACKGROUND: The social withdrawal of many patients with psychosis can be conceptualised as agoraphobic avoidance due to a range of long-standing fears. We hypothesised that greater severity of agoraphobic avoidance is associated with higher levels of psychiatric symptoms and lower levels of quality of life. We also hypothesised that patients with severe agoraphobic avoidance would experience a range of benefits from an automated virtual reality (VR) therapy that allows them to practise everyday anxiety-provoking situations in simulated environments. METHODS: 345 patients with psychosis in a randomised controlled trial were categorised into average, moderate, high, and severe avoidance groups using the Oxford Agoraphobic Avoidance Scale. Associations of agoraphobia severity with symptom and functioning variables, and response over six months to brief automated VR therapy (gameChange), were tested. RESULTS: Greater severity of agoraphobic avoidance was associated with higher levels of persecutory ideation, auditory hallucinations, depression, hopelessness, and threat cognitions, and lower levels of meaningful activity, quality of life, and perceptions of recovery. Patients with severe agoraphobia showed the greatest benefits with gameChange VR therapy, with significant improvements at end of treatment in agoraphobic avoidance, agoraphobic distress, ideas of reference, persecutory ideation, paranoia worries, recovering quality of life, and perceived recovery, but no significant improvements in depression, suicidal ideation, or health-related quality of life. 
CONCLUSIONS: Patients with psychosis with severe agoraphobic avoidance, such as being unable to leave the home, have high clinical need. Automated VR therapy can deliver clinical improvement in agoraphobia for these patients, leading to a number of wider benefits.
https://www.psych.ox.ac.uk/publications/1300813
For the first time, the 24th of January has been declared the International Day of Education, as earmarked in the UN calendar. This day is an opportunity for civil society, education stakeholders and partners to celebrate and to reflect deeply on the ongoing global education crisis. With millions of children out of school and illiterate, the world cannot sit back and keep quiet while children, the world's future leaders, are deprived of their fundamental human right: education. Sustainable Development Goal 4 (SDG 4) demands inclusive and equitable quality education and the promotion of “lifelong learning opportunities for all”. Learning is paramount to all the sustainable development goals. Education eradicates poverty, boosts prosperity and fosters peaceful, just and inclusive societies. 2019 is a crucial year for education. The world is a decade away from achieving the ambitious 2030 Agenda. Yet today the world's pursuit of sustainable development and education goals continues to encounter extreme pressure and deep challenges. While positive steps have been taken to acknowledge and improve the status of education worldwide, statistics tell us we have a long way to go, and much to do, to ensure that each and every child across the globe can exercise their right to a free, equitable, quality education from early childhood. So today, as the world celebrates the role of education in the fulfilment of commitments and the achievement of targets towards sustained peace and development, all countries must reflect on where we are in terms of the Incheon Declaration for Education 2030, which sets the vision for education for the next fifteen years. Globally, education is in crisis. There is growing recognition of education as the equalising factor in attaining the SDG goals, but the world is falling behind in meeting its objectives. And there are a few reasons for this.
Inequality and gender inequality in education

The widening inequality gap in education is evident in the links between social status and schooling: the haves and the have-nots. There are a handful of elite parents around the world who choose to enrol their children in prestigious private schools and universities in order to maintain their status and privileges, while those from disadvantaged backgrounds, due to socio-political interference and circumstance, are left with no choice but to send their children walking miles from home to attend poorly funded public schools, their only hope of providing them a better future. Inequality not only speaks to one's capacity to overcome imposed societal challenges and the inability to sufficiently provide financially for one's family and needs; it also includes broader systemic issues, such as gender. As the famous saying goes, “you educate a girl, you educate a nation”, yet the 2018 World Bank report “Missed Opportunities: The High Cost of Not Educating Girls” paints a grim picture. Globally, girls are still on the lower end of attaining education in comparison to boys: “Globally, nine in ten girls complete their primary education, but only three in four complete their lower secondary education.” The fact that research today indicates these gender disparities in education still negatively impact the trajectory of girls around the world is yet another example of society failing its girl children. Many young girls forced to drop out of school endure early child marriage and lower expected income in adulthood, thus increasing poverty in households. The world cannot afford a society that disempowers girls, marginalises women, silences their voices and deprives nations of equitable, sustainable and inclusive development.

Education in crisis countries – the case of Yemen

Yemen is riven by civil conflict.
Lack of access to basic social services, induced poverty and starvation displace hundreds of thousands of people and keep millions of children out of classrooms. Education is the major casualty in this crisis, taking down with it an entire future generation of children in Yemen. According to UN reports, a total of 2 million children have been out of school since 2015. Adding insult to injury, the country faces a severe shortage of paid teachers, and over 2,000 schools now serve as shelters for the displaced or the army. The unjustifiable repercussions of conflict in war-torn countries around the world can only result in heightened states of global existential angst, which world leaders must adequately address. As the world confronts the consequences of catastrophic climate change, it is anticipated that the number of conflicts in the most affected countries will increase, and that millions more will migrate. World leaders have a duty to ensure the right to education is realised for internally displaced persons and migrants. Ignoring the education crisis in conflict countries or emergency situations is a human rights atrocity.

Privatisation in and of education

Education is and remains a public good, and governments are the sole duty bearers accountable for this fundamental human right. The rise and growth of privatisation in and of education should invigorate debates and tangible actions around domestic education financing that lead to a truly transformative education system that benefits and empowers all communities and individuals. As was recently outlined in the media, the commodification of education exacerbates societal disparities and increases social exclusion, especially in low-income countries where equitable public education should be prioritised. In Kampala, Uganda, for example, low-cost private, for-profit schools increased by eight percent in 2015 and account for over eighty percent of children in school.
A recent report from Mauritania indicates an increase in the number of private schools from 417 to 702 between 2016 and 2017. Investors in these unregulated private mechanisms of education must realise the precarious danger of isolating the global education crisis and blaming it squarely on broken-down political systems. Worldwide, there is a thriving civil society, along with willing governments and NGOs truly committed to transforming education and leaving no one behind.

In conclusion

Civil society is the anchor that drives responsive and effective state action in the education sector. A united global civil society has the capacity to interrogate the deeper systemic problems that permit poor-quality systems to persist, and in turn to advocate for sustainable change. Across countries, if girls attain six years of education, their average earnings could increase by almost 9 percent; with 12 years of education, girls could gain over 40 percent, empowering them in decision-making and transforming their lives, and those around them, for good. World leaders must unite against attacks on schools and embolden efforts to protect children's education, especially in war zones. Schools must always remain safe zones for learning. The next decade requires amplified actions and renewed commitments to the shared, universal and ambitious Global Agenda, which seeks to eradicate poverty through sustainable development by 2030. Let this first-ever International Day of Education serve as an important moment in the education movement's trajectory: the shared struggle across every continent to finally ensure that all people, from all walks of life, can access lifelong learning opportunities and are equipped with the knowledge and skills required in this fast-paced, globalised world, in order to fully participate in society and contribute to sustainable development. Let this battle be one the world cannot ignore.
Let the next decade be a decisive period where institutions, civil society, and governments catch up to the real demands and education needs of today.

Author: Refaat Sabbah, President of the Global Campaign for Education and a lifelong human rights and education activist. Sabbah is the Chair of the Arab Network for Civic Education (ANHRE) and the founder of the Arab Coalition for Education for All (ACEA).
https://campaignforeducation.org/en/2019/01/24/the-world-can-no-longer-neglect-the-right-to-education/
Good writers always brainstorm creatively before writing, even when they have strict time limits. If you brainstorm and organize well, the rest of the essay will flow smoothly and easily. If you don't take the time to brainstorm and organize, your essay will flounder.

• Always set aside 6 to 8 minutes to analyze the question, brainstorm possible examples, write a thesis, and write a quick outline. Don't worry—you won't waste time. Doing these right will save you lots of time in writing the essay. The writing will flow easily once you've laid the groundwork.

• When brainstorming, turn off your internal "critic." Don't dismiss ideas right away. Think about them for a bit, and you may find that the ideas you were going to throw away are the best ones after all!

• Brainstorm on paper, not just in your head. The SAT will give you room to scribble notes. Use it. Write down thoughts, connect them, cross them out, underline them—do whatever your creative brain tells you to do.

Be Unique

Don't take the first thesis that pops into your head. Chances are that the first thesis you think of will be the same thing that pops into thousands of other heads. Instead, focus on finding a unique perspective. You can hone your perspective by first thinking of the most interesting examples.

Think of Examples Before You Make Your Thesis

Don't write your thesis until you've brainstormed several interesting examples. Since your thesis rests on your discussion of your examples, think about interesting examples first. After you have analyzed the assignment and defined your terms, ask, "What is the most interesting example I can think of that helps to answer this question?" Show off what you know and how creative a thinker you are. Think of examples from your reading, your studies, and your life. Think of examples that other students won't think of, but make sure that they are on the mark and that you can discuss them with authority.
Go Off the Beaten Path

Avoid a run-of-the-mill point of view. If you're asked, "Can a loss ever be more valuable than a victory?" try to avoid clichés such as "losing the championship game" or "getting a D on a test" unless you can analyze them with unique insights. Instead, go off the beaten path, and try to think of more interesting examples of loss, such as the Green Party's loss in the 2000 presidential election, or America's loss in the race to put a human being into space, or Captain Ahab's failure to capture Moby Dick. Make the readers notice your unique and well-informed mind. Going off the beaten path will keep you on your toes and force you to write a better essay. If you take an "easy" position, you will fall into lazy writing habits such as cliché, redundancy, and vagueness.

Practice 3: Brainstorm Your Alternatives Creatively

Brainstorming Practice

Give yourself 6 minutes for each exercise below. Use the space below each question to practice brainstorming. Write down all the words, ideas, associations, people, events, books, etc. that pertain to the issue implied by the question. Don't censor or criticize any idea; just get it down on the paper. Then, in the last few minutes, try to organize your thoughts into ideas for individual paragraphs. Try to find one idea for each of four paragraphs. (Don't write the paragraphs, though.)

1. Should safety always be first?

2. Is the pen always mightier than the sword?

Show this work to your teacher or tutor. Discuss ways of efficiently releasing your creativity and connecting to your academic knowledge.
https://schoolbag.info/sat/sat_1/62.html
- Watt's steam engine: James Watt was born on January 19, 1736, in Greenock. He worked as a mathematical-instrument maker as a teenager and soon became interested in steam engines, which were used at the time to pump water from mines. His interest really took off in 1763 when he was given a Newcomen steam engine to repair. Watt realised that he could improve the engine's efficiency by the use of a separate condenser. This made Watt's engine four times more powerful than earlier designs.
- US Independence: The Unanimous Declaration of the Thirteen United States of America is the pronouncement and founding document adopted by the Second Continental Congress, meeting at the Pennsylvania State House (later renamed Independence Hall) in Philadelphia, Pennsylvania, on July 4, 1776. The Declaration explains why the Thirteen Colonies at war with the Kingdom of Great Britain regarded themselves as thirteen independent sovereign states, no longer subject to British colonial rule.
- Period: French Revolution. The French Revolution was a period of radical political and societal change in France that began with the Estates General of 1789 and ended with the formation of the French Consulate in November 1799. Many of its ideas are considered fundamental principles of liberal democracy.
- Period: Napoleonic Empire. The First French Empire, officially the French Republic, then the French Empire after 1809, also known as Napoleonic France, was the empire ruled by Napoleon Bonaparte, who established French hegemony over much of continental Europe at the beginning of the 19th century. It lasted from 18 May 1804 to 11 April 1814 and again briefly from 20 March 1815 to 7 July 1815.
- Period: Luddism. A nineteenth-century movement against the implementation of certain technologies in manufacturing industries, driven by the fear that they might render skilled people jobless.
- Congress of Vienna: The Congress of Vienna of 1814–1815 was a series of international diplomatic meetings to discuss and agree upon a possible new layout of the European political and constitutional order after the downfall of the French Emperor Napoleon Bonaparte. Participants were representatives of all European powers and other stakeholders, chaired by Austrian statesman Klemens von Metternich, and held in Vienna from September 1814 to June 1815.
- 1820 Revolution: Revolutions during the 1820s included revolutions in Russia, Spain, Portugal, and Italy for constitutional monarchies, and for independence from Ottoman rule in Greece. Unlike the revolutionary wave of the 1830s, these tended to take place on the peripheries of Europe.
- The First Trade Unions: The history of trade unions in the United Kingdom covers British trade union organisation, activity, ideas, politics, and impact, from the early 19th century to the present.
- Stephenson's steam locomotive: Stephenson's Rocket is an early steam locomotive of 0-2-2 wheel arrangement. It was built for and won the Rainhill Trials of the Liverpool and Manchester Railway (L&MR), held in October 1829 to show that improved locomotives would be more efficient than stationary steam engines. Rocket was designed and built by Robert Stephenson in 1829 at the Forth Street Works of his company in Newcastle upon Tyne.
- 1830 Revolution: The Revolutions of 1830 were a revolutionary wave in Europe which took place in 1830. It included two "romantic nationalist" revolutions, the Belgian Revolution in the United Kingdom of the Netherlands and the July Revolution in France, along with revolutions in Congress Poland, the Italian states, Portugal, and Switzerland. It was followed eighteen years later by another and much stronger wave of revolutions, known as the Revolutions of 1848.
- Period: Unification of Italy. The unification of Italy was the 19th-century political and social movement that resulted in the consolidation of the different states of the Italian Peninsula into a single state in 1861, the Kingdom of Italy. Inspired by the rebellions of the 1820s and 1830s against the outcome of the Congress of Vienna, the unification process was precipitated by the Revolutions of 1848 and reached completion in 1871 after the Capture of Rome and its designation as the capital of the Kingdom of Italy.
- 1848 Revolution: The Revolutions of 1848, known in some countries as the Springtime of the Peoples or the Springtime of Nations, were a series of political upheavals throughout Europe starting in 1848. It remains the most widespread revolutionary wave in European history to date. The revolutions were essentially democratic and liberal in nature, with the aim of removing the old monarchical structures and creating independent nation-states, as envisioned by romantic nationalism.
- Communist Manifesto: The Communist Manifesto is a political pamphlet written by German philosophers Karl Marx and Friedrich Engels. Commissioned by the Communist League and originally published in London in 1848, the Manifesto remains one of the world's most influential political documents. It presents an analytical approach to class struggle and criticizes capitalism and the capitalist mode of production, without attempting to predict communism's potential future form.
- Period: Unification of Germany. The unification of Germany into the German Empire, a Prussian-dominated nation state with federal features, officially occurred on January 18, 1871, at the Palace of Versailles in France. Princes of most of the German-speaking states gathered there to proclaim King Wilhelm I of Prussia as German Emperor during the Franco-Prussian War.
- First International: The First International, formally the International Working Men's Association, was a federation of workers' groups that, despite ideological divisions within its ranks, had a considerable influence as a unifying force for labour in Europe during the latter part of the 19th century.
https://www.timetoast.com/timelines/history-timeline-d99b4ef4-19f3-44d9-9854-0403b834c7b2
- In a large bowl, mix whole eggs with egg whites until blended.
- Add the melted manna, olive oil, or butter, along with the salt, coconut flour, baking powder, arrowroot, and fresh herbs.
- Mix thoroughly with a whisk, then let the mixture sit for five minutes and whip again. This rest is necessary so the flour absorbs the liquid.
- Heat a flat griddle pan or crepe pan over medium-low to low heat, and let the pan fully heat before starting to ensure that it is very hot.
- Once hot, spray or brush on some oil, pour about a third of a cup of batter onto the pan, and cover with a glass lid. The lid helps retain the heat (like a mini oven) to cook the bread, and using a glass lid lets you see when to turn the flatbread.
- After 3-4 minutes, you should see the edges dry and bubbles forming in the batter. With a spatula, lift an edge and check. If golden, flip, cover, and cook the other side about 2-3 more minutes, or until slightly browned and cooked through.
- Transfer to a platter and continue cooking the remainder of the batter. (Note: I used two pans so I could cook two at a time and found the timing to be slightly different between the two pans, mostly because one pan was thicker than the other; your cooking time will vary depending on your cookware, burner type, and heat source.)
- Once cooked, these are great served warm with sandwich ingredients, folded in half like a piece of pita bread.
https://www.afamilyfeast.com/gluten-free-flat-bread/print/24446/
To an archaeologist, the soil resembles a historical document; the researcher must decipher, translate, and interpret the soil before it can help him or her understand the human past. But unlike a document, the soil of an archaeological site can be interpreted only once, in the state in which it is found. The very process of excavation destroys a site forever, making such an investigation a costly experiment that cannot be repeated. Accordingly, archaeologists conduct excavations with great care. Before an excavation begins, they survey the site meticulously and map it on a grid within a coordinate system. Researchers then reference the locations of all unearthed artifacts or features to their coordinates within the wider site. Archaeologists note unexcavated areas just as carefully, because they may be of interest to other archaeologists in the future. Many of the tools used in excavation are surprisingly familiar. Archaeologists employ common household utensils such as ladles, spoons, dustpans, and brushes to move small amounts of earth. They use flat-edged shovels to remove larger volumes of soil, and root cutters and small hand saws to extract embedded tree roots. However, no single tool is more synonymous with archaeology than the small mason's trowel. The sturdy, welded body and tough steel blade of this tool make it ideally suited for gingerly removing successive layers of soil. As an excavation progresses, it uncovers the past in both horizontal and vertical dimensions. The horizontal dimension reveals a site as it was at a fixed point in time. The vertical dimension shows the sequence of changes within a site over time. Excavation methods vary according to which dimension of the past an archaeologist chooses to study. A researcher seeking a detailed "snapshot" of a particular point in time would likely initiate a large, open-area excavation.
This technique requires archaeologists to uncover a site layer by layer until reaching the level of the desired time period. Alternatively, an archaeologist seeking to understand the progression of time at a site would probably employ a grid excavation. Under this method, workers dig evenly spaced square holes, leaving baulks (wall-like unexcavated areas) between the squares. Baulks allow archaeologists to examine a site's general stratigraphy and are later removed to reveal whatever might lie within them. Researchers use more intrusive excavation methods when a site will be obstructed or destroyed by some form of modern development, such as a shopping center. These "salvage" projects force archaeologists to race against time to find evidence. To this end, they conduct "reconnaissance" surveys (small-scale excavations) at random locations, along a predetermined site grid, or wherever they suspect they may find archaeological evidence. Researchers gather two very different sets of information during the course of any excavation. They can examine tangible findings, such as artifacts and the remains of plants, animals, and humans, well after an excavation has ended. However, excavation destroys contextual features, such as building remains, as they are uncovered. To preserve vital information about these remains, archaeologists painstakingly catalog every nuance of a site through volumes of photographs and drawings.