At the beginning of 2021, Austin Resource Recovery, a department of the City of Austin that helps manage local waste to provide resources for the local community, rolled out the final phase of its Curbside Composting Collection Program. The program has been developed over the past four years to provide a composting service to all Austin residents, and this final phase expanded the service to over 55,000 households in Austin, including those within Bowie’s region.
The program is part of a larger initiative to cut down on waste and decrease the amount of garbage in landfills around Austin. At school, Bowie follows the Austin Independent School District’s (AISD) Zero Waste Initiative, a program that works to compost and recycle in school cafeterias and has a goal of decreasing landfill waste by 90% by 2040. Junior Taru Mishra, who is a member of the JBHS Earth Club, shares her thoughts on Bowie’s implementation of AISD’s current environmental plan.
“I think Bowie needs to focus on managing its trash and waste materials by increasing the number of recycling bins,” Mishra said. “This needs to be paired with increased education of people at Bowie regarding what materials can be recycled and what cannot.”
In terms of composting at Bowie, English teacher Kimberly Wiedmeyer describes the necessity of implementing a composting program on campus.
“I see students throw away huge amounts of food constantly: three students will get single servings of fries, each eats a couple, and then they’ll throw them away,” Wiedmeyer said. “I really struggle with this attitude and mindlessness about our agricultural system and lack of understanding of how this impacts our environment. For that reason, I think implementing a compost system in Bowie would heighten awareness and create a way to beneficially re-purpose this food waste.”
Although Bowie doesn’t have a composting system yet, senior Liana Chen, who founded the JBHS Eco-Team, hopes that with this expansion of the Curbside Collection Program, there will be more support to add one on campus as well.
“I think the new Austin composting system is beneficial because it will increase the number of people who compost which reduces the city’s trash and increases support for composting,” Chen said.
Every household that utilizes the Curbside Composting Collection Program is given a green bin, similar to a trash can. Residents then collect food scraps, yard trimmings, and food-soiled paper in the bin.
“[Residents] can simply have carts that allow them to carry their compost on a regular basis,” Mishra said. “This will encourage more people to compost and reduce the amount of waste that ends up in landfills, limiting the risk to potentially pollute marine and other animal habitats.”
Although the carts cost an additional $5 per month for the service, many believe that the benefits of composting outweigh the costs. According to the United States Environmental Protection Agency (EPA), composting helps enrich soil, reduces the need for chemical fertilizers, and encourages the production of beneficial bacteria. Moreover, the carts are picked up once a week on the same day as the regular trash collection, so residents simply have to put the bin outside their homes.
“As a city, it’s more convenient for the state to provide the composting bins because it’ll increase the number of people who compost,” Chen said. “Composting doesn’t require much effort and with Austin’s new plan, it’ll be even easier.”
While the system’s goal is to decrease the amount of waste in landfills, it has yet to expand its services to all Austin residents. Although Mishra’s household isn’t included in the Curbside Composting Collection Program, she described other ways the city could get its residents to compost, even if they aren’t involved in the program.
“For those whose communities are not included in the current compost initiative, the City of Austin could maybe provide information about how to set up composting in one’s own home or area,” Mishra said. “[The city could] add posters throughout the communities to ensure that people are properly educated in what they are doing in regards to compost.”
Nevertheless, there are other ways to compost in Austin, including creating an individual composting system or dropping compost off at designated locations established by Austin Resource Recovery, such as farmers markets or community gardens.
“Food waste is one of the biggest contributors to global warming, and it’s something we all need to be more aware of,” Wiedmeyer said. “If we do throw out food, we can use composting to mitigate some of the negative impacts.”
Aside from Austin residents composting through the program, science teacher Daniel Chonis believes that the goal of composting must be expanded even further in order to help meet the goal of decreasing landfill waste.
“I would also shift the focus away from the individual being solely responsible for reducing waste and hold businesses, companies, and corporations to a higher standard in terms of waste output,” Chonis said. “These are main contributors to waste even though the blame is often put on the citizens.”
As Austin expands its Curbside Composting Collection Program in the future, Chonis reiterates the need for composting in our environment in order to keep it healthy for the years to come. | https://thedispatchonline.net/13963/news/austin-curbside-composting-program-local-expansion/ |
Evidence For Milk, Meat, And Plants In Prehistoric Kenya And Tanzania
Conny Waters – MessageToEagle.com – Scientists, led by the University of Bristol with colleagues from the University of Florida, have presented the first evidence for the diet and subsistence practices of ancient East African pastoralists.
The development of pastoralism is known to have transformed human diets and societies in grasslands worldwide. Cattle-herding has been (and still is) the dominant way of life across the vast East African grasslands for thousands of years.
Examples of potsherds analyzed. Credit: Kate Grillo
This is indicated by numerous large and highly fragmentary animal bone assemblages found at archaeological sites across the region, which demonstrate the importance of cattle, sheep, and goat to these ancient people.
Today, people in these areas, such as the Maasai and Samburu of Kenya, live off milk and milk products (and sometimes blood) from their animals, gaining 60 – 90 percent of their calories from milk.
Milk is crucial to these herders and milk shortages during droughts or dry seasons increase vulnerabilities to malnutrition and result in increased consumption of meat and marrow nutrients.
“How exciting it is to be able to use chemical techniques to extract thousands of year-old foodstuffs from pots to find out what these early East African herders were cooking,” Dr. Julie Dunne, from the University of Bristol’s School of Chemistry, who led the study, said in a press release.
“This work shows the reliance of modern-day herders, managing vast herds of cattle, on meat and milk-based products, has a very long history in the region.”
Yet we do not have any direct evidence for how long people in East Africa have been milking their cattle, how herders prepared their food or what else their diet may have consisted of.
Significantly though, we do know they have developed the C-14010 lactase persistence allele, which must have resulted from the consumption of whole milk or lactose-containing milk products. This suggests there must be a long history of reliance on milk products in the area.
The researchers examined ancient potsherds from four sites in Kenya and Tanzania, covering a 4,000-year timeframe (c. 5000 to 1200 BP) known as the Pastoral Neolithic.
Their findings, published in the journal PNAS, showed that by far the majority of the sherds yielded evidence for ruminant (cattle, sheep or goat) meat, bones, marrow and fat processing, and some cooking of plants, probably in the form of stews.
This is entirely consistent with the animal bone assemblages from the sites sampled. Across this entire time frame, potsherds preserving milk residues were present at low frequencies, but this is very similar to modern pastoralist groups, such as the heavily milk-reliant Samburu, who cook meat and bones in ceramic pots but milk their cattle into gourds and wooden bowls, which rarely preserve at archaeological sites.
In the broader sense, this work provides insights into the long-term development of pastoralist foodways in East Africa and the evolution of milk-centered husbandry systems.
The time frame of the findings of at least minor levels of milk processing provides a relatively long period (around 4,000 years) in which selection for the C-14010 lactase persistence allele may have occurred within multiple groups in eastern Africa, which supports genetic estimates. | https://www.messagetoeagle.com/evidence-for-milk-meat-and-plants-in-prehistoric-kenya-and-tanzania/ |
The course is divided into 5 interdisciplinary laboratory modules, each focused on a topic of biological relevance. The main purpose of the course is to offer the student tools to focus on the problem by using different, highly complementary techniques. The GENETICS module aims to provide expertise on the experimental approaches and bioinformatics analysis necessary to identify genetic variants associated with specific pathological conditions and to validate them. The PROTEIN ENGINEERING module aims to provide students with specific information on the principles and techniques used in protein engineering, with particular reference to the production of recombinant proteins in heterologous systems (construction and expression of foreign genes in prokaryotic and eukaryotic host cells). The BIOINFORMATICS module aims to introduce the computational methods used today to predict the effect of disease-associated variants on the structure/function of proteins. At the end of the course, the student must demonstrate the ability to use state-of-the-art computational methods to predict the effect of mutants from the sequence and structure of proteins. The EXPRESSIONAL PROTEOMICS module aims to build laboratory skills for the preparation of an experiment in differential proteomics. The experiment can be aimed at comparing a pathological sample with a control sample to identify potential biomarkers of clinical use, or at comparing a cellular sample treated or not with a drug to uncover the drug’s molecular mechanism of action. The TRANSLATIONAL GENETICS module aims to familiarize students with the embryological and molecular methodologies used to study, in zebrafish (Danio rerio) embryos, the effects of specific genetic alterations associated with human disorders.
------------------------
MM: a
------------------------
Definition of recombinant protein. Introduction to protein engineering. Acquisition of the required information (theoretical and experimental) to carry out the process of engineering of a protein function/structure. Production of recombinant proteins. Experimental approaches to study and modulate the protein functionality. Protein characterization (Site directed mutagenesis, Gel electrophoresis, Tryptophan (Trp) fluorescence, ANS Fluorescence, Limited proteolysis). Examples of application of protein engineering.
------------------------
MM: b
------------------------
The module will be developed entirely in a computer laboratory. It is based on the seminal article “Predicting the Effects of Amino Acid Substitutions on Protein Function” by Pauline C. Ng and Steven Henikoff, published in the Annual Review of Genomics and Human Genetics. The techniques reviewed in the article will be briefly introduced to the students. The students will then get hands-on with the problem by using those methods to assess the effects of mutants on human Calmodulin. The methods include:
Sequence-based methods: SIFT, PolyPhen, Panther, PSEC.
Structure-based methods: analyse the wild-type structure using the Chimera program; introduce the mutants; analyse the interactions lost/gained upon mutation; study the electrostatic potential on the surface of the protein (wild-type and mutated).
Annotation-based methods: iHop, Pfam.
------------------------
MM: c
------------------------
The expressional proteomics module includes key issues for a proteomics laboratory, for example, methods for protein quantification before a proteomic analysis, separation of proteins by two-dimensional electrophoresis, the detection of the proteomic profile by different staining (colorimetric and/or fluorescent), image acquisition of proteomic profiles, and an introduction to identification of deregulated proteins by mass spectrometry.
------------------------
MM: d
------------------------
The functional proteomics module focuses on the use of biomimetic approaches for the selective recovery of protein classes for proteomic analysis. The experimental design is: in silico design of the best epitope target in a defined protein; preparation of the biomimetic material; functional characterization of the biomimetic material; application of the biomimetic material for the selective enrichment of biological samples and 2DE analysis of the enriched fraction; in silico modelling of the protein corona.
------------------------
MM: e
------------------------
The goal of the genetics module is the identification of single-nucleotide gene variants associated with or causative of specific pathological conditions. Competences acquired will include: how to select the subject of the study, how to identify the genes associated with the disease, how to select the most appropriate target enrichment technique, how to perform the steps from sequencing to bioinformatics analysis, and how to validate the newly identified markers.
The verification of the acquisition of concepts and protocols related to the themes of the research-inspired laboratory will be through a global exam, subdivided into 10 open questions based on the 5 modules (2 questions for bioinformatics; 2 for biochemistry; 2 for expressional proteomics; 2 for genetics; and 3 for functional proteomics), to be answered in 3 hours.
All the questions aim at verifying acquisition of the knowledge of the practicals and of the related theories discussed over the course. | http://www.dbt.univr.it/?ent=oi&codiceCs=S78&codins=4S003669&cs=698&discr=&discrCd=&lang=en |
Essentials of Medical Pharmacology 8th Edition PDF:
Preface:
Essentials of Medical Pharmacology 8th Edition PDF is a unique blend of basic pharmacology, clinical pharmacology and pharmacotherapeutics. The subject is highly dynamic, with concepts and priority drugs changing rapidly. Innovations and developments are happening at an unprecedented pace. Several new molecular targets for drug action have been identified and novel drugs produced to attack them. On the other hand, a huge body of evidence has been generated to quantify the impact of various drugs and regimens on well-defined therapeutic end points, so that the practice of medicine is transforming from impression based to evidence based.
The present edition of Essentials of Medical Pharmacology 8e PDF:
The present 8th edition in PDF focuses on evidence based medicine by referring to numerous large randomized trials and other studies which have shaped current therapeutic practices. By evaluating such evidence, professional bodies, eminent health institutes, expert committees and the WHO have formulated therapeutic guidelines for treating many conditions, as well as for the use of specific drugs. The latest guidelines have been summarized and included in the present edition along with other developments and the core content.
About the Author of the PDF:
KD Tripathi MD, Ex-Director-Professor and Head of Pharmacology, Maulana Azad Medical College and Associated LN and GB Pant Hospitals, New Delhi, India.
NOTE: This sale only contains the ebook ISBN 978-9352704996 Essentials of Medical Pharmacology , 8e, in PDF. No access codes included. | https://edutextbooks.com/product/essentials-of-medical-pharmacology-8th-edition-pdf/ |
Igneous rocks are formed by the cooling and hardening of molten material called magma. Igneous rock is one of the three main rock types, the others being sedimentary and metamorphic. The word igneous comes from the Latin word ignis, meaning fire. Volcanic rocks are named after Vulcan, the Roman name for the god of fire. Intrusive rocks are also called “plutonic” rocks, named after Pluto, the Roman god of the underworld. The magma can be derived from partial melts of existing rocks in either a planet’s mantle or crust. Typically, the melting is caused by one or more of three processes: an increase in temperature, a decrease in pressure, or a change in composition.
Generally, the longer the cooling time, the larger the mineral crystals can grow. Trapped deep in the Earth, magma is allowed to cool slowly. Examples of intrusive rocks are granite and gabbro. Granite is made up of three common minerals – quartz, mica and feldspar – and when viewed from a distance appears tan, pinkish or gray, depending on the concentrations and grain sizes of the three minerals. This rock is very widely used as a building material, due to its abundance and strength. Gabbro has lower quartz content than granite, and is therefore much darker.
Igneous and metamorphic rocks make up 90–95% of the top 16 km of the Earth’s crust by volume. Igneous rocks form about 15% of the Earth’s current land surface. Most of the Earth’s oceanic crust is made of igneous rock.
Igneous rocks are classified according to their texture and mineral or chemical content. The texture of the rock is determined by the rate of cooling. The slower the cooling, the larger the crystal. Intrusive rock can take one million years or more to cool. Fast cooling results in smaller, often microscopic, grains. Some extrusive rocks solidify in the air, before they hit the ground. Sometimes the rock mass starts to cool slowly, forming larger crystals, and then finishes cooling rapidly, resulting in rocks that have crystals surrounded by a fine, grainy rock mass. This is known as a porphyritic texture.
Igneous rocks are formed from the solidification of magma, which is a hot (600 to 1,300 °C, or 1,100 to 2,400 °F) molten or partially molten rock material. The Earth is composed predominantly of a large mass of igneous rock with a very thin veneer of weathered material namely, sedimentary rock. Over 99% of Earth’s crust consists of only eight elements (oxygen, silicon, aluminum, iron, calcium, sodium, potassium, and magnesium). Most igneous rocks contain two or more minerals, which is why some rocks have more than one color.
Molten materials are found below the Earth’s crust and are normally subjected to extreme pressures and temperatures – up to 1200° Celsius. Because of the high temperatures and pressure changes, the molten materials sometimes shoot up to the surface in the form of a volcanic eruption and cool down to form volcanic or extrusive igneous rocks. Some of the molten materials may cool very slowly underneath the Earth’s surface to form plutonic or intrusive igneous rocks. It is because of the extreme heat levels and changes in pressure that igneous rocks do not contain organic matter or fossils. The molten minerals interlock and crystallize as the melt cools and form solid materials.
Igneous rocks contain mostly silicate minerals and are sometimes classified according to their silica content. Silica (SiO2) is a white or colorless mineral compound. Rocks containing a high amount of silica, usually more than 50%, are considered acidic (sometimes the term felsic is used), and those with a low amount of silica are considered basic (or mafic). Acidic rocks are light in color and basic rocks are dark in color. | http://www.assignmentpoint.com/science/geography/igneous-rock.html |
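The silica rule of thumb above can be written as a tiny classifier. This is a minimal sketch that follows the text's simple 50% threshold; geologists commonly use finer subdivisions (intermediate, ultramafic) that this sketch omits.

```python
def classify_by_silica(sio2_percent: float) -> str:
    """Classify an igneous rock by SiO2 content using the simple 50% rule."""
    if sio2_percent > 50:
        return "acidic (felsic)"  # light-colored rocks, e.g. granite
    return "basic (mafic)"        # dark-colored rocks, e.g. gabbro

print(classify_by_silica(72))  # a granite-like composition -> acidic (felsic)
print(classify_by_silica(48))  # a gabbro-like composition -> basic (mafic)
```

The example compositions (72% and 48% SiO2) are illustrative values, not measurements from the text.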
Characterization of a neutral pectin lyase produced by Oidiodendron echinulatum MTCC 1356 in solid state fermentation
- Source:
- Journal of basic microbiology 2012 v.52 no.6 pp. 713-720
- ISSN:
- 0233-111X
- Subject:
- Citrus, Crotalaria juncea, EDTA (chelating agent), activation energy, agricultural wastes, ammonium sulfate, anion exchange, denaturation, enzyme activity, enzyme inhibition, enzyme inhibitors, fungi, metal ions, pH, pectin lyase, pectins, polyacrylamide gel electrophoresis, potassium permanganate, retting, solid state fermentation, temperature, wheat bran
- Abstract:
- A neutral pectin lyase produced by a new fungal strain, Oidiodendron echinulatum MTCC 1356, under solid state fermentation using wheat bran as agro waste has been studied. The enzyme was purified by ammonium sulphate precipitation (30–60%), DEAE anion exchange and Sephadex G‐100 column chromatography. SDS‐PAGE and native PAGE revealed two bands of sizes 42 and 47 kDa. The enzyme was purified 37-fold with a specific activity of 4.5 U/mg and a 2.25% yield. The Kₘ and Vₘₐₓ values determined using citrus pectin were 1.2 mg/ml and 0.36 IU/min, respectively. The pH and temperature optima were pH 7.0 and 50 °C, respectively. The pH stability was around 5.0 for 24 h at 20 °C. The purified enzyme retained maximum activity for 30 min up to 50 °C. The activation energy for thermal denaturation of the purified enzyme was found to be 60.0 kJ/mol. The effects of various metal ions and protein inhibitors on enzyme activity revealed total inhibition of the enzyme activity in the presence of Ag⁺, Cu⁺, and KMnO₄ at 1 mM. The neutral pectin lyase showed retting of Crotalaria juncea fibre in the presence of EDTA. (© 2012 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
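The reported Kₘ (1.2 mg/ml) and Vₘₐₓ (0.36 IU/min) can be plugged into the standard Michaelis-Menten rate law for a quick sanity check. The sketch below assumes simple single-substrate kinetics and uses the units exactly as given in the abstract:

```python
# Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])
KM = 1.2     # mg/ml (citrus pectin), from the abstract
VMAX = 0.36  # IU/min, from the abstract

def velocity(substrate_mg_per_ml: float) -> float:
    """Reaction velocity (IU/min) at a given substrate concentration."""
    return VMAX * substrate_mg_per_ml / (KM + substrate_mg_per_ml)

# Standard check: at [S] = Km the velocity is exactly half of Vmax.
print(velocity(KM))    # ≈ 0.18
# At high substrate concentrations the velocity approaches Vmax.
print(velocity(12.0))  # ≈ 0.33, nearing 0.36
```

The substrate concentrations passed to `velocity` are illustrative; the enzyme may not follow ideal Michaelis-Menten behavior over the whole range.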
- Agid: | https://pubag.nal.usda.gov/catalog/204774 |
A good review of the research literature does not just describe and assess the applicable studies. It also integrates across them. The integration stage forges something new from the prior studies. This introductory lesson and the following five Integration lessons (I-2 to I-6) will introduce you to several ways of doing this.
The "new" may come from a redefinition of a research problem, modification of conceptual framework applied to the problem, identification of methodological shortcomings in a body of research, or a best guess about the "truth" of some issue from a body of research with inconsistent results. The Search and Assessment stages of a literature review provide the foundation for the Integration stage.
Portray: Paint a picture. Indicate the surface similarities and differences. Map out surface relationships. Identify the gaps.
Trace History: Describe changes over time. Identify lineage. Situate the evolving research in a larger historical context. Identify causes and consequences of the evolution.
Categorize: Sort into a taxonomy based on similarities and differences of underlying characteristics.
Summarize: Identify main commonalities and central tendencies. Generalize and simplify.
Undermine: Break up existing paradigms of thought by documenting obscure inadequacies that are widespread. Identify chaos in apparent order. Show the variation and complexity of the situation.
Synthesize: Pull together into a new whole. Explain and reconcile apparent contradictions. Redefine. Identify order in apparent chaos.
Enlightening integration rarely is achieved mechanically. It benefits from intuition, inspiration, and insight. It also requires verification and documentation. A scholar is expected to substantiate his or her integrations so that others can second-guess their validity and merit.
As with the Search and Assessment stages of a literature review, a scholar's beliefs, hopes, predispositions, and opinions can unintentionally, and sometimes intentionally, bias the results of the Integration stage, making them invalid, misleading, and dysfunctional. To avoid this, be cognizant of your biases and rigorously verify and document the resulting integrations. As a final check, ask yourself: were there any surprises in your integrations? If not, you may have proceeded with blinders.
The integration of research is a scholarly endeavor. Accordingly, your conclusions are considered of little importance without careful documentation of the evidence upon which they are based.
Research Problem: Undesirable human or social conditions, unsatisfactory findings of prior research, or inadequacies in conceptual frameworks or methodology of prior research.
Conceptual Framework: The lens through which the problem is viewed-the worldviews, theories, perspectives, and/or constructs of prior research studies.
Methodology: The research design, sampling, data collection, and analysis of prior research studies.
Interventions: The experimental treatments, the innovative practices being demonstrated, and the naturally occurring events whose impacts are of interest.
Findings: The results, conclusions, interpretations, predictions, and recommendations of prior research studies.
Some reviews of research seek to integrate in only one of the five domains. Reviews in preparation for major research studies, including dissertation research, should usually undertake some integration in all five areas, thus providing the best basis for new research that is likely to make a real contribution to knowledge.
Although the steps in research normally begin with a research problem, followed in order by a conceptual framework, methodology, and findings, in the following lessons, integration of research problems will be discussed last because it usually draws heavily on integrations of the other domains. | https://www2.gwu.edu/~litrev/i01.html |
Engineers Without Borders Denmark is a technical humanitarian organization that, through the vision “Building a better tomorrow”, seeks to re-establish life-preserving measures for people in need. We provide water and housing for people fleeing from war – and electricity, so that doctors and nurses can save victims in emergency areas. Our engagement lasts beyond the end of the disaster, when more sustainable solutions are needed. Our work covers WASH, food security, participatory management of the environment, sustainable energy, and technical assistance in humanitarian crises. EWB-DK was established in 2001 and is part of Engineers Without Borders International. EWB-DK offers 15 years of experience in the deployment of highly qualified, technically skilled persons – both juniors and seniors.
Engineers Without Borders Sierra Leone is a non-profit, non-governmental, non-political, non-denominational voluntary development humanitarian engineering organization established in 2005 to partner with developing communities within, and possibly beyond, Sierra Leone in order to improve their quality of life, including recovery and resilience.
Dan Church Aid Nepal has been supporting development work in Nepal since the 1980s. DCA Nepal works mainly to improve the lives and livelihoods of the most marginalized communities through a rights-based approach. Focus areas are: inclusive and accountable governance, resilient livelihoods and sustainable food security, migrants’ rights, humanitarian response, and disaster risk reduction. DCA Nepal, already certified as a hosting organisation under the EU Aid Volunteers Initiative, will seek to share knowledge and experience with the other members of the Consortium. Cross learning from DCA will form part of the methodology used in the training and, of course, in the mapping of the individual capacity and experience of each partner organisation within the Consortium.
Build up Nepal is dedicated to rebuilding and fighting poverty in rural villages. It was founded by a group of organizations and professionals with long experience in engineering, social business, management and rural development. Build up Nepal will seek to become a hosting organisation in the consortium. Its main role will be as a recipient beneficiary of project training and capacity building. Located in the vicinity of DCA Nepal, it is envisioned that strong local cross learning can be fostered.
Mavuno Improvement for Community Relief and Services has as its main objective improving the quality of life in rural areas of the Karagwe and Kyerwa districts, Tanzania. The role of Mavuno in the project is to become a hosting organisation and, through hosting volunteers, to build local capacity to continue and improve its work in humanitarian aid. Its main role will be as a recipient beneficiary of project training and capacity building.
Emergency Architecture & Human Rights’ main objectives are to produce local, community-centred buildings and construction processes that utilize local materials and a local workforce/volunteers in a collaborative, innovative and inclusive process. EA&HR has a volunteer base which provides technical services to poor and vulnerable communities and third-country partners. Its main role will be as a recipient beneficiary of project training and capacity building.
Engineers Without Borders Norway does not run its own aid assistance projects, but assists other aid organizations with specific assignments. It builds aid-assistance expertise among its volunteer engineers, whom it matches with the needs of the aid organizations in their projects. EWB-NO will be an associated partner and will seek to build strong ties under the consortium to other sending and hosting organisations. Its main role will be knowledge sharing from its extensive experience with the placement of volunteers in third-country partner organisations.
Engineers Without Borders Sweden aims to support sustainable development in close cooperation with local partners through our volunteer projects abroad, built on engineering expertise and knowledge. Specifically, we focus on pre-graduate education and gender equality, clean water and sanitation, and affordable clean energy.
For further information about EWB-DK's EU Aid project, please contact: | https://iug.dk/en/consortia-members |
Located in upstate New York, the Syracuse City School District (SCSD) is an urban school district that serves over 20,000 students across 34 schools, making it one of New York State’s “Big 5” districts. With one of the highest refugee intake rates in the country, the student population in Syracuse is extremely diverse – 76 different languages are spoken by 3,500 students from 76 different countries. In 2014-15, the last year for which data are available, 15,500 SCSD students were classified by New York State as “economically disadvantaged” as the city struggled with high poverty rates among families.
After analyzing data from the 2012-13 school year showing high suspension rates among certain student populations, SCSD began district-wide work emphasizing restorative practices for discipline as well as training on culturally responsive pedagogy. By the 2015-16 school year, suspensions had been cut in half.
SCSD has also struggled with low proficiency levels in math and ELA, with teachers needing to account for a wide variation in background knowledge and essential skills among students.
To help focus on individual students’ needs, SCSD worked at the district level to rethink social-emotional learning, how the district uses data, and how its curriculum and pedagogy were preparing students for college and career. However, many staff and school leaders felt the tension of a wide variety of initiatives as they worked to keep focus on a larger, cohesive vision.
At the district level, Education Elements first set out to help create a visual roadmap of SCSD’s key priorities and the role that personalized learning would play in supporting them. By working to align initiatives at the district level, EE was able to gain a sound understanding of the key priorities of the district, such as restorative justice and culturally responsive teaching, and how blended, personalized instructional models could enhance these practices.
Ed Elements then worked with teams of school leaders and teachers to set their own visions for PL and connect aspirational goals with key problems of practice. Each school created a design plan with PL models that has served as a guiding document for reflection during the school year.
Ed Elements has helped facilitate improved collaboration between district staff and school staff through learning walks that occur throughout the district. Schools have begun to host their peers and open their doors for critical feedback and reflection, creating a more open and responsive culture as reflections and data collected during the walks have helped the district continue to set priorities for support.
“Working with Education Elements helped a lot because they were immediately able to create a personalized three-year plan for our district.”
| https://www.edelements.com/personalized-learning-at-syracuse-city-school-district-ny |
As most HR professionals know, document retention for employee-related records—such as personnel files, payroll information, benefits records, and background checks—is a particularly complicated process, required by law, with variations from country to country. Complicating the process further, each document in each country has its own individual retention requirements, and the financial penalties for noncompliance can be significant. A carefully designed and implemented HR record retention policy is a necessary step to support an employer’s robust compliance program.
While disposing of too many records can increase a company's legal exposure, disposing of too few records may also increase legal exposure as well as the cost of storage. Employers must identify which records should be retained, how long records should be retained and the different formats in which records may be stored. Employers must also determine how to ensure internal HR record retention policies comply with all applicable regulations and local laws.
Keeping HR records through a robust document retention policy may be useful to employers for various reasons, including (a) maintaining the corporate memory of the company; (b) satisfying legal or regulatory requirements; (c) preserving documents with an enduring business value to the company; and (d) protecting the company against the risks of litigation and the need to preserve evidence and comply with disclosure obligations as necessary.
However, a balance must often be struck between keeping documents for a sufficiently long period of time, so as to meet an employer’s legitimate business objectives, and not keeping those documents unnecessarily, which could give rise to a breach of data protection laws or otherwise create unnecessary risk.
Most countries have minimum and maximum retention periods for certain HR records. Even if there is no statutory minimum retention period for a certain category of records in a particular country, it is often recommended to retain records until the expiration of the relevant time limits for bringing legal actions or regulatory investigations (statutes of limitations).
In addition to maintaining minimum retention periods, some countries also have maximum retention periods. A record’s survival must often be limited so as to safeguard the privacy of persons whose personal data is contained in that record. In particular, records must be kept for no longer than is necessary for achieving the purposes for which the records were collected or subsequently used. After the maximum retention periods have expired, the documents should be either permanently deleted or anonymized (i.e., all references to data subjects should be redacted so that it is no longer possible to identify those persons).
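The minimum/maximum logic described above amounts to a simple decision rule per record category: retain until the statutory minimum (or limitation period) has run, optionally dispose after that, and delete or anonymize once the maximum is exceeded. The sketch below illustrates the rule in Python; the categories and retention periods are invented placeholders, not actual statutory values, and nothing here is legal advice.

```python
from datetime import date

# Illustrative retention windows in years per record category.
# These numbers are placeholders -- actual periods vary by country and law.
RETENTION_RULES = {
    "payroll": {"min_years": 5, "max_years": 10},
    "application_unsuccessful": {"min_years": 0, "max_years": 1},
}

def retention_action(category: str, created: date, today: date) -> str:
    """Return 'retain', 'may_dispose', or 'delete_or_anonymize'."""
    rule = RETENTION_RULES[category]
    age_years = (today - created).days / 365.25
    if age_years < rule["min_years"]:
        return "retain"                   # statutory minimum not yet met
    if age_years < rule["max_years"]:
        return "may_dispose"              # minimum met; disposal is allowed
    return "delete_or_anonymize"          # maximum exceeded (storage limitation)

print(retention_action("payroll", date(2015, 1, 1), date(2024, 1, 1)))
```

A real policy engine would load the rules per jurisdiction and per document type, and would also suspend deletion when a legal hold applies.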
Under the European General Data Protection Regulation (GDPR), Human Resources departments have an obligation to limit storing personal employee and applicant data. One way to demonstrate this is by having a clear and well documented retention policy that limits retention periods to what is legally or contractually required.
Multiple laws, decisions, and even everyday practices apply when assessing the retention period of a document. The Romanian Labor Code and other labor legislation currently offer limited guidance on retaining employment-related documents electronically, with a few exceptions. For example, documents that were concluded in an electronic format and bear an electronic signature meeting government requirements can be retained electronically. Common best practice is to store employment-related documents in both electronic and paper form in the event a judge or authority specifically requests the original (excluding documents that were concluded electronically with an electronic signature, according to the law).
Romanian law requires certain documents to be retained in paper format, including:
If a document has been signed in wet-ink, government authorities have the right to request the original version of the document.
Note that certain documents must be retained on the company’s premises, including various corporate records (e.g., the shareholders register must be retained at the company’s headquarters/site). Employers are obligated to retain a copy of the individual employment agreement at the workplace (in paper or electronic format) for employees who perform activity at that workplace (Labor Code).
In addition, Government Decision no. 905/2017 requires that employee personnel files must be retained at the employer’s headquarters or at a secondary headquarters (if the hiring of employees is performed at the secondary headquarters). The personnel file includes documents necessary for employment such as the individual employment contract, additional documents and other documents related to modification, suspension and termination of individual employment contracts, study documents / qualification certificates, and any other document certifying the legality and correctness of the completion in the register.
The personnel file is not required to be retained in a specific format, but conservative employers retain at least some of the documents in paper format.
Law no. 319/2006 on health & safety at work (and associated regulations) requires that individual training forms be retained throughout the employment relationship, accompanied by a copy of the last aptitude chart issued by the occupational doctor, and that work accident investigation files be retained in the employer’s archive. Common practice is to retain these health and safety related documents in paper form. Recent legal amendments allow employers to retain proof of training electronically with a simple, advanced, or qualified electronic signature.
Note that in the event of a labor audit, inspectors can generally request any employment related document. Therefore, employment related documents should remain accessible to be given to an inspector, if requested.
| https://www.hrcomplianceassist.com/romania/legal-framework |
What is a ransomware attack?
Ransomware is software designed to lock or encrypt an organization’s system or data. Ransomware typically spreads through sophisticated “phishing emails,” which trick users into interacting with infected messages, and/or through server or software vulnerabilities that require no user interaction. Once a system is compromised, the ransomware encrypts the system or the information on it and demands that users pay a ransom by a specified deadline in exchange for renewed access to the system and/or data.
Why all the fuss?
Ransomware creates real and significant risks to organizations. These risks were recently demonstrated by a global ransomware attack which infected and shut down hundreds of thousands of computers across the world.
The attack was caused by ransomware commonly known as “WannaCry” which, among other things, infected systems through malicious email attachments, and then spread through network drives by exploiting a Windows vulnerability. Once systems were infected, WannaCry encrypted data and demanded a ransom be paid by a specified deadline.
If having an organization’s system or data encrypted for a ransom is not troublesome enough, there is a real risk that paying the ransom will not remove the ransomware, and/or that the attack will be repeated on an infected system or data.
Further, even if data is recovered and further attacks are thwarted, the negative impact of a cyberattack on an organization’s assets, operations, reputation and relations, and the associated financial loss, regulatory consequences and potential liability, can be devastating.
How can organizations prepare?
Fortunately, there are a number of steps that organizations can take to minimize the chance of, and mitigate the risks associated with, a successful ransomware attack. In particular, organizations should take the following 10 steps to prepare for a ransomware attack:
- Assess and Address the Risks: The world of cyber security moves very fast, and organizations should identify and assess potential cyber security risks to and gaps in their IT systems on an ongoing basis, including by assessing what and where their most valuable information is, and then by appropriately addressing risks to that information.
- Implement Safeguards: There are a number of technical and operational safeguards that organizations can implement including, among other things, keeping operating systems and software up-to-date, installing security patches and updates as soon as they are available, installing appropriate firewalls and malware protection, incorporating appropriate administrative access controls, and implementing appropriate policies and procedures including monitoring, intrusion detection, white-hat hacking and audits.
- Make a Plan: Organizations can substantially decrease the negative consequences of a ransomware attack by preparing and regularly reviewing appropriate and customized incident response and business continuity plans that assist organizations to take appropriate steps in response to such attacks in a timely manner.
- Make a Back-Up Plan: Organizations should ensure appropriate back-ups are made of critical information, including back-ups which are performed at regular intervals and which involve the storage of information at a location not accessible by a ransomware attack.
- Do Your Due Diligence: Organizations should ensure appropriate due diligence is conducted on – and ensure that appropriate contractual protections are in place with – service providers that have access to the organization’s IT systems.
- Inform Your Users: A critical step in preparing for ransomware attacks is to implement training and awareness programs so that users are informed about cyber security risks, do not subject an organization’s IT systems and data to unnecessary risks, and appropriately respond to attacks.
- Get Insurance: There are a number of insurance options available to organizations to provide some financial protection against the various risks and liabilities associated with ransomware attacks.
- Get the Right Help at the Right Time: In addition to obtaining executive buy-in and working with internal security, IT and legal teams, there are a range of external advisers, consultants, investigators, coaches and products available to help organizations preparing for or responding to a ransomware attack.
- Respond Appropriately: If (or, as some experts say, “when”) a ransomware attack happens, it is important for organizations to follow the plans that have been put in place. It will also be important for organizations to consider and meet any mandatory breach reporting and record keeping obligations.
- Be Ready for Litigation: There are various steps organizations can take to ensure that appropriate legal privileges are engaged, particularly during the investigation of a ransomware attack, to assist the organization in the event that the ransomware attack leads to litigation.
These steps should be incorporated by organizations into a customized cyber security program, which should then be reviewed, tested and updated on an ongoing basis to appropriately reflect the changing threat landscape. Organizations may wish to work with experienced legal counsel to assist them with any of the foregoing steps.
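As a small illustration of step 4 (the back-up plan), part of backup readiness can be automated: a periodic job can check that every critical backup set is recent enough to meet the organization's recovery point objective and flag any that are stale. The sketch below is hypothetical; the backup names, timestamps, and 24-hour threshold are assumptions, and a real check would query the backup system's catalog or an off-site object store listing.

```python
from datetime import datetime, timedelta

# Recovery point objective (RPO): how old a backup may be. An assumed value.
MAX_AGE = timedelta(hours=24)

# Hypothetical backup catalog entries (name + last successful completion time).
backups = [
    {"name": "hr-db",      "completed": datetime(2024, 3, 1, 2, 0)},
    {"name": "file-share", "completed": datetime(2024, 2, 27, 2, 0)},
]

def stale_backups(backups, now):
    """Return the names of backup sets older than the allowed recovery point."""
    return [b["name"] for b in backups if now - b["completed"] > MAX_AGE]

now = datetime(2024, 3, 1, 12, 0)
print(stale_backups(backups, now))  # ['file-share']
```

Flagged sets would feed an alert to the incident response team, so that gaps in backup coverage are discovered before an attack rather than during one.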
Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances such that these materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation. | https://www.mltaikins.com/innovation-data-technology/technology/10-steps-prepare-organization-ransomware-attack/ |
GLYNET-PM 2 is a combination of three medicines that helps to control blood sugar levels. This medicine is used with diet and exercise to improve blood sugar control in adults with type 2 diabetes mellitus. It helps in the proper utilization of insulin, thereby lowering blood sugar levels.
GLYNET-PM 2 Tablet should be taken in the dosage and duration suggested by your doctor. It must be taken with a meal to prevent stomach upset. If you miss a dose, take it as soon as possible. However, if it's almost time for your next dose, skip the missed dose and go back to your regular schedule. Don't double the dose. Overdose might lead to low blood sugar (hypoglycemia).
Some people may develop hypoglycemia (low blood sugar level) when this medicine is taken along with other antidiabetic medicines or alcohol, or when a meal is skipped. Monitor your blood sugar levels regularly while taking it. Other common side effects of this medicine include nausea, taste changes, diarrhea, stomach pain, headache, edema (swelling), blurred vision, bone fracture, and respiratory tract infection.
USES
- To treat type 2 diabetes mellitus
- To lower blood sugar levels in adults with type 2 diabetes
- To control the level of sugar in your blood by helping your body make better use of the insulin
COMPOSITION
Glimepiride 2mg + Pioglitazone Hydrochloride 15mg + Metformin Hydrochloride 500mg (As Extended Release)
HOW IT WORKS
GLYNET-PM 2 is a combination of antidiabetic medicines: Glimepiride, Metformin, and Pioglitazone. They work by different mechanisms to provide better control of blood sugar when single or double therapy is not effective.
Glimepiride: Glimepiride lowers blood sugar by stimulating the pancreas to release insulin (a natural substance needed to break down sugar in the body) and by helping the body use insulin efficiently. This medicine will only help lower blood sugar in people whose bodies produce insulin naturally. With type 2 diabetes, the body doesn’t produce enough insulin, or it can’t properly use the insulin that it makes, so sugar remains in the bloodstream and causes high blood sugar levels (hyperglycemia).
Pioglitazone: Pioglitazone belongs to a set of diabetes medicines called thiazolidinediones (or glitazones). It helps control blood sugar levels by improving how your body uses a hormone called insulin. It does this by assisting your cells to become more sensitive to the insulin your body makes.
Metformin: Metformin is a medicine that helps to treat high blood sugar caused by type 2 diabetes. Metformin is used alongside diet and exercise to improve blood sugar control in adults with type 2 diabetes mellitus. It works by reducing the amount of glucose (sugar) made by your liver, decreasing the amount of glucose your body absorbs, and increasing the effect of insulin on your body.
| https://zeelabpharmacy.com/product/glynet-pm-2 |
Scientists use climate models to understand complex earth systems. These models allow them to test hypotheses and draw conclusions on past and future climate systems. This can help them determine whether abnormal weather events or storms are a result of changes in climate or just part of the routine climate variation.
Why is it important to model the climate?
Climate models are important tools for improving our understanding and predictability of climate behavior on seasonal, annual, decadal, and centennial time scales. Models investigate the degree to which observed climate changes may be due to natural variability, human activity, or a combination of both.
Why are climate models important to climate scientists?
Scientists use climate models to predict how the climate might change in the future, especially as human actions, like adding greenhouse gases to the atmosphere, change the basic conditions of our planet.
Are models necessary to understand climate?
(No. The basic cause of Earth’s warming is understood without models, but the interactions are complex enough that models help in trying to fully understand all of the relationships between the components in Earth’s climate system.)
How do scientists use climate models?
To predict future climate, scientists use computer programs called climate models to understand how our planet is changing. Climate models work like a laboratory in a computer. They allow scientists to study how different factors interact to influence a region’s climate.
What do you understand by climate models?
A climate model is a computer simulation of the Earth’s climate system, including the atmosphere, ocean, land and ice. They can be used to recreate the past climate or predict the future climate.
What is climate system model?
The current norm for a climate system model is to include a full suite of physical representations for air, water, land, and ice with a geographic resolution scale of typically about 250 km. … At present, climate system models specify solar luminosity, atmospheric composition, and other agents of radiative forcing.
How can you and scientists tell that a model is good?
Scientists tell good models from bad ones by statistical methods that are hard to communicate without equations. These methods depend on the type of model, the amount of data and the field of research.
What is an important conclusion of climate prediction models?
Explanation: The conclusion is that human activity alters the climate system, raising the global average near-ground temperature.
What are climate models and how accurate are they?
Climate models are sets of mathematical equations based on the laws of physics, chemistry, and biology that are used to understand Earth’s changing climate. Because climate cannot be studied in a laboratory, these models are run on computers to make projections of how the climate is changing.
Why do we use models in our everyday lives?
A model is any simplification, substitute or stand-in for what you are actually studying or trying to predict. Models are used because they are convenient substitutes, the way that a recipe is a convenient aid in cooking.
Why do scientists use models to study changes to the earth’s surface?
Models are used to explain every scientific phenomena imaginable. For example, our models of the earth’s atmosphere explain what energy enters and leaves the earth. Having a scientific model of the atmosphere tells us that if we change the gases in it, we will get different results.
How well must climate models agree with observations?
(i) Climate models can only meaningfully be evaluated relative to a specific purpose. For such an evaluation, the suitability of any given metric for that specific purpose needs to be demonstrated. (ii) Any mismatch between observation and model simulation might simply be caused by chaotic internal variability.
How do you make a climate model?
Constructing a climate model involves a number of steps:
- Building a 3-D map of the Earth’s climate system. The basic building blocks of climate models are 3-D “grid cells” that contain climate-related physical information about a particular location. …
- Developing computer code. …
- Making the model run through time.
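To make the three steps concrete, here is a deliberately tiny illustration in Python: a zero-dimensional energy-balance model that treats the whole Earth as a single "grid cell," encodes one piece of physics (absorbed sunlight versus emitted infrared), and steps it through time. The constants are standard textbook values, and the effective emissivity crudely stands in for the greenhouse effect; real climate models use millions of 3-D cells and far richer physics.

```python
# Zero-dimensional energy-balance model: one "grid cell" for the whole Earth.
# dT/dt = (absorbed solar - emitted infrared) / heat capacity
SOLAR = 1361.0        # W/m^2, solar constant
ALBEDO = 0.3          # fraction of sunlight reflected back to space
SIGMA = 5.67e-8       # W/m^2/K^4, Stefan-Boltzmann constant
EMISSIVITY = 0.612    # effective emissivity; crude stand-in for the greenhouse effect
HEAT_CAP = 4.0e8      # J/m^2/K, effective heat capacity of the ocean mixed layer

def step(temp_k: float, dt_seconds: float) -> float:
    """Advance the global mean temperature by one time step."""
    absorbed = SOLAR / 4 * (1 - ALBEDO)          # /4 spreads sunlight over the sphere
    emitted = EMISSIVITY * SIGMA * temp_k ** 4   # outgoing long-wave radiation
    return temp_k + dt_seconds * (absorbed - emitted) / HEAT_CAP

temp = 273.0                  # start at 0 degrees C
dt = 86400.0                  # one-day time step
for _ in range(365 * 50):     # run 50 model years
    temp = step(temp, dt)
print(round(temp, 1))         # settles near Earth's observed ~288 K
```

Changing ALBEDO or EMISSIVITY and re-running shows, in miniature, how modelers use such simulations as "a laboratory in a computer."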
What is an important conclusion of climate prediction models?
An important conclusion of climate prediction models is that the increase in global temperature will continue but can be reduced.
What do climate models predict will happen globally to weather over the next 100 years?
Climate models predict global warming will increase temperatures at every location on Earth. | https://fdgkinetic.com/ecology/your-question-how-can-models-help-us-study-climate.html |
This blog post is part 3 in the continuation of our Identifying the Factors for Successfully Managing Supply Chain Risks research post. Our recent study to better understand supply chain risks focused on the structure, implementation, and maintenance of a formal system for managing risks in the supply chain. These factors included: Corporate Strategy, Supply Chain Organization, Process Management, Performance Metrics, and Information & Technology. We will now further explore Factor 3 Process Management.
Factor 3:
Process Management
This study showed that documenting the likelihood & impact of risks was not a key part of SCM and that supply chain risk information was not readily available to key decision-makers. Furthermore, very few of the firms are actually able to exploit risk to an advantage by taking calculated risks in the supply chain, and even fewer were prepared to minimize the effects of disruptions. These questions were asked on a 1 to 7 scale (strongly disagree to strongly agree): 1) A key part of our supply chain management is documenting the likelihood & impact of risks (mean=4.20, var.=2.86); and 2) Supply chain risk information is accurate and readily available to key decision-makers (mean=3.87, var.=2.78). There was some debate as to the validity and usefulness of tools to operationalize the process. The managers tended to prefer approaches which combine subjective and objective measures, because this allows them some freedom rather than being pushed into taking decisions solely on complicated numerical analysis. Failure Mode Effects and Analysis (FMEA) is a mainstream tool used to collect information related to risk management decisions for most companies in an engineering capacity, but not in a supply chain capacity. There were several documented procedures to complete an FMEA across industries in this study, especially in automotive. Most managers supported a modified version of the tool that could be used to help evaluate the risk of SCM decisions.
Several of the firms used financial reports and questionnaires during supplier approval to compare supply candidates to the business requirements of the buyers or project teams. When justified by a perceived level of risk, a few of the firms went one step further and had candidate comparison matrices (e.g., supplier profiling form and Supply Chain PFMEA). Additionally, most had formal processes for supplier visits (e.g., Rapid Plant Assessment, site verification of the supplier questionnaire, etc.). Some firms actually used life cycle management with supplier report cards, and their buyers would conduct periodic supply chain reviews. In one firm, sourcing was assigned risk ownership and used FMEA principles to evaluate risk impact. For each risk, they would assess what the financial impact would be in the event of a disruption. They then assigned a probability to each risk area and prioritized by multiplying the financial impact by the risk probability. Again, most firms are only using existing SCM applications for managing risk, with no formal risk management system in place. In the absence of risk management applications, these firms are building risk considerations into traditional SCM applications.
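The prioritization scheme that one firm described (assess the financial impact of each risk, assign a probability, and rank by their product) can be sketched in a few lines of Python. The risk entries below are hypothetical illustrations, not data from the study.

```python
# FMEA-style supply chain risk prioritization: expected loss = impact x probability.
# The risks listed here are hypothetical examples.
risks = [
    {"name": "sole-source supplier failure", "impact_usd": 2_000_000, "probability": 0.05},
    {"name": "port congestion delay",        "impact_usd":   300_000, "probability": 0.40},
    {"name": "raw material price spike",     "impact_usd":   500_000, "probability": 0.20},
]

def prioritize(risks):
    """Rank risks by expected financial loss, highest first."""
    for r in risks:
        r["expected_loss"] = r["impact_usd"] * r["probability"]
    return sorted(risks, key=lambda r: r["expected_loss"], reverse=True)

for r in prioritize(risks):
    # Highest expected loss prints first.
    print(f'{r["name"]}: ${r["expected_loss"]:,.0f}')
```

Note how a frequent moderate-impact event can outrank a rare catastrophic one, which is exactly the trade-off the financial-impact-times-probability ranking is meant to surface.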
Managing supply chain risks should occur at all levels of the supply chain, and the process should support integration with supplier and customer risk management activities. The process should be active in all stages of the acquisition life cycle, starting with technology development and continuing through acquisition, production, maintenance, repair, and disposal. The scope of the process should include all types of risks appropriate for the supply chain. In addition to the common causes of disruption, risk identification should consider economic, political, environmental, regulatory, manufacturing readiness, and technological obsolescence issues. All levels of management should be actively engaged in risk management, including strategic, business, program, technical, and tactical levels. The process should both leverage common tools for assessing risk, but also develop specific SCM mitigation tools and solutions.
A method for analyzing supply chain risk must be a cross-functional process that involves senior management as well as key stakeholders from finance, operations, internal audit, and risk management. However, the companies in this study have not adopted this boundary-spanning process; instead, they have managed risks within functional areas. Still, it was acknowledged that the most effective forms of risk management demand involvement across multiple areas of the organization.
The process begins with an assessment of the supply chain. This can usually be done with internal resources but might require the assistance of outside consultants. In either case, it was agreed that this assessment would take the most effort. While generally lacking among firms, this study indicates the importance of having a process that will allow an organization to analyze, prioritize, and measure the economic impact of risks in the supply chain. Such a process should provide decision makers with financially justified value propositions for initiatives that are aligned with the company’s strategic goals. Though a number of different risk management processes have been put forward, most tend to follow the generic process offered in this study with the following key elements:
- SCM Risk Planning
- Identification
- Analysis
- Handling
- Monitoring
Understanding the risks within a supply chain requires an in-depth knowledge of business operations. To develop this understanding, the company must begin with interviews and workshops typically involving a cross-functional team of subject matter experts representing sourcing, manufacturing, and logistics. The company must collect its financial and risk performance data (e.g., average lead times, safety stock levels, other inventory levels, etc.) and benchmark it against industry and functional comparisons. This process enables the organization to develop a detailed picture of its supply chain, which in turn helps it identify potential risks more easily. A few managers took the view that effective supply chain risk management does not need to be a highly formalized and structured process. However, this study favors a more formal, structured process for managing risk.
We hope Part 3 of the research study helps highlight the Process Management Factor for Successfully Managing Supply Chain Risks. We will be exploring our Factor 4 Performance Metrics study results next month. | https://www.industrystarsolutions.com/blog/2016/01/successfully-managing-supply-chain-risks-factor-3-process-management-part-3-of-5/ |
The Good News: Residential burglaries are down 2 percent in Atlanta so far this year. The Bad News: That small dip follows a 65 percent spike during the three previous years. So reports Terminal Station, an Atlanta real estate blog that has started picking apart Atlanta’s crime statistics. An excellent, if somber, analysis that advances discussion of the issue beyond the spin from City Hall. | https://atlantaunfiltered.com/tag/citizens/ |
Modern-Day Plague
Deforestation is clearing Earth's forests on a massive scale, often resulting in damage to the quality of the land. Forests still cover about 30 percent of the world’s land area, but swaths the size of Panama are lost each and every year. The world’s rain forests could completely vanish in a hundred years at the current rate of deforestation. Forests are cut down for many reasons, but most of them are related to money or to people’s need to provide for their families. The biggest driver of deforestation is agriculture. Farmers cut forests to provide more room for planting crops or grazing livestock. Logging operations, which provide the world’s wood and paper products, also cut countless trees each year. Not all deforestation is intentional. Deforestation has many negative effects on the environment, and it also drives climate change. Trees play a critical role in absorbing the greenhouse gases that fuel global warming.
Kids.Mongabay.com: Deforestation
Deforestation refers to the cutting, clearing, and removal of rainforest or related ecosystems into less bio-diverse ecosystems such as pasture, cropland, or plantations (Kricher, 1997). What are the causes of deforestation?
Deforestation in Borneo.
Largest rainforests worldwide, listed in descending order (from largest to smallest): the Amazon basin of South America, the Congo river basin of Central Africa, and S.E. Asia. Did you know that tropical rainforests, which cover 6-7% of the earth's surface, contain over half of all the plant and animal species in the world!
Overview of deforestation around the world: between 1960 and 1990, most of the deforestation occurred globally, with an increasing trend every decade. Brazil has the highest annual rate of deforestation today. The Atlantic coast of Brazil has lost 90-95% of its rainforest. Central America has 50% of its rainforests; South America has 70% of its rainforests. Rainforest is currently being lost at roughly 149 acres (60 hectares) per minute.
Destructive logging in Malaysia.
ESchoolToday: Forest Preservation
Introduction to forest preservation Environmental activists consider forests as one of the top 5 natural resources on earth. This is rightly so, and today, we shall look at how wonderful our forests are to us, and why we should immediately stop their destruction. There is more to forests than just a massive collection of trees. A forest is a natural, complex ecosystem, made up of a wide variety of trees that support a massive range of life forms. Quite apart from trees, forests also include the soils that support the trees, the water bodies that run through them and even the atmosphere (air) around them. Forests come in many sizes and forms. It is estimated that two-thirds of the world's forest is currently distributed among 10 countries: (REF: IUCN, International Union for Conservation of Nature) Forests are hugely important for life on earth.
Voices of Youth: Deforestation
Deforestation is when humans remove forest areas and the land is turned to something other than a forest. The video above, provided by NASA, shows a place in Africa where the forests have been cleared and you can see the dirt coming through. The National Geographic website says "Forests still cover about 30 percent of the world’s land area, but swaths the size of Panama are lost each and every year". The effects of deforestation are many.
WWF: Deforestation Main Page
Forests cover 31% of the land area on our planet. They produce vital oxygen and provide homes for people and wildlife. Many of the world’s most threatened and endangered animals live in forests, and 1.6 billion people rely on benefits forests offer, including food, fresh water, clothing, traditional medicine and shelter. But forests around the world are under threat from deforestation, jeopardizing these benefits. Forests play a critical role in mitigating climate change because they act as a carbon sink—soaking up carbon dioxide that would otherwise be free in the atmosphere and contribute to ongoing changes in climate patterns. Deforestation is a particular concern in tropical rainforests because these forests are home to much of the world’s biodiversity. WWF has been working to protect forests for more than 50 years. | http://www.pearltrees.com/u/77596264-globalisation-environmental |
The United States Conference of Mayors and the Kresge Foundation recently published The American Rescue Plan Act: Promoting Equity through ARPA Implementation, a comprehensive report that evaluates how different American cities utilized their ARPA funding to further equity goals. The report specifically focused on how cities:
- Identified underserved areas and systemic inequities, then leveraged federal aid to address these imbalances.
- Included both non-profits and the private sector as stakeholders in the deployment of federal funds.
- Incorporated a wider set of public finance, community investment, and social investment strategies with ARPA funding to sustain continuous equity work.
The report reviews 20 different cities, highlighting the equity programs, short- and long-term impacts, external resources and stakeholders, and lessons learned from each. The report also outlines several national principles to guide local ARPA strategies based on the city case studies. This summary shares examples from the Conference of Mayors and Kresge report while also showcasing how other cities have done this work recently, both before and since the influx of federal funding. These examples demonstrate the viability of these principles and supplement the exemplars highlighted in the Promoting Equity through ARPA Implementation report.
Summary of Common Principles Found across Cities
1. A clear definition of equity, which is operationalized as both an outcome and a process, is a vital component of an ARPA strategy that seeks to address historic and systemic inequities for underserved populations.
As more and more cities elevate their equity work, create equity offices, and hire chief equity officers, local leaders are developing a common definition of equity for their cities. The report details how cities like Boston are using their ARPA funding to provide services and resources to historically excluded populations, while in Chicago local leaders are co-creating solutions with those most impacted by the problems they are seeking to solve. Chicago has been a leader in this work for some time, as shown in this interview with Candace Moore, the city’s chief equity officer. In her interview with Data-Smart, Moore explained that she thinks “equity work is innovation work,” since it requires “rethinking how systems and processes work, and we want to rethink that in alignment with community experiences and perspectives.”
Likewise, the city of Long Beach, California has not only adopted a guiding definition of equity but is threading it through all their civic work. As the city dispenses recovery funds it is not only thinking about equity in who it goes to, but how. The city has renovated its procurement system to ensure that funding gets to community members through a transparent bidding process and diverse contractors.
2. Using a data-driven strategy to identify underserved or historically marginalized populations, one that can help evaluate the benefits and burdens of ARPA funding allocations, is crucial for cities that seek to advance equity.
The report highlights work in Fort Worth, where city leaders are utilizing geo-spatial solutions to map funding allocation against racially marginalized community groups in a visual layer. There are many examples of how cities have used geographic information systems (GIS) and mapping to understand inequities and show how disinvestment has damaged specific communities, including Oakland, California’s work on paving and Birmingham, Alabama’s work on blighted properties.
The COVID-19 pandemic also changed how cities understood demographics, inequities, and the benefits of data-driven decision making. In Milwaukee, Wisconsin, city leaders already considered racism a public health crisis, so when the pandemic started they were able to quickly prioritize residents that had been underserved or harmed by the health care system to direct testing and vaccines.
3. Meaningful community engagement that brings underrepresented residents to the table and creates channels for the priorities they express to shape ARPA decision making is critical to an inclusive, equitable recovery strategy.
The report outlines the development of a budget simulation tool being used in Greensboro, North Carolina. The tool allows residents to directly provide input on equity concerns and projects of interest and show how they believe the city should prioritize where ARPA funds are used. In the city of Detroit, local leaders held dozens of community meetings to discuss what funding the city had and ask how residents wanted to spend it, with a particular focus on communities that have traditionally been underrepresented. All of the projects are being tracked in an open and transparent ARPA spending dashboard that allows residents to monitor progress.
4. Utilizing partnerships with nonprofits that serve communities most impacted by the pandemic, particularly communities of color, is important to address systemic inequities holistically.
In Knoxville, Tennessee, leaders are expanding funding opportunities for nonprofit and grassroots organizations to build capacity with ARPA dollars, and in Fort Worth, Texas local officials are engaging small nonprofits in focus groups to provide input on their areas of experience and recommendations.
A pre-ARPA model of utilizing partnerships to address systemic inequities is the BenePhilly program, which connects residents with benefits they’re eligible for in an easy and seamless way. One of the finalists for the 2021 Innovations in American Government Awards, BenePhilly partners with community-based organizations (CBOs) and local nonprofits to reach out to residents, check their eligibility for, and assist with applications for more than 15 benefits. As cities look to strengthen relationships with communities and CBOs, this program is an excellent model for bringing all stakeholders to the table and directly assisting vulnerable residents.
5. Integrating additional funding sources, alongside ARPA investments, is critical to ensure that transformative projects seeded with ARPA funds to enhance equity are sustained. This includes other intergovernmental funds, as well as philanthropic, social, and community investments.
The sustainability of equity work is an important consideration right now, as city investments must provide both immediate and long-term improvements. Data-Smart has highlighted several examples of local and state governments blending and integrating philanthropic and community funding to the sustained benefit of residents.
The Racial Equity Anchor Collaborative is a collection of national, multi-racial organizations dedicated to combatting disenfranchisement and improving representation for Black, Indigenous, and other people of color. The Collaborative, which includes groups like the NAACP, UnidosUS, and the National Congress of American Indians, furthers equity work by connecting philanthropic funding with on-the-ground organizations that are working to ensure accurate voting and census representation among racial minorities.
Another example of integrating funding is The Ray, a nonprofit building the “highway of the future” with state and local departments of transportation. Using a testbed for innovation in Georgia, The Ray is utilizing new technology to create safer and more sustainable roads, including rubberized asphalt made from old tires and planning for solar electric vehicle charging. By combining funding from state and local governments, federal funding (including ARPA), and philanthropic and nonprofit partners, The Ray is building sustainable roads in a fiscally sustainable way.
6. Addressing equity challenges with ARPA funds requires collaborative action with a cross-jurisdictional lens, particularly to address systemic inequities that require regional solutions at scale.
Many systemic issues were worsened during the COVID-19 pandemic, including learning loss. The report showcases how city leaders in Canton, Ohio are working with community partners across local school districts to fund new summer programs that aim to address this deepened learning loss among youth.
Systemic inequities typically span jurisdictions, requiring collaboration to address effectively. This Data-Smart article explains how issues like air pollution and sustainable infrastructure must be looked at through a regional lens, and this article outlines how combatting climate change for vulnerable populations must be done regionally.
7. Cities that used ARPA funds for revenue replacement to enhance fiscal health can also take steps to strengthen the city’s ability to make future sustained investments in core programs and services that are integral to advancing equity.
In San Diego, the majority of ARPA SLFRF funds are being used for revenue replacement; according to the Promoting Equity report, this allows the city to maintain civic employment and keep services at existing levels. In Baton Rouge, local leaders are incorporating ARPA funding to help sustain their Stormwater Master Plan work, which will mitigate extreme flooding disasters.
There are also other examples of using federal funding to support and maintain core services from previous recovery and reinvestment funding. In this article Professor Steve Goldsmith, director of Data-Smart, outlines how cities can invest funds in programs like broadband access, infrastructure equipment, and other fundamentals to advance equity both now and in the future.
6 Ideas to Align People With Company Culture
Company culture is as much a factor in business performance as it is in employee satisfaction at work – the two are in fact correlated. Despite the opportunity, business leaders still dedicate few resources to the development of a strong workplace culture. Some even believe that it cannot be managed. Many organisations today don’t go much further than defining their core values: an exercise that loses all credibility when it stops at this stage.
In all organisations, employees share fundamental beliefs about what they need to do to survive and succeed. It’s these values that define your company culture. When confronted with change, such as a phase of growth, a merger or an acquisition, employees who share common values are more productive and can contribute more efficiently to the attainment of business objectives.
Consequently, cultural alignment represents a major strategic asset for business. But how do you instil a common belief across your entire workforce? Is your company culture a lever you can use to achieve business goals, or does it act as a roadblock? Read on to find out how to create people and culture alignment to enhance business strategy and build a happier workplace.
Why Is Cultural Alignment So Important?
In organisations where employees share a common value set and where culture is well defined, it’s possible to see higher productivity, lower turnover, and high employee satisfaction levels. This is because business leaders and managers are able to take decisions much faster and more efficiently. Having cultural alignment in your organisation means that the whole company is focused on performance and on delivering a superior customer experience to clients.
How to Achieve People & Culture Alignment?
1. Recruit the right people for your organisation
Company culture is an element that is particularly effective to help attract talent to your organisation. Studies have shown that people who display traits, values and preferences in line with their organisation feel higher work satisfaction, have a stronger bond to their company, and are also less likely to leave.
That’s why it’s important to look further than experience and qualifications when it comes to candidate selection. The ideal candidate will be the one who can not only meet the requirements of the role, but who also has the motivation to stay with the business and contribute to its development over the long term.
2. Monitor the Relevance of Values
Although values are supposed to help a business grow in the right direction, it might not always be the case in practice. Some values can benefit your organisation while others can have a negative impact. As your business evolves, and for a variety of reasons, certain values can become irrelevant. They may no longer be aligned with the performance objectives of your organisation, and can constitute a source of constraints rather than benefits.
Managing your values involves constant monitoring. It’s about using positive values while, at the same time, neutralising the impact of negative values to avoid a counter-productive effect. Corporate value management doesn’t stop at producing a well-written value statement. Values need to be proactively activated and reflected through the managerial practices and business processes your organisation adopts.
3. Ensure Leaders Share the Same Values
Do all the business leaders of your organisation share the same values? An organisation’s leaders play a pivotal role in creating people and culture alignment. Everything stems from the decision makers of your organisation, and toxic leadership could destroy your organisation’s social and cultural fabric. A toxic leader will, through their actions, damage the credibility of your cultural values. This creates a negative and counterproductive environment that can reduce the visibility of your work culture and weaken the sense of belonging employees feel toward your organisation.
From a broader perspective, a good leader is someone who possesses not only the right personal characteristics and technical and professional competencies, but also positive values, attitudes and behaviours to guide employees down the right path. Securing the active participation of management in the deployment of company values is therefore essential to create a positive work environment and avoid conflicts between different management styles that would otherwise hurt the viability of your workplace culture.
4. Measure Culture Adoption
As an intangible business asset, the idea of measuring culture can seem counterintuitive at first. And even though it’s a challenge, a range of technological tools are now available on the market to help you monitor culture and its adoption. Employee engagement software and regular employee surveys help measure the alignment of employees’ personal values with the ones driven by an organisation.
By putting into perspective actual and perceived values, HR can obtain a clear reflection of the situation. Cultural change cannot happen without a shift in beliefs and behaviours driven by management. Yet, to implement cultural change, it’s important to understand your employees and what they stand for on an individual level.
5. Pilot Performance With Values
Cultural alignment can be created in different ways: on a global and organisational level, but also, on a more local and managerial level. Together, manager and employees can determine the underlying values and type of performance required to achieve the business objectives set for a particular project: e.g. profitability, customer service, entrepreneurship, openness, collaboration, good failure management, or respect of diversity.
Each of these values, which have been chosen by the employees themselves, can be more easily rolled out into behavioural standards that will need to be adopted to achieve goals. Those behaviours, which have been pre-defined, understood and accepted, constitute the framework that will be used to conduct self-assessments and performance reviews.
6. Use Recognition to Drive Behavioural Change
We can distinguish two categories of values. The values that fall into the first are the result of the history of your organisation and are transmitted to employees over time. Those values, often unspoken, structure and influence the behaviours of employees, sometimes without them even realising it. The second category is composed of the values that indicate the new direction your culture should take, and is driven by an organisation’s leadership team. In order for company values to be credible and shared, they need to draw on a mix of those two categories.
By encouraging peer-to-peer recognition and manager-to-employee recognition, you empower employees to bring company values to life. For example, individual recognition badges can be shared between employees to reward positive behaviours in the context of the values of your organisation. Employees flourish when their work is valued and appreciated. Rewards become even more meaningful when they are associated with the beliefs and culture of an organisation, enhancing the bond that exists between an employee and their employer.
The WikiLeaks Party believes that truthful, accurate, factual information is the foundation of democracy and is essential to the protection of human rights and freedoms. Where the truth is suppressed or distorted, corruption and injustice flourish.
The WikiLeaks Party insists on transparency of government information and action, so that these may be evaluated using all the available facts. With transparency comes accountability, and it is only when those in positions of power are held accountable for their actions, that all Australians have the possibility of justice.
The WikiLeaks Party is fearless in its pursuit of truth and good governance, regardless of which party is in power. In each and every aspect of government we will strive to achieve transparency, accountability and justice. This is our core platform.
3) To scrutinise and oversee government practice, honesty and efficiency (the oversight function).
However what we have witnessed in Australia, despite exceptional work by individual politicians in all the major parties, is a failure of the oversight function. There has been a gradual acceptance that once a single party or a coalition has gained the majority required to form a government, Parliament then becomes little more than an extension of that government’s executive machinery: the houses of Parliament effectively become rubber stamps for its policy agendas. This problem becomes particularly pressing when a single party gains a majority in both Houses, a spectre that remains a distinct possibility after each federal election.
The WikiLeaks Party aims to restore genuine independent scrutiny into our political process.
This is why we are campaigning only for the Upper House. We want to return the Senate to its core function as a genuine Upper House – offering independent scrutiny of government, protecting the interests of the people, and ensuring that light is shone upon bad practices.
Parliament is also failing Australians in its democratic function.
In a true democracy, Parliament must facilitate – not obstruct – our democratic obligation to dissent.
Yet we are witness to a degeneration of democracy into political party oligarchy, in which dissent is stifled and the public bureaucracy is contained and docile.
Members of Parliament fail to represent the people who elect them partly because of the party system: they are constrained by an obligation to toe their party line. Instead of voting according to conscience or according to the values they have publicly espoused to their constituents, they vote as they are instructed by their Party leadership.
Far too often politicians conceive of their public role not as scrutineers of government, but as partisan supporters of their own party. Their sense of duty to the party and to the networks of political patronage to which they owe their nomination and their career prospects outweighs their sense of duty to the electorate.
How rare is it to see a Member of Parliament, whether in government or in opposition, stepping out of line, or raising difficult or controversial issues? To engage in backbench rebellion spells death for the career politician, putting an end to their prospects of career advancement or ministerial appointment.
The result is a system of party oligarchy in which conspiracy and corruption of purpose flourish.
It’s time for a culture shift.
It’s time to give dissidents a voice in our political system.
It’s time to inject some genuine, independent scrutiny into our political process.
The WikiLeaks Party will run candidates in future Australian Senate elections.
Our Senators will be genuinely independent in their scrutiny of the government and will demand thorough transparency in its contractual arrangements with private companies.
We will bring our core principles of transparency, accountability and justice to bear on all the major issues currently facing Australia.
WikiLeaks Party candidates are ideally suited to the work of the Senate: they are skilled in understanding complexity and they are experienced in dealing with large amounts of documents produced by bureaucracies and spotting their hidden significance and tricks.
The WikiLeaks Party will be vigilant against corruption in all its forms.
The WikiLeaks organisation was pioneering in its use of ‘scientific journalism’, reporting information with reference to publicly available primary sources. The WikiLeaks Party will promote ‘scientific policy’; decision-making based on research, evidence and clear, transparent principles.
- The free flow of information: we live in a media-ocracy. What is politically possible is defined by the media environment. And in Australia 98% of the print media is in the hands of just three corporations. Seven out of ten of our national newspapers are owned by the Murdoch News International group. The WikiLeaks Party will push for radical change in media policy to increase Australian media innovation.
- Internet freedom: the WikiLeaks Party will be fearless in its opposition to the creeping surveillance state, driven by globalised data collection and spying agencies, both state and corporate controlled. We will demand that all information on data seizure and storage of citizens’ data by government agencies and allied corporations be made public.
- Protection for whistleblowers: with an increasingly unaccountable corporate state and an increasingly secretive security state, whistleblowers are an essential brake on bad practice. Only the threat of leaks can keep unaccountable institutions honest. So whistleblowers must be protected by law.
- Standing up for national sovereignty: for too long Australian politics has been under the influence of foreign powers and transnational non-state actors, affecting both our foreign and domestic policy, against the interests of Australians. The WikiLeaks Party will fight to expose the collusions between the Australian state and the military-industrial complex that dominates world affairs.
- Integrity in the global community: our national character is proud and generous, but recent policies have not met our obligations to the international community. The WikiLeaks Party will ensure Australia stands tall as a responsible global citizen.
TECHNICAL FIELD
BACKGROUND
SUMMARY
DETAILED DESCRIPTION
Device Fabrication and Assay Preparation and Use
Applications
EXAMPLES
Example 1
Device Modeling
Example 2
Comparison of Baffle to Straight Channel Design
Example 3
Assay Validation with Finger Prick and Venous Healthy Donor Blood Sources
Example 4
Neutrophil Chemotaxis Towards Standard Chemoattractants
Example 5
Disfunction in Neutrophil Recruitment after Burn in Human Patient
Example 6
Comparison of Human, Rat and Murine Neutrophils
OTHER EMBODIMENTS
The present disclosure relates to cell chemotaxis assays.
Neutrophil directional migration in response to chemical gradients, also known as chemotaxis, is one of the key phenomena in immune responses against bacterial infection and tissue injury. Alterations in neutrophil chemotaxis, e.g., as a result of burns or trauma, may lead to chronic inflammation and further tissue damage. Identifying alterations of neutrophil chemotaxis may help estimate the risk for infections more accurately. To better study neutrophil chemotaxis, in vitro assays have been developed that replicate chemotactic gradients around neutrophils. Since red blood cells outnumber neutrophils and have a propensity to clot, measuring neutrophil migration patterns in whole blood is challenging. For this reason, traditional assays (e.g., transwells) use neutrophils isolated from whole blood.
However, existing assays can include one or more drawbacks. For example, such assays require time-consuming processing of blood to isolate neutrophils. Furthermore, such isolation may alter the responsiveness of neutrophils compared to in vivo conditions, leading to inaccurate characterization of the neutrophils. For instance, certain chemotaxis assays utilize cell separation methods, such as positive selection or negative selection, which are prone to activating neutrophils by engaging specific receptors on the neutrophils. Once activated, the neutrophils' migration profile can be altered; however, this change may not be directly related to the biological condition of interest, but rather a response from the applied stress introduced by cell isolation protocols. Additionally, existing assays require processing relatively large volumes of blood. These assays lack accuracy and/or require use by technicians having specialized training. These limitations, among others, restrict the assays' usefulness in clinical laboratories.
The microfluidic devices disclosed herein enable the investigation of cell motility, such as neutrophil chemotaxis in blood samples. In particular, the microfluidic devices can be used to measure the directional migration and speed of neutrophils in an attractant gradient, using low sample volumes and with minimal, if any, activation of the neutrophils. Using the new devices, an analysis of cell motility health or impairment, e.g., due to infection or tissue injury, can be determined with a high degree of precision. Moreover, the new devices can be used to identify candidate drug agents suitable for modifying cell motility. Such compounds then can be further screened for their potential use as mediators of inflammation resulting from tissue trauma and/or infections.
In particular, the new devices circumvent the need to use separation methods that interfere with the motile cell analysis (e.g., positive or negative selection) by relying instead on a baffle structure that inhibits the movement of undesired cells in a sample to a greater extent than the movement of the desired motile cells. For neutrophil analysis, the new devices allow whole blood samples to be used directly, and thus reduce the overall sample processing time, while also enabling analysis of the neutrophils directly without the need to isolate them from whole blood.
In general, the subject matter disclosed herein can be embodied in methods for monitoring neutrophil chemotaxis in a device, in which the methods include obtaining a device that includes an input chamber, an attractant chamber, a migration channel arranged in fluid communication between an outlet of the input chamber and an inlet of the attractant chamber, a baffle arranged in fluid communication between the outlet of the input chamber and the migration channel or within the migration channel, and an exit channel in fluid communication with the migration channel at a point beyond the baffle and before the migration channel enters the inlet of the attractant chamber. The methods further include adding an attractant solution to the device to establish an attractant gradient between the input chamber and the attractant chamber, adding to the input chamber a blood sample including a plurality of red blood cells and a plurality of neutrophils, incubating the device under conditions and for a time sufficient to enable movement of cells in the blood sample from the input chamber into the migration channel, in which the baffle is configured to inhibit movement of the red blood cells through the baffle to a greater extent than the baffle inhibits movement of the neutrophils through the baffle, and monitoring whether any of the neutrophils follow the attractant gradient in the migration channel toward the attractant chamber.
The methods can include one or more of the following features in various combinations. For example, in some implementations, the blood sample is whole blood.
In some implementations, monitoring whether any of the neutrophils follow the attractant gradient includes determining a number of neutrophils that follow the attractant gradient.
In certain instances, the blood sample has a volume less than about 2 microliters.
In some cases, establishing the attractant gradient includes adding the attractant solution to all chambers and channels in the device, and replacing the attractant solution in the input chamber with a liquid medium that lacks the attractant such that the attractant gradient forms between the input chamber and the attractant chamber.
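Once the attractant solution in the input chamber is replaced with plain medium, diffusion along the migration channel tends toward an approximately linear concentration profile. The sketch below is illustrative only; the source concentration and channel length are assumed example values, not dimensions taken from the disclosure:

```python
# Steady-state 1-D diffusion between a source chamber held at a fixed
# concentration and a sink chamber held near zero yields an approximately
# linear profile along the channel: C(x) = C_max * x / L.

def gradient_profile(c_max_nM, length_um, positions_um):
    """Attractant concentration at each position along the channel,
    with x = 0 at the input chamber and x = L at the attractant chamber."""
    return [c_max_nM * x / length_um for x in positions_um]

# Example: a 100 nM attractant source across a 500 um migration channel
# (hypothetical values, for illustration only).
profile = gradient_profile(100.0, 500.0, [0, 125, 250, 375, 500])
print(profile)  # [0.0, 25.0, 50.0, 75.0, 100.0]
```

In practice the gradient evolves over time before reaching steady state, so this linear form is only a first-order approximation.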
In some implementations, the attractant solution includes interleukin-8 (IL-8), C5a, formyl-methionyl-leucyl-phenylalanine (fMLP), leukotriene B4 (LTB4), adenosine tri-phosphate (ATP), tumor growth factor beta (TGFb), or endothelial derived neutrophil attractant factor (ENA).
In some instances, the baffle includes a first fluid passage in fluid communication with a second fluid passage, in which an angle between a sample transport path in the first fluid passage and a sample transport path in the second fluid passage is greater than or equal to about 45 degrees. For example, the angle can be about 90 degrees.
In some implementations, a cross-sectional area of the first fluid passage normal to the sample transport path in the first fluid passage is greater than a red blood cell cross-sectional area, a height of the cross-sectional area is greater than a red blood cell thickness and less than a red blood cell diameter, and a width of the cross-sectional area is greater than the red blood cell diameter. For example, the height of the cross-sectional area can be greater than about 2 microns and less than about 6 microns, and the width of the cross-sectional area can be greater than about 6 microns.
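As a rough sketch of this size-exclusion geometry, the check below tests whether a discoid cell can fit a passage cross-section edge-on. The cell and passage dimensions are assumed typical values chosen for illustration, not specifications from the disclosure:

```python
# A disc-shaped red blood cell (roughly 2 um thick and 8 um in diameter)
# can slide through a low, wide passage edge-on, but the turn between the
# first and second passages forces it to reorient, which the low ceiling
# hinders. Rounder, deformable neutrophils can still squeeze through.

def fits_passage(cell_thickness_um, cell_diameter_um,
                 passage_height_um, passage_width_um):
    """True if the cell's edge-on profile fits the passage cross-section."""
    return (passage_height_um > cell_thickness_um and
            passage_width_um > cell_diameter_um)

# Hypothetical example dimensions consistent with the ranges above:
print(fits_passage(2.0, 8.0, 4.0, 10.0))  # edge-on RBC fits: True
print(fits_passage(2.0, 8.0, 4.0, 6.0))   # passage too narrow: False
```

This captures only the static geometry; the disclosed baffle additionally relies on the angled turn between passages, which a simple pass/no-pass check does not model.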
In certain implementations, the baffle includes multiple first fluid passages, and multiple second fluid passages, each first fluid passage being in fluid communication with the output of the input chamber and in fluid communication with a corresponding second fluid passage, in which an angle between a fluid transport path of each first fluid passage and a sample transport path of the corresponding second fluid passage is greater than or equal to about 45 degrees.
In some implementations, the methods further include analyzing the health of the neutrophils that follow the attractant gradient in the migration channel toward the attractant chamber. Analyzing the health of the neutrophils can include determining a number of the neutrophils that follow the attractant gradient toward the attractant chamber compared to a total number of cells moving through the migration channel, determining a rate at which one or more neutrophils follow the attractant gradient toward the attractant chamber, and/or determining a number of neutrophils that do not follow the attractant gradient compared to the total number of moving cells.
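The recruitment metrics described above reduce to simple ratios over a set of tracked cells. A minimal sketch follows; the per-cell track records and all numeric values are hypothetical, not data from the disclosure:

```python
# Each record summarizes one moving cell from time-lapse imaging: whether
# its net displacement followed the attractant gradient, and its average
# migration speed. All values below are made-up examples.
tracked_cells = [
    {"toward_attractant": True,  "speed_um_per_min": 18.0},
    {"toward_attractant": True,  "speed_um_per_min": 22.0},
    {"toward_attractant": False, "speed_um_per_min": 10.0},
    {"toward_attractant": True,  "speed_um_per_min": 20.0},
]

def recruitment_metrics(cells):
    """Return (fraction of moving cells following the gradient,
    their mean speed, fraction moving against the gradient)."""
    toward = [c for c in cells if c["toward_attractant"]]
    frac_toward = len(toward) / len(cells)
    mean_speed = sum(c["speed_um_per_min"] for c in toward) / len(toward)
    return frac_toward, mean_speed, 1.0 - frac_toward

frac, speed, frac_away = recruitment_metrics(tracked_cells)
print(frac, speed, frac_away)  # 0.75 20.0 0.25
```

How these ratios map onto healthy versus impaired neutrophil function is an interpretive question outside the scope of this sketch.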
In some cases, monitoring whether any of the neutrophils follow the attractant gradient includes obtaining an image of neutrophils in the attractant chamber. Monitoring whether any of the neutrophils follow the attractant gradient can occur for at least 10 minutes.
In some implementations, adding the attractant solution to establish the attractant gradient includes applying a vacuum to the input chamber, the migration channel, the baffle and the attractant chamber such that air is absorbed through walls of the device.
In another aspect, the subject matter of the disclosure can be embodied in methods of screening neutrophil attractants, in which the methods include obtaining a device that includes an input chamber, multiple attractant chambers, and multiple filtration passageways, each filtration passageway being in fluid communication with the input chamber and a corresponding attractant chamber, and each filtration passageway including a migration channel in fluid communication between an outlet of the input chamber and an inlet of the corresponding attractant chamber, a baffle arranged in fluid communication between the outlet of the input chamber and the migration channel or within the migration channel, and an exit channel in fluid communication with the migration channel at a point beyond the baffle and before the migration channel enters the inlet of the corresponding attractant chamber. The methods further include adding a different attractant solution to at least two attractant chambers to establish a different attractant gradient between the input chamber and each attractant chamber to which an attractant solution has been added, adding to the input chamber a blood sample including multiple red blood cells and multiple neutrophils, incubating the device under conditions and for a time sufficient to enable movement of cells in the blood sample from the input chamber into one or more filtration passageways, in which the baffles of the one or more filtration passageways are configured to inhibit movement of the red blood cells through the baffles to a greater extent than the baffles inhibit movement of the neutrophils through the baffles, and monitoring whether any of the neutrophils follow any of the established attractant gradients to one of the attractant chambers to which an attractant solution has been added.
The methods can include one or more of the following features in various combinations. For example, in some implementations, monitoring whether any of the neutrophils follow any of the established attractant gradients includes, for each different attractant gradient, determining a number of neutrophils that follow the attractant gradient and/or determining a rate at which one or more neutrophils follow the attractant gradient.
In some implementations, the methods further include identifying the attractant that establishes the attractant gradient resulting in the largest number of neutrophils reaching an attractant chamber and/or resulting in the highest rate at which one or more neutrophils follow the attractant gradient. Each baffle can include multiple first fluid passages being in fluid communication with the output of the input chamber, and multiple second fluid passages, each second fluid passage being in fluid communication with a corresponding first fluid passage of the baffle, in which an angle between a fluid transport path of each second fluid passage and a sample transport path of the corresponding first fluid passage is greater than or equal to about 45 degrees.
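A minimal sketch of the screening comparison described above, assuming per-attractant results have already been tallied; the names and the count-then-rate tie-breaking rule are illustrative assumptions, not from the disclosure:

```python
def best_attractant(results):
    """results maps attractant name -> (cells reaching chamber, mean rate in
    um/min). Rank primarily by cell count, breaking ties by migration rate;
    this ranking rule is an illustrative choice, not from the disclosure."""
    # Python tuple comparison orders by count first, then by rate.
    return max(results, key=lambda name: results[name])

# Hypothetical screening results for three attractants.
screen = {"fMLP": (42, 18.0), "IL-8": (42, 21.5), "LTB4": (35, 25.0)}
winner = best_attractant(screen)  # "IL-8": tied on count, faster mean rate
```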
In some cases, a cross-sectional area of each first fluid passage is greater than a red blood cell cross-sectional area, a height of the cross-sectional area for each first fluid passage is greater than a red blood cell thickness and less than a red blood cell diameter, and a width of the cross-sectional area is greater than the red blood cell diameter.
In another aspect, the subject matter of the present disclosure can be embodied in a device or devices that include an input chamber, an attractant chamber, a migration channel arranged in fluid communication between an outlet of the input chamber and inlet of the attractant chamber, a baffle arranged in fluid communication between the outlet of the input chamber and the migration channel or within the migration channel, and an exit channel in fluid communication with the migration channel at a point beyond the baffle and before the migration channel enters the inlet of the attractant chamber. The baffle is configured to inhibit movement of a first type of cell through the baffle to a greater extent than the baffle inhibits movement of a second type of cell through the baffle.
The device or devices may include one or more of the following features in various combinations. For example, in some implementations, the baffle includes a first fluid passage in fluid communication with a second fluid passage, in which an angle between a sample transport path in the first fluid passage and a sample transport path in the second fluid passage is greater than or equal to about 45 degrees. For example, the angle can be about 90 degrees.
In some implementations, a cross-sectional area of the first fluid passage normal to the sample transport path in the first fluid passage is greater than a cross-sectional area of the first type of cell, a height of the cross-sectional area is greater than a thickness of the first type of cell and less than a diameter of the first type of cell, and a width of the cross-sectional area is greater than the diameter of the first type of cell. The height of the cross-sectional area can be greater than about 2 microns and less than about 6 microns, and the width of the cross-sectional area can be greater than about 6 microns.
In some cases, a volume of the input chamber is greater than about 1 microliter and less than about 5 microliters.
In some implementations, a length of a fluid transport path through the migration channel is between about 10 microns and about 2000 microns.
In certain cases, the baffle includes multiple first fluid passages and multiple second fluid passages, each first fluid passage being in fluid communication with the output of the input chamber and in fluid communication with a corresponding second fluid passage, in which an angle between a fluid transport path of each first fluid passage and a sample transport path of the corresponding second fluid passage is greater than or equal to about 45 degrees.
In some implementations, the inlet chamber and/or the migration channel is coated with a protein. In some cases, the protein is configured to prevent neutrophil adhesion. The protein can be albumin. In some cases, the protein is configured to promote neutrophil activation. The protein can be P selectin.
In another aspect, the subject matter of the present disclosure can be embodied in a system that includes any of the foregoing devices and a control apparatus configured to image a number of cells migrating from the input chamber through the baffle and the migration channel to the attractant chamber.
As used herein, “motility” means the ability of a motile cell to move itself, e.g., at a specific migration rate, at least under certain conditions. Motile cells include neutrophils and other immune cells such as granulocytes, monocytes, and lymphocytes, as well as certain cells that can move only under certain specific conditions, such as mast cell precursors, fibroblasts, and endothelial cells (e.g., circulating endothelial cells (“CEC”)) or cells under pathological conditions, such as metastatic cancer cells (e.g., circulating tumor cells (“CTC”)). Other examples of motile cells include, but are not limited to, sperm cells, bacteria, parasites (e.g., sporozoite phase parasites), eosinophil cells, dendritic cells, and platelets.
As used herein, “chemotaxis” means a movement of a motile cell in response to a chemical stimulus.
As used herein, “attractant” means an agent or force that induces a motile cell to migrate towards the agent. Any agent that activates migration may be used as an attractant, including, for example, agents that introduce a force that is chemically, mechanically, or electrically based.
As used herein, “repellant” means an agent that induces a motile cell to migrate away from the agent. Any agent that activates migration may be used as a repellant, including, for example, agents that introduce a force that is chemically, mechanically, or electrically based.
As used herein, “gas-permeable” means having openings that allow gas to pass through.
As used herein, “blood sample” means any treated or non-treated blood.
Implementations of the subject matter described herein can include several advantages. For example, in some instances, the presently described techniques bypass the need to purify/isolate motile cells from a fluid sample, such as neutrophils from whole blood, prior to performing an assay. In certain implementations, the microfluidic devices disclosed herein enable identification of motile cells, such as neutrophils, without requiring the use of cumbersome cell separation methods such as density gradients, positive selection, or negative selection, which can introduce artifacts by activating neutrophils. The presently disclosed methods do not require a washing step to wash blood from the device after neutrophils are identified. Since such washing steps typically require additional hardware external to the microfluidic device, cost and complexity can be reduced. Neutrophil migration in the presently disclosed devices takes place inside three-dimensional channels, as opposed to solely on a two-dimensional surface, which may induce changes in speed and direction. Thus, the present device enables accurate quantification of neutrophil migration. In some cases, useful quantitative data can be obtained just by counting motile cells that reach an attractant chamber at the end of the assay, as opposed to requiring cell tracking.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present invention, suitable methods and materials are described below. All publications, patent applications, patents, and other references mentioned herein are incorporated by reference in their entirety. In case of conflict, the present specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and not intended to be limiting.
Other features and advantages of the invention will be apparent from the following detailed description, and from the claims.
FIG. 1A is a schematic example of a microfluidic device 100 for analyzing cell motility, e.g., neutrophil motility. The microfluidic device 100 can be used for inducing migration of motile cells in an attractant gradient. The device 100 includes an input chamber 102 for receiving a fluid sample, where the fluid sample may contain multiple motile cells and non-motile cells. For example, the fluid sample could be a droplet of whole blood that contains both neutrophils and red blood cells (RBCs). As shown in the example of FIG. 1A, the input chamber 102 is surrounded by one or more attractant chambers 104, in which a motile cell attractant may be provided. Each of the attractant chambers 104 is fluidly coupled to the input chamber 102 through a baffle 106 and a fluid migration channel 108. In the present example, the input chambers 102 have a circular profile, though other shapes may also be used. The diameter of the input chamber 102 can be between about 100 microns to about 2000 microns. For example, the diameters can be about 200, 250, 300, 350, 400, 450, 500, 750, or 1000 microns. If the diameter is too large, it will increase the time required for the motile cells, e.g., neutrophils, far from the baffle 106 to exit the loading chamber and enter the next part of the device.
The input chamber 102 can have a volume in the range of about 1 microliter to about 20 microliters. For example, the input chamber 102 can have a volume of about 2, 3, 4, 5, 7, 8, 10, 12, 15, 18, or 20 microliters. The one or more attractant chambers 104 have rectangular shaped profiles in the present embodiment, but other shapes can also be used. The volume of the attractant chambers 104 can be between about 10-1000 nanoliters. For example, the volume of the attractant chamber can be about 50 to 750 nanoliters, or 100 to 500 nanoliters.
FIG. 1B is a close-up view of a portion of the device of FIG. 1A, showing one of the attractant chambers 104 and the corresponding baffle 106 and migration channel 108 to which the attractant chamber 104 is coupled. As shown in FIG. 1B, the baffle 106 is arranged in fluid communication between an outlet of the input chamber 102 and the migration channel 108, such that a fluid sample deposited in the input chamber can move through the baffle 106 into the migration channel 108. The migration channel 108 is arranged in fluid communication between an outlet of the baffle 106 and an inlet of the attractant chamber.
During operation of the device 100, an attractant solution is added to the attractant chamber 104 to establish an attractant concentration gradient between the input chamber 102 and the attractant chamber 104. In general, the concentration of the attractant is highest in the attractant chamber 104 and decreases from the chamber 104 through the migration channel 108 and the baffle 106 to the input chamber 102. When a fluid sample containing a motile cell responsive to the attractant is added to the input chamber 102, the attractant concentration gradient induces chemotaxis of the motile cell toward the region where the concentration of the attractant is highest, i.e., the attractant chamber 104. Both the baffle 106 and the migration channel 108 are sized to allow the desired motile cell to pass from the input chamber 102 to the attractant chamber 104.
After establishing the attractant gradient, the input chamber 102 is filled with the fluid sample. Cells within the fluid sample begin moving toward the openings of the baffle(s) 106. Cell movement does not occur based on an external pressure source (e.g., introducing pressure differences using syringe pumps or liquid pumps) or any flow of the liquid within the device. Instead, cell movement through the fluid in the device is the result of a combination of passive factors including a static pressure difference created by filling the input chamber with the fluid sample, natural diffusion, and/or random Brownian motion. The primary mechanism for motion of motile cells (e.g., by “crawling” in the case of neutrophils or swimming for other motile cells such as certain bacteria and sperm) in the fluid sample will be in response to the existence of an attractant gradient, depending on the choice of the attractant, e.g., chemoattractant.
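As a rough, order-of-magnitude illustration of the passive transport at work here, the time for an attractant to diffuse across a path of length L scales as t ≈ L²/(2D). The sketch below assumes a diffusivity typical of small chemokines in water (on the order of 100 μm²/s); neither the helper function nor the numeric value comes from the disclosure:

```python
def diffusion_time_s(length_um, diffusivity_um2_per_s):
    """Order-of-magnitude 1-D diffusion time, t ~ L^2 / (2 D), for an
    attractant to span a channel of the given length."""
    return length_um ** 2 / (2.0 * diffusivity_um2_per_s)

# Assumed diffusivity of ~100 um^2/s (illustrative, not from the text):
# a 500-um migration path equilibrates on the scale of tens of minutes.
t_500um = diffusion_time_s(500.0, 100.0)  # -> 1250 s, roughly 20 minutes
```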
As explained above, to help prevent the undesired cells from inhibiting or interfering with the active migration of the motile cells, the baffle 106 is configured to inhibit the movement of undesired cells to a greater extent than the desired motile cells. For example, in the device 100 shown in FIG. 1B, the baffle 106 includes one or more passageways 110 sized to selectively allow migration of the desired motile cells, while being small enough to substantially block the movement of other undesired motile and non-motile cells into the migration channel 108. For instance, for neutrophil analysis in a whole blood sample, the dimensions of the passageways 110 are sized to allow migration of neutrophils in the blood sample, while being small enough to substantially impede the passage of other cells in the sample, such as RBCs and other leukocytes (e.g., monocytes and lymphocytes). While neutrophils are generally the same size or larger than RBCs and other leukocytes (e.g., a typical human neutrophil is between about 8-15 microns in diameter; typical human lymphocytes and monocytes have diameters of about 7 microns and between about 10-30 microns, respectively; a typical disc-shaped human RBC has a diameter between about 6-8 microns and a height between about 2-2.5 microns), neutrophils are more deformable in shape than RBCs and other blood cells, and thus can change dimensions to migrate through tight passages that would otherwise impede the movement of other leukocytes and RBCs.
One way of appropriately sizing the passageways 110 to allow neutrophils, but not other cells, to pass through is to restrict the cross-sectional area of the passageway along a plane normal to the direction of cell movement. For example, one could use microfluidic channels having cross sections smaller than that of the undesired cells. However, such channels could be completely obstructed at their entrance by a collection of cells, precluding the formation of gradients. Furthermore, cross-sectional areas that are smaller than that of the undesired cells could also impede the desired motile cell migration, since such cells would have no gaps to pass through. Instead, for the microfluidic devices disclosed herein, the cross-sectional areas of the passageways 110 and/or of the migration channels 108 are configured to be larger than the largest diameter or cross-sectional dimension of one or more of the different undesired cells. At the same time, a first dimension of the passageway and/or migration channel cross-section is configured to be about equal to or less than a size of the undesired cell. With this configuration, substantial movement of the undesired cell(s) through the passageway 110 is still restricted, but the desired motile cell (e.g., neutrophil) modifies its shape as it follows the attractant gradient so as to migrate around and between the undesired cells through open gaps in the passageway 110. The desired motile cells thus “squeeze” their way around and between the undesired cells. As an example, a first dimension of the cross-section (e.g., width) can be set larger than the cell(s) to be blocked, whereas a second dimension of the cross-section (e.g., height) is set to be shorter than the cell(s) to be blocked.
FIG. 2 is a schematic depicting an example of a passageway 110 cross-section 200 along a plane that is normal to a direction of transport through the passageway 110 (the direction of transport in this example is along the y-direction into the page). The cross-section is bound by walls 202 of the passageway 110. For separating RBCs and undesired leukocytes from neutrophils in a human blood sample, the height 204 of the passageway 110 is configured to be less than the diameter of a RBC, e.g., between about 1-6 microns. Additionally, the width of the passageway 110 is configured to be larger than the diameter of the RBC, e.g., between about 8-12 microns. With such dimensions, RBCs tend to lie on their side before entering the passageway 110, thus limiting RBC passage. Furthermore, monocytes and lymphocytes tend to require much larger openings (e.g., at least 10 microns by 10 microns) to allow migration. However, given the relatively large passageway width, there is still room for the neutrophils to migrate past the RBCs and other leukocytes. Furthermore, the larger width prevents complete clogging of the passageway 110 by the RBCs and other leukocytes. If the channel cross-section is too small (e.g., less than 1 μm), gaps do not form such that the attractant gradient cannot be maintained and the neutrophils do not have enough room, even after deforming their shape, to pass through into the migration channel 108. The cross-sectional area of the passageways 110 can be smaller or larger than those described here, and can be used to analyze the migration of cells other than neutrophils including, for example, lymphocytes, monocytes, natural killer lymphocytes, platelets and megakaryocytes, epithelial cells, endothelial cells, cancer cells, bacteria, sperm, and the like. Table 1 provides a list of different examples of motile blood cells, their typical concentration in human blood, and an appropriate cross-sectional area of a rectangular shaped channel for allowing migration of the motile cells.
Table 1 also lists channel cross-sectional areas for other motile cells such as bacteria, parasites, and sperm cells. The device can also be used with cells from blood of other animals, e.g., murine, rabbit, monkey, or canine blood. However, the dimensions of the passageways should be modified to accommodate the different sizes of cells obtained from these other types of blood. For example, neutrophils and RBCs from murine blood are smaller than those of humans (murine RBCs are about 5-6 μm in diameter while human RBCs are about 7 μm, and murine neutrophil diameters range from 5-6 μm while human neutrophil diameters range from 7-8 μm).
TABLE 1

Blood Cell Type                 Cells/μL blood        Typical channel size
                                (healthy individual)  for migration
Neutrophil (granulocyte)        5,000                 6 × 6 μm²
Eosinophil                      10                    6 × 6 μm²
Monocyte                        50                    10 × 10 μm²
Lymphocyte                      3,000                 8 × 8 μm²
Dendritic Cell                  1                     10 × 10 μm²
Circulating endothelial cell    0.1                   10 × 10 μm²
Fibrocyte                       0.1                   6 × 6 μm²
Mast cell                       0.1                   6 × 6 μm²
Circulating tumor cell          0.01-1                10 × 10 μm²
Platelets                       50,000                2 × 2 μm²

Other motile cells in complex mixtures
Bacteria                        —                     1 × 1 μm²
Parasites (sporozoite phase)    —                     5 × 5 μm²
Sperm cells                     —                     2 × 2 μm²
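The channel sizes in Table 1 can be captured in a simple lookup, shown here only as a convenience sketch (the names and the (height, width) encoding are assumptions):

```python
# Channel cross-sections from Table 1, as (height_um, width_um) pairs,
# for looking up an appropriate migration-channel size per cell type.
MIGRATION_CHANNEL_UM = {
    "neutrophil": (6, 6),
    "eosinophil": (6, 6),
    "monocyte": (10, 10),
    "lymphocyte": (8, 8),
    "dendritic cell": (10, 10),
    "circulating endothelial cell": (10, 10),
    "fibrocyte": (6, 6),
    "mast cell": (6, 6),
    "circulating tumor cell": (10, 10),
    "platelet": (2, 2),
    "bacteria": (1, 1),
    "parasite (sporozoite phase)": (5, 5),
    "sperm cell": (2, 2),
}

def channel_area_um2(cell_type):
    """Cross-sectional area in square microns for the given cell type."""
    height, width = MIGRATION_CHANNEL_UM[cell_type]
    return height * width
```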
The movement of undesired cells relative to the movement of desired motile cells can also be restricted by adding a relatively sharp turn in the passageway 110, e.g., a turn of at least about 90 degrees. Such a turn creates congestion/gridlock in the movement of undesired cells. In particular, as a cell moves, tumbles, floats, or is pushed into the corner, it tends to block the advance of other trailing cells behind it by restricting the cross-section of the channel to less than the diameter of a single cell. This configuration works well for cells that move based on granular flow (e.g., RBCs), because the granular flow force pushing the cells in the channel is not enough to deform the cells through the restricted section. However, since the cross-section of the channel is larger than that of the undesired cell, gaps still exist for the desired motile cells to pass through.
As an example, referring again to FIG. 1B, the passageway 110 is composed of two portions in fluid communication with each other, in which a direction of cell movement in a first portion 112 is angled with respect to a direction of cell movement in the second portion 114. The first portion 112 is in fluid communication with an output of the input chamber 102, whereas the second portion 114 is in fluid communication with an input to the migration channel 108. Preferably, the angle between the directions of cell movement in the two portions is sharp enough to trap the undesired cells and prevent them from dispersing into the migration channel 108. The angle can be measured from a position where the second portion would otherwise be co-linear with the first portion. For example, in the baffle 106, the angle between the direction of cell movement in the first portion 112 and the direction of cell movement in the second portion 114 is about 90 degrees. Other angles also can be used. For example, the angle can include, but is not limited to, about 45 degrees, about 55 degrees, about 65 degrees, about 75 degrees, about 85 degrees, 105 degrees, 115 degrees, 125 degrees, or 135 degrees. The angle should be at least about 45° to effectively restrict movement of the undesired cells. While the angle between the two portions of the passageway 110 further inhibits the movement of the undesired cells, the desired motile cells can continue to progress toward the attractant gradient by moving through gaps in the passageway 110 left by the undesired cells.
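The at-least-45-degree turn criterion can be checked directly from the direction vectors of the two passageway portions; a hedged sketch, with function names that are assumptions:

```python
import math

def turn_angle_deg(first_dir, second_dir):
    """Angle between cell-movement directions in the two passageway portions,
    measured from where the second portion would be co-linear with the first
    (0 degrees = no turn)."""
    dot = sum(a * b for a, b in zip(first_dir, second_dir))
    norm = math.hypot(*first_dir) * math.hypot(*second_dir)
    return math.degrees(math.acos(dot / norm))

def restricts_undesired_cells(first_dir, second_dir):
    # Per the text, the turn should be at least about 45 degrees.
    return turn_angle_deg(first_dir, second_dir) >= 45.0

# A vertical first portion feeding a horizontal second portion is a
# 90-degree turn, which satisfies the criterion.
sharp_enough = restricts_undesired_cells((0.0, -1.0), (1.0, 0.0))
```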
The baffle 106 can include a single passageway 110 or multiple passageways 110, each of which is in fluid communication with the input of the migration channel 108 and each of which is configured to inhibit movement of undesired cells as described above. For example, as shown in FIG. 1B, the baffle 106 includes several passageways 110, with the first portion 112 of each passageway 110 separated by a wall 116, creating a comb-like structure. The baffle 106 may include, but is not limited to, between 5 and 30 passageways, e.g., 20 passageways. As shown in FIG. 1B, the first portion 112 of each passageway 110 may be arranged in parallel with a first portion 112 of an adjacent passageway 110. The second portion 114 of each passageway 110 can be coupled together to form a single horizontal channel. Although each passageway 110 is shown to have a 90 degree turn, different angles can be selected for each passageway as discussed above, and the angles can all be the same or different. The lengths (i.e., the distance along the direction of cell transport) for the first portions 112 can be in the range of between about 10 to about 100 microns. For example, the length can be between about 20 to about 80 microns, between about 30 to about 70 microns, or between about 40 to about 60 microns. Other lengths also are possible. The length of the horizontal channel (formed by the second portions 114 of each passageway 110) can be in the range of about 15 to about 500 microns. For example, the length of the horizontal channel can be between about 50 microns to about 400 microns, between about 100 microns to about 300 microns, or between about 150 microns to about 250 microns. Other lengths also are possible. The width of the walls 116 separating the first portion 112 of each passageway can range from about 5 microns to about 100 microns, e.g., 10 microns.
The device 100 also may include an exit channel 120, e.g., an open-ended channel that exits the device and is open to the fluid media outside the device, in fluid communication with the migration channel 108 at a point beyond the baffle 106 and before the migration channel 108 enters the inlet of the attractant chamber 104. As in the migration channel, the fluid in the exit channel 120 does not move or flow during the monitoring of chemotaxis. The exit channel creates a bifurcation 130 that allows one to monitor the ability of motile cells to follow the attractant gradient toward the attractant chamber. For example, if one or more motile cells migrate towards the exit channel instead of towards the attractant chamber, this may be an indication that the motile cell is damaged or functioning improperly. Thus, the presence of the exit channel allows one to quantify the number of desired cells that correctly follow the attractant gradient into the attractant chamber, as opposed to moving into the exit channel. Alternatively, the migration of motile cells through the exit channel may be an indication that the attractant is inducing chemokinesis and not chemotaxis of the desired cell.
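The bifurcation quantification described above reduces to a simple fraction; a minimal sketch, in which the function name and the zero-cell handling are assumptions:

```python
def gradient_following_fraction(n_to_attractant, n_to_exit):
    """Fraction of motile cells choosing the attractant chamber over the
    exit channel at the bifurcation. Low values may indicate damaged cells
    or chemokinesis rather than chemotaxis. Returning 0.0 when no cells
    moved is an arbitrary choice for this sketch."""
    total = n_to_attractant + n_to_exit
    return n_to_attractant / total if total else 0.0
```

For example, 45 cells reaching the attractant chamber against 5 entering the exit channel gives a fraction of 0.9.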
Both the exit channel 120 and the migration channel 108 should be sized to allow at least the desired cells to pass through. For example, the height of the exit channel 120 and/or the migration channel 108 can be between about 1-3 microns, though larger heights also can be used. The lengths (distance along the direction of propagation) of the exit channel 120 and the migration channel 108 can be between about 10-2000 microns, e.g., 75 microns long. The widths of the exit channel 120 and the migration channel 108 can be between about 8-12 microns, though other widths may also be used.
As shown in FIG. 1A, the device 100 of this embodiment includes multiple attractant chambers 104 in fluid communication with the input chamber 102, although a device with a single attractant chamber can be used. If desired, two or more of the attractant chambers 104 may be loaded with different attractants. Accordingly, the effect of different attractants on the directionality and responsiveness of motile cells from a single fluid sample can be studied simultaneously.
In some implementations, the baffle's ability to restrict the movement of undesired cells can be enhanced by adding antibodies to the surfaces enclosing the baffle's passageways. The antibodies can be selected to specifically bind to RBCs (e.g., GlyA+) or other undesired cells (e.g., CD14+ can be used for monocytes), further reducing the number of the undesired cells that pass into the migration channel. In addition, or alternatively, other agents may be added to the baffle surfaces enclosing the passageways. For example, the surfaces can be coated with agents, such as proteins, glycoproteins, or combinations of them. Such coatings can prevent the absorption of soluble factors to the surfaces, and facilitate migration of the desired motile cells and/or impede the movement of undesired cells. With certain motile cells, such as neutrophils, it can be important in certain embodiments not to include any antibodies or other agents in the device that activate the motile cells in a way that can alter their motility.
Other areas of the device may also be coated with agents for facilitating or modifying the motile cell functionality. For example, as an alternative or in addition to coating the baffle, the inlet chamber and/or migration channel can be coated with agents. The agents can include proteins such as albumin for preventing surface adhesion of neutrophils. Alternatively, the protein can be configured to promote neutrophil adhesion, such as P selectin.
In some implementations, a repellant can be used in the device to influence motile cell directionality. For example, a repellant may be added to the exit channel or in a separate repellant chamber that is in fluid communication with the migration channel. Examples of repellants include Slit2, Slit3, high concentrations [mM] of IL-8, Dipeptidyl Peptidase IV, and quorum sensing bacteria. Other chemorepellents are known to affect different types of motile cells and can be selected by those skilled in this field.
As one example, the microfluidic device 100 can be manufactured using the following methods. First, a mold defining the features of the device 100 is obtained. For example, the mold can be formed by applying and sequentially patterning two layers of photoresist (e.g., SU8, Microchem, Newton, Mass.) on a silicon wafer using two photolithography masks according to known methods. The masks can contain features that define the different aspects of the device 100 such as the input chamber, the baffle, the migration channel, the attractant chamber, and the exit channel. The wafer with the patterned photoresist then may be used as a master mold to form the microfluidic parts. A polydimethylsiloxane (PDMS) solution then is applied to the master mold and cured. After curing, the PDMS layer solidifies and can be peeled off the master mold. The solidified PDMS layer includes grooves and/or recesses corresponding to the passageways, migration channels, exit channels, and attractant chamber of the device 100. In some implementations, the mold pattern is designed to include the features of multiple devices 100. Each device 100 can be cut out from the PDMS layer using, for example, a hole puncher (e.g., a 5 mm hole puncher). Similarly, the input chamber also can be formed by using a smaller hole puncher (e.g., a 1.5 mm diameter hole puncher) to punch out PDMS material from the PDMS layer. The PDMS devices then are bonded to a substrate such as a glass slide or multi-well plate (i.e., each device is positioned in a corresponding well of the well plate). For example, a bottom surface of the PDMS devices can be plasma treated to enhance the bonding properties of the PDMS. The plasma treated PDMS devices then are placed on the glass slide or into the bottom of a well on a plate and heated to induce bonding. The microfluidic channels of the device can also be exposed to plasma treatment prior to bonding to render the channels hydrophilic.

Hydrophilic channels can enhance priming of the device with the attractant due to capillary wicking effects. FIG. 1A is a schematic depicting a perspective view of a microfluidic device fabricated according to the foregoing procedures.
The example of a microfluidic device 100 described above includes a substrate layer of glass and a top layer of PDMS in which the input chamber 102, the baffle 106, the migration channel 108, and the attractant chamber 104 are formed. In other implementations, both the substrate layer and the top layer can be PDMS substrates or other similar materials.
In general, the top layer (or the bottom layer) in which the baffle, migration channel, exit channel, and attractant chamber are formed should be selected to have the following characteristics. The layer can be gas-permeable so that air in the baffle, migration channel, and attractant chamber can be displaced through the layer, either by pumping fluid into the device or by placing the device under vacuum. Furthermore, the layer can be transparent so as to facilitate image capture of cell motility within the device. As explained above, the surfaces of the baffle walls enclosing the passageways 110 may be coated with agents, for example, antibodies, to facilitate capture of undesired cells from the fluid sample.
FIGS. 3A-3D are schematics depicting examples of the different stages in preparing the device for an assay. On the left side of each of FIGS. 3A-3C, the microfluidic device 100 is shown situated in a petri dish. The right side of each of FIGS. 3A-3C depicts a cross-section of the input chamber (labeled “whole blood reservoir”), the migration channel (labeled “neutrophil migration channel”), and the attractant chamber (labeled “chemokine reservoir”) of the microfluidic device. Though the figures reference a neutrophil migration channel and a whole blood reservoir, the process shown is applicable to other motile cells and fluid samples as well. As shown in FIG. 3A, a first stage of assay preparation includes priming the device with an attractant solution (for example, a chemokine solution, as referred to in FIG. 3A). Various attractant solutions can be selected based on the motile cell to be analyzed.
Examples of attractant solutions for neutrophils include N-formyl-methionyl-leucyl-phenylalanine (fMLP), leukotriene B4 (LTB4), interleukin-8 (IL-8), the protein fragment C5a, adenosine tri-phosphate (ATP), transforming growth factor beta (TGFb), or endothelial derived neutrophil attractant factor (ENA), or the like. In some implementations, the attractant solution includes the extracellular matrix protein fibronectin to promote neutrophil surface adhesion. Preferably, the attractant solution is added shortly after performing plasma treatment and bonding the device so that the hydrophilic property of the microfluidic channels has not dissipated and can assist priming the channels through capillary effects. Other methods of rendering the inner surfaces of the device hydrophilic can also be used, and other materials that are inherently hydrophilic can be used to manufacture the device. The attractant solution can be added to the device by pipetting it into the input chamber with a gel-loading tip and around the circumference of the whole device.
The device 100 then is placed under vacuum. By applying a vacuum to the device 100, the attractant solution is forced completely into the fluidic channels and all chambers of the device 100. At the same time, the vacuum causes any air present in the channels or chambers to diffuse through the gas-permeable material of the top layer (e.g., the PDMS layer). This process removes air bubbles that would otherwise be present in the fluidic channels of the device, and which could potentially block the passage of cells through the baffle and migration channel. To establish the vacuum, the device can be placed into a desiccator, in which air pressure is reduced to a vacuum level of about 17-25 inches of water, for at least about 15 minutes.
In a second stage (FIG. 3B), the device 100 then is removed from the vacuum and the input chamber is washed to remove excess attractant solution. For example, the input chamber can be washed using a phosphate buffered saline (PBS) solution. The attractant solution in the baffle, migration channel, and attractant chamber remains in the device 100. After washing, the wells in which the devices 100 sit are filled with a fluid suspension (referred to as “cell culture media” in FIG. 3B) that is free of the attractant and allowed to sit for a specified time so that a stable attractant gradient can form between the attractant chamber and the input chamber. The exit channel is surrounded by the fluid suspension and therefore acts as a sink for the attractant. Since the gradient in the exit channel decreases away from the migration channel, normally functioning cells will not migrate toward the exit channel. The fluid suspension can include media solutions such as RPMI 1640 media.
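The attractant gradient that forms between the loaded attractant chamber and the washed input chamber can be pictured with a minimal one-dimensional diffusion sketch. All values here (grid size, diffusion coefficient, time step) are illustrative assumptions, not parameters from the disclosure:

```python
import numpy as np

def gradient_profile(n=101, d_coeff=1.0, dx=1.0, dt=0.2, steps=20000):
    """Explicit finite-difference model of 1-D diffusion along the
    migration channel: the attractant-chamber end is held at c = 1 and
    the washed input-chamber end at c = 0 (the sink)."""
    c = np.zeros(n)
    c[-1] = 1.0
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = c[2:] - 2.0 * c[1:-1] + c[:-2]
        c = c + (d_coeff * dt / dx**2) * lap
        c[0], c[-1] = 0.0, 1.0  # fixed boundary concentrations
    return c

profile = gradient_profile()
# After enough time the profile relaxes toward a linear gradient, so the
# midpoint concentration approaches 0.5.
```

At steady state the concentration falls off approximately linearly from the attractant chamber toward the input chamber, which is the stable gradient the cells follow.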
After establishing the attractant gradient, the fluid sample of interest is introduced into the input chamber 102 of the device in a third stage (FIG. 3C). For example, using gel-loading tips, samples of whole blood (or samples containing isolated motile cells) can be pipetted into the chamber 102. Once the fluid sample is in place in the chamber 102, static fluid pressure causes some of the cells of the sample to move, e.g., by granular flow (e.g., like grains of sand tumbling down an incline), into the passageways 110 of the baffle 106, where the desired motile cells begin migrating in the direction of the attractant gradient (see, e.g., FIG. 3D). Various properties of the motile cells that may be monitored in the device 100 include, for example, the absolute number of desired motile cells that reach the attractant chamber, the number of desired motile cells that reach the attractant chamber relative to the total number of cells (motile and/or non-motile) in the fluid sample, the rate at which the motile cells reach the attractant chamber, and/or the directionality of motile cells in the device.
In the case of whole blood, the RBCs will move according to a granular flow pattern. Once the RBCs reach the turns in the passageways 110, the cell movement will slow or stop and cause a backup of trailing cells. In contrast, healthy neutrophils in the sample will continue to follow the attractant gradient and squeeze through openings left by the RBCs in the passageways 110. The healthy neutrophils will then proceed through the migration channel 108 toward the attractant chamber 104. Unhealthy neutrophils may continue migrating into the exit channel 120 or never reach the attractant chamber 104. During the migration assays, the device 100 can be maintained at a temperature suitable for cell migration. For example, in the case of neutrophil migration, the device 100 can be placed in a biochamber heated to about 37° C. with a 5% CO2 atmosphere and 80% humidity to maintain the viability of the cells. The humidified environmental chamber can, in certain implementations, increase the observation duration by several hours.
FIG. 4 is a schematic of an example of a system 400 for analyzing the properties of motile cells following an attractant gradient, and includes a microfluidic device 100. The microfluidic device 100 includes a baffle for inhibiting the movement of undesired cells in a fluid sample relative to the movement of desired cells, a migration channel, and an attractant chamber for establishing an attractant gradient. The device 100 is held in a container 450, such as a petri dish or the like. The system 400 includes an imaging system 430 configured to capture images and/or video of the cell migration. For example, the imaging system 430 is configured to perform time-lapse imaging. An example of an imaging system suitable for use to record images of the cell migration is the Nikon Eclipse Ti microscope with 10-20× magnification.
The total time required to record the movement of the motile cells toward the attractant chamber depends on various factors, including the baffle passageway length, the migration channel length, the attractant being used, and the attractant gradient itself. Since neutrophils typically require about 5 minutes to begin migration in response to an attractant gradient, a minimum time necessary for monitoring the neutrophil motion with the system 400 is no less than approximately 10 minutes (e.g., for a neutrophil migration distance of about 150-200 μm at a migration rate of 18 μm/min). Longer monitoring times also may be required, depending on the nature of the particular assay being conducted. For example, the time to monitor motile cell movement (e.g., neutrophil movement) may be on the order of 20 minutes, 30 minutes, 40 minutes, 50 minutes, 1 hour, or longer. While time-lapse imaging is one particular way of characterizing the cell migration, one could also count the number of cells that reach the attractant chamber at the end of the assay, without the aid of time-lapse imaging.
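The minimum-monitoring-time arithmetic in the preceding paragraph can be captured in a one-line calculation. The onset delay and migration rate are the approximate figures quoted above; the 180 μm distance is a hypothetical example:

```python
def min_monitoring_time(distance_um, rate_um_per_min=18.0, onset_delay_min=5.0):
    """Lower bound on monitoring time: migration onset delay plus the
    time to traverse the given migration distance at the given rate."""
    return onset_delay_min + distance_um / rate_um_per_min

# A hypothetical 180 um migration path at 18 um/min with a 5 min onset delay:
total_min = min_monitoring_time(180.0)  # 15.0 minutes
```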
The system 400 further includes a computer system 435 that is operatively coupled to the imaging system 430. The computer system 435 can include a computer-readable storage medium (for example, a hard disk and the like) that stores computer program instructions executable by data processing apparatus (for example, a computer system, a processor, and the like) to perform operations. The operations can include controlling the imaging system 430 to capture images of the migration of cells through the device toward the attractant chamber. In addition, the computer system 435 can receive the captured images from the imaging system 430, and process the images to obtain various parameters, e.g., one or more of a migration speed of motile cells in a channel, a number of motile cells reaching the attractant chamber, and a directionality of the motile cells. Directionality of motile cells can be quantified by counting the number of motile cells that follow the attractant gradient into the attractant chamber, as opposed to the exit channel.
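As a sketch of the kind of image-derived parameter the computer system described above might compute, the following derives a mean migration speed from a sequence of tracked cell centroids. The function name and the sample track are illustrative assumptions, not part of the disclosure:

```python
import math

def migration_speed_um_per_min(track_xy_um, dt_min):
    """Mean path speed of one tracked cell: summed step lengths over a
    sequence of (x, y) centroid positions, divided by elapsed time."""
    path = sum(math.dist(a, b) for a, b in zip(track_xy_um, track_xy_um[1:]))
    return path / (dt_min * (len(track_xy_um) - 1))

# Hypothetical centroids sampled every 30 s, stepping 9 um per frame:
track = [(0.0, 0.0), (9.0, 0.0), (18.0, 0.0), (27.0, 0.0)]
speed = migration_speed_um_per_min(track, dt_min=0.5)  # 18.0 um/min
```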
In some implementations, the computer system 435 is configured to execute computer software applications that perform statistical analysis of the data captured by the imaging system 430. For example, the computer system 435 can be configured to perform multivariate analysis to determine correlations between neutrophil migration speed and clinical parameters. In some implementations, experiments to characterize the formation of gradients inside the device in the absence of motile cells can be performed by replacing all or portions of the attractant solution (for example, the fMLP) with a fluorescent agent (e.g., fluorescein) of comparable molecular weight, and analyzing the distribution and changes in fluorescence intensity from time-lapse imaging using the imaging system 430 and the computer system 435.
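Gradient characterization with a fluorescent surrogate, as described above, amounts to fitting the measured intensity profile along the channel and checking its linearity. A minimal sketch; the sample positions and intensities are invented for illustration:

```python
import numpy as np

def gradient_linearity(positions_um, intensities):
    """Least-squares line fit to a fluorescence intensity profile; returns
    (slope, r_squared). A stable linear gradient yields r_squared near 1."""
    slope, intercept = np.polyfit(positions_um, intensities, 1)
    pred = slope * np.asarray(positions_um, dtype=float) + intercept
    resid = np.asarray(intensities, dtype=float) - pred
    ss_res = float(np.sum(resid**2))
    ss_tot = float(np.sum((np.asarray(intensities, dtype=float)
                           - np.mean(intensities))**2))
    return float(slope), 1.0 - ss_res / ss_tot

pos = [0, 50, 100, 150, 200]   # um along the migration channel
inten = [10, 30, 52, 69, 91]   # arbitrary fluorescence units
slope, r2 = gradient_linearity(pos, inten)
```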
The microfluidic motility assays and methods described herein can be used in various applications. For example, the measurement of neutrophil directionality is important in patients at high risk for infection where directionality of neutrophils is known to be impaired, such as those with burn injuries or tissue trauma, patients undergoing chemotherapy, neonates in intensive care units, and/or diabetics. Impaired and/or over-stimulated neutrophils may migrate away from the site of the injury and therefore cause injury to healthy tissues. The devices disclosed herein provide a platform to analyze neutrophil behavior to determine the extent of damage to the neutrophils. For example, the devices can be used to determine the percentage of neutrophils that behave abnormally, as well as the particular type of abnormal behavior, such as failing to follow an attractant gradient or changes in migration rate. The devices can be used on samples of whole blood without requiring a separate isolation step for the neutrophils, thus reducing processing time. Additionally, the use of whole blood preserves the natural environment for neutrophils without inducing neutrophil activation. The devices can be designed to handle small quantities of fluid sample, e.g., samples having a volume of about 2 microliters, or about 1 microliter. The devices can be used with blood obtained from humans or animal subjects. Both reductions in processing time and reduced sample volume requirements are advantageous for clinical applications, where it may not be feasible to obtain larger amounts of sample fluid, e.g., in infants or small mammals.
In some implementations, the devices can be used to analyze efficacy of one or more medications on neutrophil activity. For example, a medication that affects, e.g., enhances, neutrophil motility can be administered to a subject (e.g., a patient having a burn injury or other tissue trauma) to vary the motility of neutrophils within the subject. The medication can include one or more of several modulators of neutrophil migration such as endogenous modulators (e.g., acetylcholine, interleukin-10 (IL-10), TNFalpha, interleukin-1 (IL-1), interleukin-6 (IL-6)), resolvins, lipoxins, or exogenous modulators (e.g., curcumin, lysophosphatidylglycerol, or cholinergic drugs). Blood samples then can be obtained from the patient once or periodically after administration of the drug. Using the devices and systems described herein, neutrophil activity can be analyzed to determine the drug's effect on neutrophil motility. In some situations, neutrophils obtained from a subject can be studied over a one week period, for example, at 48 hour intervals. To determine a long-term effect of an injury and treatment on the subject, the study period can be expanded to longer periods, e.g., six months, at regular intervals. With respect to neutrophils, an increase in the rate of migration observed over such time intervals may indicate wound healing. If the rate of migration of the neutrophils does not suggest wound healing, then a treatment can be altered to administer a different drug.
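The decision logic sketched in this paragraph (flag improvement when serial migration rates trend upward; otherwise consider altering treatment) can be expressed as a toy rule. The threshold and the sample series are purely hypothetical:

```python
def healing_trend(rates_um_per_min, min_increase=1.0):
    """Flag whether serial migration-rate measurements (e.g., one per
    48 h interval) show a net increase, which the text associates with
    wound healing; otherwise a change of treatment may be considered."""
    return rates_um_per_min[-1] - rates_um_per_min[0] >= min_increase

weekly = [6.0, 9.5, 14.0, 17.5]  # hypothetical um/min over follow-up
improving = healing_trend(weekly)  # True: rates are rising
```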
The device 100 shown in FIG. 1A includes multiple attractant chambers 104 coupled to a single input chamber 102. Accordingly, the device 100, or similar devices having multiple attractant chambers, can be loaded with multiple attractants (e.g., a different attractant for each attractant chamber) to establish different attractant gradients. Using the systems disclosed herein, one can then analyze the responsivity of motile cells to the different attractant gradients. One can thus quantify a person's neutrophil functionality after injury or infection (or any perturbation to the immune system), as well as measure the efficacy of potential drugs or treatments.
Further characterization of neutrophil motility using the microfluidic devices described herein can have important diagnostic implications not only for burn patients, but also for patients afflicted by other diseases that compromise neutrophil functions. For example, the device can be applied to analyze neutrophil motility in pediatric patients to identify patients who are at a higher risk for certain diseases. In transplantation, the device can be used to analyze neutrophil motility to determine if there is a correlation between neutrophil motility under medication and the occurrence of complications, for example, infections and rejections. By determining a range of neutrophil motilities that correlates with low infection rates and at which immune functions are not suppressed, it may be possible to vary the quantities of immunosuppressant medication being administered to patients.
In some implementations, the devices can be used to screen hundreds, 1000s, 10,000s, or 100,000s of small molecules or other chemical agents for their effect on motile cell motility, e.g., on neutrophil chemotaxis. That is, the devices can be used to screen such compounds to see which, if any, have an effect on cell motility, and the degree to which motile cells are affected.
The invention is further described in the following examples, which do not limit the scope of the invention described in the claims.
To understand how a group of RBCs moves through a baffle passageway, a 2-D finite-element model was employed using the COMSOL Multiphysics® software program, which performed biophysical modeling of chemoattractant diffusion in the device. The model was based on a strong solid-fluid coupling, which allows the incorporation of deformable solid bodies (e.g., RBCs) in fluid-filled channels. The channel geometries and initial RBC positions were inputted into a custom-built finite-element software package, PAK, and run on a desktop supercomputer consisting of 32 cores (Supermicro Super Server: 4× Eight-Core Intel Xeon Processor 2.70 GHz; 512 GB total memory). Using the model, the granular movement of RBCs through small channels was simulated. This movement is a result of the mutual interaction of many RBCs in the whole blood loading chamber, which pushes the RBCs at the periphery into the connected microchannels. To simulate this force and induce movement, we assumed that the top-most RBC in the channel experiences an external force in the y-direction (equal to 1/(50 g), where g is the gravitational acceleration constant; this was equivalent to a stack of twelve RBCs with 5% higher density than that of the media pushing one RBC into the channels). The other RBCs below the top-most RBC had no externally applied forces.
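The magnitude of the simulated push can be estimated from first principles. Assuming a textbook RBC volume of roughly 90 fL and a media density of roughly 1000 kg/m3 (both assumptions, not values stated in the example), a stack of twelve RBCs that are 5% denser than the media weighs on the order of half a piconewton:

```python
G = 9.81            # m/s^2, gravitational acceleration
V_RBC_M3 = 90e-18   # ~90 fL per RBC (a textbook value, not from the text)
RHO_MEDIA = 1000.0  # kg/m^3, assumed density of the suspending media

def stack_push_force_newtons(n_cells=12, density_excess=0.05):
    """Net gravitational push from a stack of RBCs that are 5% denser
    than the surrounding media: the force applied to the top-most RBC
    in the simulated channel."""
    return n_cells * V_RBC_M3 * density_excess * RHO_MEDIA * G

force_pN = stack_push_force_newtons() * 1e12  # roughly half a piconewton
```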
FIGS. 5A-5D are schematics depicting the different stages of RBCs moving through a channel from an input chamber and subsequently traversing a 90° turn. Three different passageway widths (9, 12, and 14 μm) were examined. Each RBC was assumed to have a diameter of 7.5 μm. FIG. 5A shows the initial configuration of 10 RBCs in the entrance segment of a baffle passageway where the passageway width is 9 μm. The scale bar on the left of FIG. 5A is a color scale corresponding to different internal stresses that may be experienced by the RBCs during migration. FIG. 5B depicts the final steady-state configuration inside the 9 μm channel. FIG. 5B shows that as soon as one RBC is pushed into the corner, it substantially blocks the advance of all RBCs behind it by restricting the cross section of the passageway to less than one RBC diameter. Only two RBCs have moved past the corner. This blocking strategy is successful because the force pushing the RBCs is not enough to deform the cells through the corner of the passageway. FIG. 5C depicts the final steady-state configuration for a 12 μm channel after the same initial configuration shown in FIG. 5A. FIG. 5D depicts the final steady-state configuration for a 14 μm channel after the same initial configuration shown in FIG. 5A. The results in FIGS. 5C-5D demonstrate that reaching a stable blocking configuration is less likely in larger channels, as progressively more RBCs traverse the corner as the channel width increases.
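A crude rigid-geometry check is consistent with these simulation results if one assumes that a trailing RBC can deform down to a minimum width of about 2.5 μm (an illustrative figure based on the typical thickness of the RBC disc, not a value from the example):

```python
RBC_DIAMETER_UM = 7.5   # disc diameter assumed in the simulation
RBC_MIN_WIDTH_UM = 2.5  # assumed minimum width of a deformed RBC disc

def corner_blocks(passage_width_um):
    """Crude geometric check: with one RBC lodged in the corner, is the
    remaining clearance too small even for a deformed trailing RBC?"""
    clearance = passage_width_um - RBC_DIAMETER_UM
    return clearance < RBC_MIN_WIDTH_UM

results = {w: corner_blocks(w) for w in (9, 12, 14)}
# {9: True, 12: False, 14: False} -- only the 9 um passageway blocks,
# matching the simulated trend.
```

This is only a plausibility check; the finite-element model captures the deformation and cell-cell interactions that the one-line geometry test ignores.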
To analyze the effectiveness of the baffle design containing restricted passageway cross-sections and turns for blocking RBCs versus that of a straight channel, one of each different device design was fabricated and tested with fMLP and LTB4 chemoattractants. The device containing the baffle was also analyzed for its ability to allow neutrophil migration.
Device Fabrication
Each microfluidic device was designed with three main components: chemokine side chambers (200×200 μm), a central whole-blood loading chamber, and migration channels that did or did not contain an RBC baffle region. The device containing the baffle included 10 short microfluidic channels (length ˜75 μm) connected horizontally through an approximately 200-μm-long channel to create several 90° bending sections capable of trapping the RBCs in order to prevent them from dispersing into the rest of the migration channel. All migration channels were designed to be 12 μm wide and 3 μm high.
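For reference, the stated dimensions can be gathered into a small structure; the class and its field names are organizational conveniences, not part of the described design:

```python
from dataclasses import dataclass

@dataclass
class DeviceGeometry:
    """Key dimensions quoted in the fabrication description above."""
    chamber_side_um: float = 200.0       # chemokine side chambers, 200 x 200 um
    baffle_channel_len_um: float = 75.0  # ~length of each short baffle channel
    connector_len_um: float = 200.0      # horizontal connecting channel
    channel_width_um: float = 12.0       # all migration channels
    channel_height_um: float = 3.0
    n_baffle_channels: int = 10

    def channel_cross_section_um2(self):
        return self.channel_width_um * self.channel_height_um

geom = DeviceGeometry()
area = geom.channel_cross_section_um2()  # 36.0 um^2 per migration channel
```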
The microfluidic devices were produced by replica molding polydimethylsiloxane (Sylgard 184, Ellsworth Adhesives, Wilmington, Mass.) on a master wafer fabricated using standard photolithographic technologies with Mylar photomasks (FineLine Imaging, Colorado Springs, Colo.). After curing for at least 3 hours in an oven set to 65° C., the PDMS layer covering the master was peeled off and holes were punched. First, the central loading chamber was punched using a 1.5 mm puncher, and then a 5 mm puncher was used to cut out the entire donut-shaped device (Harris Uni-Core, Ted Pella Inc., Redding, Calif.). A 12-well plate was then plasma treated along with the PDMS donut-shaped devices and bonded on a hot plate set to 85° C. for 10 minutes.
Whole Blood Handling
Capillary blood (50 μL) was collected by pricking a finger of healthy volunteers. The blood was then pipetted into an Eppendorf tube containing a mixed solution of HBSS media, heparin anti-coagulant (1.65 USP/50 μL of blood), and Hoechst stain (10 μL, 32.4 μM). The Eppendorf tube was then incubated for 10 minutes at 37° C. and 5% CO2 to allow for proper staining of the nuclei. Afterwards, 50 μL of the finger prick blood was pipetted into media containing the same Hoechst stain concentration as previously described and incubated for 10 minutes at 37° C. and 5% CO2. Using a gel-loading tip, 2 μL of whole blood was slowly pipetted into the central input chamber of the device.
Establishing Chemoattractant Gradient
For the devices in which neutrophil migration was to be observed, the donut-shaped devices were filled with the chemoattractant solution of N-formyl-methionyl-leucyl-phenylalanine (fMLP) [100 nM] (Sigma-Aldrich, St. Louis, Mo.) or Leukotriene B4 (LTB4) [100 nM] (Cayman Chemical, Ann Arbor, Mich.) immediately after the PDMS donut-shaped device was bonded to the well plate. The chemoattractant solution also contained fibronectin [25 nM] (Sigma-Aldrich, St. Louis, Mo.) to promote neutrophil surface adhesion. The chemoattractant was pipetted into the whole blood loading chamber (WBLC) and directly around the circumference of the device. The glass-bottom 12-well plate was then placed in a desiccator to de-gas for 15 minutes to ensure proper filling of the chambers while the PDMS surface was still hydrophilic from plasma treatment. Afterwards, the central whole-blood loading chamber and the outside region surrounding the donut were washed thoroughly with Phosphate Buffered Saline (PBS) in each well to wash away excess chemoattractant. The wells of the plate were then filled with RPMI 1640 media and allowed to sit for a period of 15 minutes to generate stable chemoattractant gradients.
Results
Time-lapse imaging was performed on a Nikon Eclipse Ti microscope with 10-15× magnification and a biochamber heated to 37° C. with 5% CO2 and 80% humidity. For each experiment in which an attractant gradient was established, at least 50 neutrophils were manually tracked.
FIG. 6A is a bright-field (BF) image of the device as fabricated above without a baffle to filter the RBCs. RBCs are seen to clog the migration channel and contaminate the attractant chamber (see arrows). FIG. 6B is a BF image of the device as fabricated above with a baffle to filter the RBCs. The incorporation of the RBC baffle upstream of the migration channel significantly reduces RBC contamination in the attractant chamber by 63% and eliminates RBCs at the channel exit. Thus, the baffle containing the comb design proved more efficient in blocking the entrance of RBCs into the migration channels compared to the straight channels.
FIGS. 7A-7H are time-lapse images of the device containing the comb-shaped baffle, where a LTB4 chemoattractant gradient was established between the attractant reservoir and the input chamber. As shown in FIGS. 7A-7H, few RBCs are able to enter the migration channel, while a neutrophil is able to migrate past the RBCs into the migration channel (see arrow in FIG. 7H). These results demonstrated that the microfluidic device is capable of inhibiting the movement of the RBCs relative to that of the neutrophils, without causing perturbations in cell motility or directionality. Integration of the on-chip baffle in the novel microfluidic device removes RBCs from actively migrating neutrophils and circumvents the need for cumbersome cell separation methods such as density gradients, positive selection, or negative selection, which are prone to introducing artifacts by activating neutrophils. The results also show that the microfluidic device produces a stable linear chemoattractant gradient without the need for peripherals such as an outside pressure source (e.g., a syringe pump).
The microfluidic devices were also validated by loading whole blood from finger prick and venous sources, as well as isolated neutrophils, toward the chemoattractant fMLP. Specifically, neutrophil migration was analyzed for whole blood from the finger prick, from the venous blood source, and from the isolated neutrophils. The devices were fabricated and the fMLP chemoattractant gradient was established as explained above in Example 2.
Whole Blood Handling
Capillary blood (50 μL) was collected by pricking a finger of healthy volunteers. The blood was then pipetted into an Eppendorf tube containing a mixed solution of HBSS media, heparin anti-coagulant (1.65 USP/50 μL of blood), and Hoechst stain (10 μL, 32.4 μM). The Eppendorf tube was then incubated for 10 minutes at 37° C. and 5% CO2 to allow for proper staining of the nuclei. For venous blood samples, 10 mL of peripheral blood was drawn from a healthy volunteer into tubes containing 33 USP heparin (Vacutainer, Becton Dickinson, Franklin Lakes, N.J.). Afterwards, 50 μL of the blood was pipetted into media containing the same Hoechst stain concentration as previously described and incubated for 10 minutes at 37° C. and 5% CO2. Using a gel-loading tip, 2 μL of whole blood was slowly pipetted into the central input chamber of the device.
Neutrophil Isolation
To compare the whole blood results with neutrophil migration from an isolated sample, we also isolated human neutrophils from whole blood using HetaSep followed by the EasySep Human Neutrophil Enrichment Kit (STEMCELL Technologies Inc., Vancouver, Canada) following the manufacturer's protocol. The final aliquots of neutrophils were re-suspended in 1×HBSS+0.2% human serum albumin (Sigma-Aldrich, St. Louis, Mo.) at a density of ˜40,000 cells/μL and kept at 37° C. until the devices were properly primed.
Results
Time-lapse imaging was performed on a Nikon Eclipse Ti microscope with 10-15× magnification and a biochamber heated to 37° C. with 5% CO2 and 80% humidity. For each experiment in which an attractant gradient was established, at least 50 neutrophils were manually tracked over a period of 200 min. Directionality of primed neutrophils was quantified by counting the number of cells that followed the chemotactic gradient and turned at the bifurcation toward the chemoattractant chamber, as opposed to the number of cells that exited the device to the peripheral region. Cell velocities were calculated using ImageJ (NIH), and data analysis was performed with GraphPad Prism.
FIG. 8A is a plot of neutrophil migration counts among the venous blood source, the finger prick blood source, and isolated neutrophils. A similar delay time, accumulation rate, and final cell count are observed in all three conditions migrating to fMLP [100 nM]. Graphs correspond to average cell counts (n=16) in all attractant chambers. Neutrophils from the two whole blood sources, as well as isolated neutrophils, began migrating towards the fMLP [100 nM] gradient within 20 minutes, and neutrophil accumulation numbers from all three sources were consistent around 135±20 cells/device after 3.5 hours (P-value<0.001, R²=0.98). The rate of neutrophil accumulation remained constant for the length of the 200 min experiment in all blood sources. As shown in FIG. 8A, we observed higher variability of neutrophil counts with the finger prick blood source compared to the venous or isolated neutrophil sources, which may reflect higher heterogeneity of the neutrophil population in the capillaries compared to the whole blood. These results suggest that the device will operate just as well as devices that require neutrophil isolation, but without the need for the preliminary isolation step.
We then measured variability among healthy donors in neutrophil migration from the whole blood finger prick source towards fMLP [100 nM]. FIG. 8B is a plot of neutrophil migration counts per chamber compared for the 7 healthy volunteers. Of interest, for 5 out of 7 donors, neutrophil migration counts clustered tightly around the average of 92±35 cells after 200 min. However, two donors had significantly higher neutrophil migration counts, which may be representative of the variation in innate immune response in the human population.
To determine device-device variation, we loaded whole blood from a finger prick of a healthy donor into 6 separate devices and quantified neutrophil accumulation to a gradient of fMLP [100 nM]. FIG. 8C is a plot of the average neutrophil migration counts for the fMLP gradient in the 6 separate devices. The device-device variation was 14.1% (83±12 cells after 200 min) over the duration of the experiment.
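Device-device variation of this kind is a coefficient of variation: the sample standard deviation of per-device counts over their mean. A sketch with hypothetical counts (not the measured data reported above):

```python
import statistics

def coefficient_of_variation(counts):
    """Device-to-device variation as (sample SD / mean) x 100%."""
    return statistics.stdev(counts) / statistics.mean(counts) * 100.0

# Hypothetical neutrophil counts from 6 devices loaded with the same sample:
counts = [70, 80, 78, 95, 90, 85]
cv_percent = coefficient_of_variation(counts)  # ~10.8% for these values
```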
We also established a healthy donor baseline, measuring neutrophil accumulation from finger prick whole blood from the same healthy donor at one week intervals for a total of three weeks. FIG. 8D is a plot of the baseline for neutrophil migration in the healthy donor over the three week period. The experiment yielded equivalent neutrophil accumulation values, thus suggesting high experimental reproducibility as well as a consistent accumulation baseline for the same volunteer. The confirmation of a consistent baseline of neutrophil recruitment in a healthy volunteer suggests that perturbations in this baseline could represent significant clinical changes in the innate immune response, such as injury, infection, or dysfunction, that would be useful in diagnosis or prediction of future clinical outcomes.
Neutrophil migration toward different attractants (fMLP and LTB4) was also examined. The devices were fabricated and the fMLP and LTB4 chemoattractant gradients were established as explained above in Example 2. The concentration of each chemoattractant was varied across three levels: 10 nM, 50 nM, and 100 nM. Solutions containing fibronectin [25 nM] and solutions without fibronectin were prepared. Finger prick blood and venous blood were obtained as described above in Example 3. Velocity measurements were obtained as explained above in Example 3 using time-lapse imaging. Neutrophil migration toward the fMLP and the LTB4 chemoattractants was compared against a control device, in which no chemoattractant gradient was established.
Results
FIG. 9A is a plot of the dose-response of neutrophils migrating out of whole blood to the LTB4 and fMLP chemoattractants. The plots correspond to an average cell count reached in 16 different attractant chambers. As shown in FIG. 9A, the neutrophils do not migrate in the absence of a chemoattractant gradient. Overall, increased cell migration was observed at higher concentrations. Maximal cell recruitment was observed at the 100 nM concentration in the attractant chamber of the microfluidic device. As shown in FIG. 9A, a gradient of fMLP [100 nM] recruited neutrophils from finger prick whole blood at a two-fold higher count (86±7 cells/device after 200 min) than LTB4 (39±12 cells/device after 200 min). FIG. 9B is a plot of neutrophil velocity from finger prick whole blood in response to fMLP and LTB4. As shown in FIG. 9B, neutrophil velocity was comparable for both fMLP (19±6 μm/min) and LTB4 (20±7 μm/min). Neutrophil velocities were also consistent between venous whole blood and finger prick whole blood for both fMLP and LTB4 chemoattractants.
The bifurcation in the microfluidic device design (i.e., where the exit channel splits off from the migration channel; see FIG. 1A) allows for the quantification of neutrophil directionality by comparing the number of cells that migrated towards the chemoattractant gradient to the number of cells that became “lost” and exited the device. This directional index is clinically relevant, as it provides a quantitative measurement of correct neutrophil response to a site of injury or infection. The “lost” or non-directional neutrophils would potentially migrate and cause unnecessary damage to healthy tissue or organs. FIG. 9C is a plot that shows a directionality index for neutrophils in response to both fMLP and LTB4. Directionality is calculated as the ratio of the cells that correctly follow the attractant gradient to the attractant chamber divided by the total number of cells (cells that migrate to the attractant chamber plus cells that exit the device). As shown in FIG. 9C, neutrophils from finger prick and venous healthy donor whole blood sources have a directionality index greater than 0.9 for both fMLP and LTB4.
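The directionality index defined in this paragraph reduces to a simple ratio; the example counts below are hypothetical:

```python
def directionality_index(n_to_attractant, n_exited):
    """Directionality as defined in the text: cells reaching the attractant
    chamber divided by all migrating cells (chamber plus exit channel)."""
    total = n_to_attractant + n_exited
    return n_to_attractant / total if total else 0.0

# e.g., 46 of 50 tracked cells turning toward the chemoattractant:
di = directionality_index(46, 4)  # 0.92, within the >0.9 healthy-donor range
```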
The effect of fibronectin inclusion in the chemoattractant solution was also analyzed. Fibronectin promotes neutrophil adherence and acts as a blocking agent (in addition to 0.2% human serum albumin) for the glass surface of the device. FIG. 9D is a plot that shows neutrophil count in the fMLP attractant chamber for pin prick and venous blood in response to attractant with and without fibronectin. As shown in FIG. 9D, fibronectin does not appear to change final neutrophil counts migrating towards fMLP. The foregoing results demonstrate that the device is suitable for comparing motile cell response to different attractant gradients.
We also utilized the novel microfluidic device to monitor neutrophil chemotaxis function in a burn patient.
Blood Samples
Blood samples of 1 mL were collected from one burn patient suffering from a 24% total body surface area (TBSA) burn. Procedures for fabricating the device, preparing the chemoattractant gradient, and performing time-lapse imaging were conducted as explained above with respect to Examples 2-4.
Results
Neutrophil chemotaxis was monitored over a 3 week treatment period. FIG. 10A is a plot of neutrophil migration counts in the attractant chamber with no chemoattractant gradient, with fMLP chemoattractant gradient [100 nM], and with LTB4 chemoattractant gradient [100 nM]. FIG. 10B is a plot of velocity [μm/min] of neutrophils migrating to LTB4 compared with fMLP over a three week period. FIG. 10C is a plot of directionality index of neutrophils migrating to LTB4 compared with fMLP. All graphs correspond to average cell counts across 16 attractant chambers.
As shown in FIG. 10A, there was an order of magnitude decrease in neutrophil cell count compared with the average range (shown with dotted line) of a healthy volunteer 1 week after the burn injury. Moreover, as shown in FIGS. 10B and 10C, we observed a 75% reduction in neutrophil velocity and a 50% reduction in directionality in neutrophils migrating toward a fMLP [100 nM] gradient. The number of cells accumulating towards fMLP spiked from below normal values to 15% above the normal healthy volunteer range at two weeks post-burn, which corresponded to a period when the patient was observed to have a fever. At two weeks post-burn, neutrophil velocity remained impaired in both fMLP and LTB4 conditions (60% and 40% reduction, respectively), but neutrophil directionality had been restored to the range of healthy volunteers. Three weeks post-burn, neutrophil cell counts to fMLP were lower than the average healthy volunteer count, whereas LTB4 accumulation counts were in the normal range. Velocity and directionality were both restored to the normal range 3 weeks post-burn. These results demonstrate that the microfluidic device may be useful for monitoring variation in cell motility of subjects over prolonged periods.
Animal models of human disease differ in innate immune responses to stress, pathogens, or injury. Current technologies for measuring neutrophil phenotype prevent precise inter-species comparisons because they require the separation of neutrophils from blood using species-specific protocols. For example, current neutrophil separation methods, developed originally for human donors, require large volumes of blood and are less suitable for mice due to their significantly lower circulatory volume. Therefore, many studies on mouse neutrophils are done with bone marrow cells. However, bone marrow neutrophils appear to be heterogeneous and functionally immature. Furthermore, standard negative enrichment of neutrophils includes a lengthy (3-hour) protocol during which the neutrophil phenotype can change. Moreover, antibody cocktails for neutrophil isolation are less specific for mouse than for human, and the activation levels of neutrophils affect the purity and yield. By using the novel microfluidic device described herein, however, we performed a robust characterization of neutrophil migratory phenotypes from different species directly from a droplet of whole blood. In particular, using the new device, neutrophil measurements were performed from minute volumes (less than 2 μL) of whole blood (WB), from various species donors (rat, mouse, and human), with high precision and single-cell resolution.
Microfluidic Device Fabrication
The microfluidic device to study mouse, rat, and human neutrophil chemotaxis from one droplet of whole blood was designed with three main components: focal chemoattractant chambers (FCCs) (200×200 μm), a central whole-blood loading chamber, and migration channels containing RBC filtering regions. The filter for each migration channel included 10 short channels (length ~75 μm) with a 3.5 μm narrowing region (‘pinch’) connected horizontally through an approximately 200-μm-long channel to create 90° bending sections capable of trapping the RBCs to prevent them from dispersing into the rest of the migration channel. All migration channels were designed to be 12 μm wide and 3 μm high to establish only a single column of RBCs for efficient trapping while allowing active mouse, rat, and human neutrophils to easily migrate through. The devices were fabricated as described above in Example 2.
Whole Blood Sample Collection
For humans, 50 μL of capillary blood was collected by pricking a finger of healthy volunteers. For mice, 50 μL of capillary blood was collected by the facial vein method (Institutional animal care and use Protocol #2007N000136) requiring no anesthesia. For rats, 50 μL of venous blood was collected from the tail vein (Institutional animal care and use Protocol #2012N000034) using 1-2% Isoflurane inhalant. The blood was then pipetted into an Eppendorf tube containing a mixed solution of HBSS media, heparin anti-coagulant (1.65 USP/50 μL of blood), and Hoechst stain (10 μL, 32.4 μM). The Eppendorf tube was then incubated for 10 minutes at 37° C. and 5% CO2 to allow for proper staining of the nuclei.
Device Priming and Cell Loading
All reagents and whole blood were pipetted into the device; there was no flow and no requirement for an external syringe pump. A gradient of the chemoattractant was established along the migration channels by diffusion between the chemoattractant chambers and the central loading chamber. Prior to cell loading and immediately after bonding to the well plate, donut-shaped devices were filled with the chemoattractant solution containing 25 nM of fibronectin (Sigma-Aldrich, St. Louis, Mo.). The well plate was then placed in a desiccator under vacuum to de-gas for 15 minutes to ensure proper filling of the chambers while the PDMS surface was still hydrophilic. Afterwards, the central whole-blood loading chamber and the outside region surrounding the donut were washed thoroughly in each well to establish the gradient along the migration channels. The wells of the plate were then filled with RPMI 1640 media and allowed to sit for a period of 15 minutes to generate stable chemoattractant gradients. Finally, using a gel-loading tip, 2 μL of whole blood was slowly pipetted into the central whole-blood loading chamber.
Chemotaxis Imaging and Measurements
Time-lapse imaging was performed on a Nikon Eclipse Ti microscope with 10-15× magnification and a biochamber heated to 37° C. with 5% CO2 and 80% humidity. Separate experiments to characterize the formation of gradients along the migration channels in the absence of cells were performed under similar temperature and gas conditions, but by replacing the chemoattractant with fluorescein (Sigma-Aldrich, St. Louis, Mo.) of molecular weight comparable to that of fMLP (MW=438) and LTB4 (MW=336). For each experiment, at least 50 neutrophils were manually tracked. Directionality of primed neutrophils was quantified by counting the number of cells that followed the chemotactic gradient and turned at the bifurcation toward the chemoattractant chamber, as opposed to the number of cells that exited the device to the peripheral region. Cell velocities were calculated using ImageJ (NIH). The total percentage of neutrophils that migrated was estimated using a COMSOL simulation model, which estimated that 30.6% of the area in the whole-blood loading chamber, from which neutrophils could migrate in the experimental time, was above the critical gradient concentration. For an average human experiment, this would correspond to an estimated ~277 neutrophils per well exposed to the chemoattractant gradient.
Results
An important feature that enabled the use of WB directly in the microfluidic device was the red blood cell (RBC) filter. Murine RBCs (average diameter=6 μm, thickness=1 μm) are of smaller geometry than human RBCs (average diameter=7-8 μm, thickness=2 μm). The RBC filter combined flat channels, a comb of 90° angles, and a 3.5 μm ‘pinch’ of square cross section to prevent the granular flow of RBCs into the neutrophil migration channels. RBCs, pushing on each other under the effect of gravity, were mechanically blocked at the entrance of the channels and remained confined inside the central loading chamber. The neutrophil migration channels remained clear because the blocked RBCs do not clog the channel, and sufficient space remains between the RBC membrane and the channel walls to allow chemokine diffusion. Thus, the formation of neutrophil-guiding gradients from the WB to the FCC, along the migration channel, was unperturbed. Neutrophils were able to actively deform and migrate through the pinch, which assured the selectivity of the assay by preventing the migration of lymphocytes and monocytes, which deform less and require larger channels for migration. The selectivity was verified by observing the characteristic polymorph shape of the nucleus of moving cells. Once the neutrophils passed the pinch, they continued to follow the chemoattractant gradient along the migration channel and entered the FCC. The bifurcation in the channel created a ‘decision point’ where neutrophils can migrate toward or away from the chemoattractant gradient, providing critical information about their directionality.
To compare neutrophil migration phenotype between species, we measured chemotaxis to two standard chemoattractants (fMLP and LTB4) in humans, C57BL/6 and Sv129S6 mice, and Wistar rats. Human neutrophils in 2 μL whole blood samples migrated towards fMLP (55.8±19.8%) and LTB4 (54.0±5.6%) (see FIG. 11A). A lower percentage of Wistar rat neutrophils migrated to fMLP and LTB4 (10.9±8.5 and 2.8±1.1%, respectively). Surprisingly, LTB4 was the only chemoattractant able to induce significant neutrophil migration in all mouse neutrophils tested: Sv129S6 (99.5±13%) and C57BL/6 (52.7±24%). The velocity of C57BL/6 neutrophils towards fMLP (13±7.4 μm/min) was ~2.5-fold lower than that of Sv129S6 (19.7±8.7 μm/min), rat (26.2±5.9 μm/min), or human (23.4±5.4 μm/min) neutrophils (see FIG. 11B). The directional index of C57BL/6 neutrophils toward fMLP (0.61±0.07) was comparable to that of rat neutrophils (0.62±0.12) and significantly lower than that of Sv129S6 (0.82±0.06) or human (0.85±0.06) neutrophils (see FIG. 11C). The velocity and directionality deficits in C57BL/6 neutrophils have multiplicative effects and suggest less effective neutrophil migration upon stimulation.
The directionality of Sv129S6 mouse neutrophils towards C5a was lower than in humans, while directionality towards LTB4 was comparable (FIGS. 12C-D). C5a activation of mouse neutrophils led to random migration. These patterns of neutrophil migration in response to C5a are consistent with migration patterns reported previously from isolated human neutrophils. More human neutrophils migrated in response to C5a in the FCC than mouse neutrophils (26.8±5.9% versus 1±0.8%, FIG. 12A), and more human neutrophils were activated by C5a compared to Sv129S6 mouse neutrophils (63.3±7.6% versus 18.3±3%, FIG. 12B).
The results demonstrate fundamental differences in neutrophil migratory responses between mice, rats and humans. Amongst hundreds of laboratory mouse strains available, two-thirds of all murine research is undertaken with the C57BL/6J (B6) strain (compared to 1% for Sv129S6) because of its robustness and availability of congenic strains. As therapies for human diseases become specifically targeted, it is increasingly important to further understand mouse strain differences in innate immune function and to wisely choose a mouse strain that most accurately models the human response to disease or drug therapeutic interventions. Using the new microfluidic device described herein, our results show that strain differences in neutrophil migratory function between common laboratory mouse models are significant and must be considered when selecting the appropriate model to mimic human infection or inflammation.
The novel device and techniques described herein were used to allow precise measurement of neutrophil chemotaxis from micro-volume samples of murine, rat, and human blood in the same conditions and following the same sample preparation protocols. The assay was performed in the presence of all blood components, and was highly multiplexed. The results demonstrate that the novel device and techniques may be used by researchers to understand species and mouse strain differences in neutrophil migratory phenotypes from conscious animals over time. Compared to traditional methods (e.g., the transwell assay), the device avoids lengthy neutrophil isolation steps, uses micro-volume amounts of blood from conscious animals, which allows for repeated measures without potential confounding effects of anesthetic drugs, and provides single-cell-resolution information regarding neutrophil directionality and speed. The novel microfluidic device and techniques described herein also have two specific advantages compared to recent techniques that rely on neutrophil capture from blood by selective adhesion (e.g., P-selectin). First, by avoiding the cell washing steps, the new device preserves the integrity of the blood sample, and with it important cues that may modulate neutrophil activity from serum or other cells in the whole blood. Second, by relying on physical (channel geometry) rather than biological mechanisms (selectins or endothelial cells) to achieve selectivity for neutrophils, the device eliminates the artificial activation of neutrophils via capture mechanisms and the need for species-matched capture molecules. The requirement of small numbers of cells is particularly advantageous when studying mice, where blood volumes are limited.
The novel device and techniques described herein also eliminate the necessity of pooling blood from several animals and permit repeated single-animal neutrophil phenotype measurements over time, potentially enabling the monitoring of disease progression and/or therapeutic responses. Measuring neutrophil migration in the whole blood native microenvironment mimics the in vivo, holistic animal response more accurately.
It is to be understood that while the invention has been described in conjunction with the detailed description thereof, the foregoing description is intended to illustrate and not limit the scope of the invention, which is defined by the scope of the appended claims. Other aspects, advantages, and modifications are within the scope of the following claims.
DESCRIPTION OF DRAWINGS
FIG. 1A is a schematic example of a microfluidic device as described herein.
FIG. 1B is a close-up view of a portion of the device of FIG. 1A.
FIG. 2 is a schematic depicting an example of a baffle passageway cross-section.
FIGS. 3A-3D depict stages of microfluidic assay preparation.
FIG. 4 is a schematic of an example of a system for analyzing an assay performed in a microfluidic device.
FIGS. 5A-5D are schematics depicting the different stages of red blood cells moving through a channel from an input chamber of a microfluidic device and subsequently traversing a 90° turn.
FIG. 6A is a bright-field (BF) image of a microfluidic device without a baffle to filter red blood cells.
FIG. 6B is a BF image of a microfluidic device fabricated with a baffle to filter red blood cells.
FIGS. 7A-7H are time-lapse images of a microfluidic device through which red blood cells and neutrophils migrate.
FIG. 8A is a plot of neutrophil migration counts for neutrophils from a venous blood source, a finger prick blood source, and for isolated neutrophils.
FIG. 8B is a plot of neutrophil migration counts per attractant chamber for whole blood samples obtained from seven different subjects.
FIG. 8C is a plot of average neutrophil migration counts in six separate microfluidic devices.
FIG. 8D is a plot of baseline neutrophil migration in a healthy subject over a three week time period.
FIG. 9A is a plot of the dose-response of neutrophils migrating out of whole blood to the leukotriene B4 (LTB4) and N-formyl-methionyl-leucyl-phenylalanine (fMLP) chemoattractants.
FIG. 9B is a plot of neutrophil velocity in response to LTB4 and fMLP chemoattractants for venous blood and finger prick blood.
FIG. 9C is a plot that shows a directionality index for neutrophils in response to both fMLP and LTB4.
FIG. 9D is a plot that shows a neutrophil count in the fMLP attractant chamber for pin prick and venous blood in response to attractant with and without fibronectin.
FIG. 10A is a plot of neutrophil migration counts in an attractant chamber with no chemoattractant gradient, with fMLP chemoattractant gradient, and with LTB4 chemoattractant gradient.
FIG. 10B is a plot of velocity of neutrophils migrating to LTB4 compared with fMLP over a three week period.
FIG. 10C is a plot of a directionality index of neutrophils migrating to LTB4 compared with fMLP.
FIG. 11A is a plot of the percentage of cells that have migrated to LTB4 compared with fMLP for human, rat, and mouse cell strains.
FIG. 11B is a plot of cell migration velocity for each different cell strain.
FIG. 11C is a plot of directionality index for each different cell strain.
FIG. 12A is a plot of the percentage of human and mouse cells that have migrated in response to C5a.
FIG. 12B is a plot of the percentage of neutrophil cells activated in response to C5a.
FIG. 12C is a directionality plot of human and mouse cells towards C5a.
FIG. 12D is a directionality plot of human and mouse cells towards LTB4.
Innovation has become a key factor for economic growth, but how does the process take place at the level of individual firms? This book presents the main results of the OECD Innovation Microdata Project -- the first large-scale effort to exploit firm-level data from innovation surveys across 20 countries in an internationally harmonised way, with a view to addressing common analytical questions of great importance to policy makers who seek to promote innovation.
These issues include:
Through the use of common indicators and econometric modeling, this analytical report presents a broad overview of how firms innovate in different countries, highlights some of the limitations of current innovation surveys, and identifies directions for future research.
Innovation in Firms is part of the OECD Innovation Strategy, a comprehensive policy strategy to harness innovation for stronger and more sustainable growth and development, and to address the key global challenges of the 21st century.
Introduction
How do we measure innovation? | Microdata: what more can they tell us? | Exploiting the potential of microdata: a comparative project | Exploiting innovation surveys: lessons learned
Chapter 1. Innovation Indicators
Introduction | Rationale and methodology | Simple indicators | Composite indicators | Conclusions | Annex: Statistical tables
Chapter 2. Exploring Non-technological and Mixed Modes of Innovation Across Countries
Introduction | Theoretical context | Data and methodology | Results | Summary of findings
Chapter 3. Innovation and Productivity: Estimating the Core Model Across 18 Countries
Background | The innovation and productivity link in a simplified framework | Preliminary findings and messages | Conclusions and sensitivity analysis | Annex: The model specification: advantages and limitations | Annex: Characteristics of the sample of surveys underlying the econometric analysis
Chapter 4. Innovation and Productivity: Extending the Core Model
Background | Extended models used by selected countries | Conclusion and research agenda | Annex tables
Chapter 5. Innovation and Intellectual Property Rights
Background | The link between innovation and IPRs | A first look at countries’ and firms’ propensity to patent | Main findings from the regression analysis | Conclusions and research agenda | Annex: Economic modelling | Annex: Empirical strategy | Annex: Data and variable definitions | Annex: Full set of estimation results
Annex A. Methodology
The aim of Manifold Church of England Academy is to provide opportunities for children to develop as independent, confident, successful learners with high aspirations who know how to make a positive contribution to their community and the wider society. There is a high focus on developing children’s moral, spiritual, social and cultural understanding.
The school’s focus on curriculum development has been carefully designed to ensure coverage and progression with frequent opportunities to embed prior learning. It provides pupils with memorable experiences, in addition to diverse and rich opportunities from which children can learn and develop a range of transferable skills (Manifold's Life Skills). The children's own community is frequently used as a starting point for engaging interest. A primary focus of our curriculum is to raise aspirations.
We want our children to experience:
Implementation of our curriculum
Our curriculum is implemented with our intentions as the drivers behind our actions.
Our curriculum is centred on and driven by the children. Each term, the children study an area of learning which is driven by an overall question, encouraging the children to think and find out things for themselves. Examples include ‘What makes the Earth angry?’ and ‘Would a dinosaur make a good pet?’ At the start of the topic, we work with children to find out what they already know and what they would like to know more about. This ensures that we deliver a personalised curriculum and give the children ownership of their learning.
All of the National Curriculum subjects are covered through this approach and there is a strong focus on the basic skills of reading, writing, mathematics, oracy and ICT. As well as covering academic subjects, we also encourage the children to take responsibility for their own learning and we aim to provide them with the skills required to be a life-long 21st century learner. Here the children are shown how to be:
All teaching and learning is carefully planned to ensure that the needs of all learners are met. Our work is also enhanced by a range of exciting visits and visitors – in the past we have travelled to Ford Green Hall, Sudbury Museum of Childhood and Colwyn Bay! Children are always encouraged to reflect on their learning at the end of lessons, the end of a week and the end of a topic. This maximises the potential for children to recognise the learning journey they have been on and where they would like to take their learning next.
Our curriculum intent is implemented throughout all subjects and curriculum activities, ensuring a broad and balanced learning experience is provided for every child at Manifold Church of England Academy. Clear strategic planning allows the curriculum to be unique, adapting to our school and our children’s needs. Always having high expectations in all areas enables the best possible outcomes and learning journey. Across all subject areas we demonstrate a breadth of vocabulary and develop strong cross-curricular links. We don’t confuse coverage with progress when assessing: learning is measured through careful analysis of the application of skills across the curriculum, showing how the acquisition of knowledge is enhanced dramatically by expectations to evidence quality thinking and demonstrate individual understanding.
Our Curriculum Organisation
We teach the School Curriculum over a 2 year cycle which ensures that all pupils will cover all aspects of the National Curriculum. Further information can be found below.
What is the Difference between the National Curriculum and the Learning Challenge Curriculum?
The National Curriculum outlines the essentials that every child must learn. For example, it outlines that children must learn to read but does not determine which texts they should read; it outlines which scientific processes and concepts children need to know but not the context in which they must learn them; and it outlines the main geographic and historical principles and key events that need to be taught but does not determine how children should learn them. The National Curriculum is only one part of the much broader curriculum children should experience at school; it should not be all that children learn.
What is the Learning Challenge Curriculum?
It is a curriculum which starts with the learner. Teachers find out from pupils what it is they are interested in and want to find out about. They then weave the essential elements of learning into that framework. We believe all learning starts with a BIG question, and from that other questions, or subsidiary questions, arise at various points along the way. Each half term, pupils work with teachers to plan their own BIG question and subsidiary questions based on their Learning Challenge. This ensures pupils are motivated and well engaged because they are learning about things that interest and excite them. Learning Challenges are underpinned by a secure framework which weaves together the critical elements of teaching, learning and assessment to ensure pupils make good progress.
If you have any questions regarding the content of the curriculum for your child, please speak to the class teacher.
The term ‘music producer’ means different things to different people. Some are musicians, some are engineers, some are remixers.
So what does a music producer actually do?
In very pragmatic terms, the producer is a ‘project manager’ for the recording, mixing and mastering process.
She has an overall vision for the music, the sound and the goals of the project, and brings a unique perspective to inspire, assist and sometimes provoke the artists.
The producer should make the record more than the sum of its parts – you could almost say she is trying to create musical alchemy.
Every producer brings different skills and a different approach, and this can make what they do difficult to summarise. In this post I’ve identified seven distinct types of record producer to try and make this clearer.
1. The Engineer
This is probably most people’s stereotypical idea of the “classic” record producer – hunched over a mixing desk, obsessing about compression settings, reverb tails and drum sounds. The studio is an instrument, and the producer “plays” it like a virtuoso, working late into the night to create a mysterious sonic masterpiece.
In fact, though, this is often far from the norm, as we'll see.
Brian and Rob are researchers at Lucent Technologies' Bell Labs and can be contacted at [email protected] and [email protected], respectively.
The right programming language can make all the difference in how easy it is to write a program. This is why a programmer's arsenal holds not only general-purpose languages like C and its relatives, but also programmable shells, scripting languages, and lots of application-specific languages.
Regular expressions are one of the most broadly applicable specialized languages, a compact and expressive notation for describing patterns of text. Regular expressions are algorithmically interesting, easy to implement in their simpler forms, and very useful.
Regular expressions come in several flavors. The so-called "wildcards" used in command-line processors or shells to match patterns of file names are a particularly simple example. Typically, "*" is taken to mean "any string of characters," so a command like
del *.exe
uses a pattern "*.exe" that matches all files with names that contain any string followed by the literal string ".exe".
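The '*'-only wildcard case is simple enough to sketch in a few lines of C. The following is an illustrative sketch, not the implementation any particular shell uses; here '*' matches any (possibly empty) run of characters, everything else matches itself, and the pattern must cover the whole name:

```c
#include <assert.h>
#include <stdio.h>

/* wildmatch: does wildcard pattern pat (metacharacter '*' only)
 * match the entire string s? */
int wildmatch(const char *pat, const char *s)
{
    if (pat[0] == '\0')
        return s[0] == '\0';
    if (pat[0] == '*')          /* '*' matches empty, or eats one char of s */
        return wildmatch(pat + 1, s) ||
               (s[0] != '\0' && wildmatch(pat, s + 1));
    return s[0] != '\0' && pat[0] == s[0] && wildmatch(pat + 1, s + 1);
}

/* e.g., wildmatch("*.exe", "setup.exe") is 1,
 *       wildmatch("*.exe", "notes.txt") is 0 */
```

Note that in this wildcard language "." is an ordinary character, unlike in the regular expressions discussed next.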
Regular expressions pervade UNIX, in editors, tools like grep, and scripting languages like Awk, Perl, and Tcl. Although the variations among different programs may suggest that regular expressions are an ad hoc mechanism, they are, in fact, a language in a strong technical sense -- a formal grammar specifies their structure and a precise meaning can be attached to each utterance in the language. Furthermore, the right implementation can run very fast; a combination of theory and engineering practice pays off handsomely.
The Language of Regular Expressions
A regular expression is a sequence of characters that defines a pattern. Most characters in the pattern simply match themselves in a target string, so the regular expression "abc" matches that sequence of three letters wherever it occurs in the target. A few characters are used in patterns as metacharacters to indicate repetition, grouping, or positioning. In POSIX regular expressions, "^" stands for the beginning of a string and "$" for the end, so "^x" matches an "x" only at the beginning of a string, "x$" matches an "x" only at the end, "^x$" matches "x" only if it is the sole character of the string, and "^$" matches the empty string.
The character "." (a period) matches any character, so "x.y" matches "xay," "x2y," and so on, but not "xy" or "xyxy." The regular expression "^.$" matches a string that contains any single character.
A set of characters inside brackets ("[" and "]") matches any single one of the enclosed characters; for example, "[0123456789]" matches a single digit. This pattern may be abbreviated "[0-9]."
These building blocks are combined with parentheses for grouping, "|" for alternatives, "*" for zero or more occurrences, "+" for one or more occurrences, and "?" for zero or one occurrences. Finally, "\" is used as a prefix to quote a metacharacter and turn off its special meaning.
These can be combined into remarkably rich patterns. For example, "\.[0-9]+" matches a period followed by one or more digits; "[0-9]+\.?[0-9]*" matches one or more digits followed by an optional period and zero or more further digits; "(\+|-)" matches a plus or a minus ("\+" is a literal plus sign); and "[eE](\+|-)?[0-9]+" matches an "e" or "E" followed by an optional sign and one or more digits. These are combined in the following pattern that matches floating-point numbers:
(\+|-)?([0-9]+\.?[0-9]*|\.[0-9]+)([eE](\+|-)?[0-9]+)?
A Regular Expression Search Function
Some systems include a regular expression library, usually called "regex" or "regexp." However, if this is not available, it's easy to implement a modest subset of the full regular expression language. The regular expressions we present here make use of four metacharacters: "^," "$," ".," and "*," with "*" specifying zero or more occurrences of the preceding period or literal character. This provides a large fraction of the power of general regular expressions with a tiny fraction of the implementation complexity. We'll use these functions to implement a small but eminently useful version of grep (available electronically; see "Resource Center," page 5).
In Example 1, the function match determines whether a string matches a regular expression. If the regular expression begins with "^" the text must begin with a match of the remainder of the expression. Otherwise, we walk along the text, using matchhere to see if the text matches at each position in turn. As soon as we find a match, we're done. Expressions that contain "*" can match the empty string (for example, ".*y" matches "y" among many other things), so we must call matchhere even if the text is empty.
In Example 2, the recursive function matchhere does most of the work. If the regular expression is empty, we have reached the end and thus have found a match. If the regular expression ends with "$," it matches only if the text is also at the end. If the regular expression begins with a period, it matches any character. Otherwise, the expression begins with a plain character that matches itself in the text. A "^" or "$" that appears in the middle of a regular expression is thus taken as a literal character, not a metacharacter.
Notice that matchhere calls itself after matching one character of pattern and string. Thus the depth of recursion can be as much as the length of the pattern.
The one tricky case occurs when the expression begins with a starred character, "x*," for example. Then we call matchstar with three arguments -- the operand of the star (x), the pattern after the star, and the text; see Example 3. Again, a starred regular expression can match zero characters. The loop checks whether the text matches the remaining expression, trying at each position of the text as long as the first character matches the operand of the star.
Our implementation is admittedly unsophisticated, but it works. And, at fewer than 30 lines of code, it shows that regular expressions don't need advanced techniques to be put to use.
Grep
The pattern-matching program grep, invented by Ken Thompson (the father of UNIX), is a marvelous example of the value of notation. It applies a regular expression to each line of its input files and prints those lines that contain matching strings. This simple specification, plus the power of regular expressions, lets it solve many day-to-day tasks. In the following examples, note that the regular expression used as the argument to grep is different from the wildcard pattern used to specify file names.
- Search for a name in a set of source files: grep fprintf *.c
- Search for a phrase in a set of text files: grep 'regular expression' *.txt
- Filter output from some other program, for example to print all error messages: gcc *.c | grep Error:
- Filter input to some other program, for example to count non-blank lines: grep . *.cpp | wordcount
With flags to print line numbers of matched lines, count matches, do case-insensitive matching, select lines that don't match the pattern, and other variations of the basic idea, grep is so widely used that it has become the classic example of tool-based programming.
Example 4 is the main routine of an implementation of grep that uses match. It is conventional that UNIX programs return 0 for success and nonzero values for various failures. Our grep, like the UNIX version, defines success as finding a matching line, so it returns 0 if there were any matches, 1 if there were none, and 2 if an error occurred. These status values can be tested by other programs like a shell.
As Example 5 illustrates, the function grep scans a single file, calling match on each line. This is mostly straightforward, but there are a couple of subtleties. First, the main routine doesn't quit if it fails to open a file. This is because it's common to say something like
% grep herpolhode *.*
and find that one of the files in the directory can't be read. It's better for grep to keep going after reporting the problem, rather than to give up and force users to type the file list manually to avoid the problem file. Second, grep prints the matching line and the file name, but suppresses the file name if it is reading standard input or a single file. This may seem an odd design, but it reflects a style of use based on experience. When given only one input, grep's task is usually selection, and the file name would clutter the output. But if it is asked to search through many files, the task is most often to find all occurrences of something, and the file names are helpful. Compare
% strings enormous.dll | grep Error:
with
% grep grammer *.txt
Our implementation of match returns as soon as it finds a match. For grep, that is a fine default. But, for implementing a substitution (search-and-replace) operator in a text editor, the leftmost longest match is more useful. For example, given the text "aaaaa," the pattern "a*" matches the null string at the beginning of the text, but the user probably intended to match all five characters. To cause match to find the leftmost longest string, matchstar must be rewritten to be greedy: Rather than looking at each character of the text from left to right, it should skip over the longest string that matches the starred operand, then back up if the rest of the string doesn't match the rest of the pattern. In other words, it should run from right to left. Example 6 is a version of matchstar that does leftmost longest matching. This might be the wrong version of matchstar for grep, because it does extra work; but for a substitution operator, it is essential.
What Next?
Our grep is competitive with system-supplied versions, regardless of the regular expression. For example, it takes about six seconds to search a 40-MB text file on a 400-MHz Pentium (compiled with Visual C++). Pathological expressions can cause exponential behavior, such as "a*a*a*a*a*b" when given the input "aaaaaaaaac," but the exponential behavior exists in many commercial implementations, too. A more sophisticated matching algorithm can guarantee linear performance by avoiding backtracking when a partial match fails; the UNIX egrep program implements such an algorithm, as do scripting languages.
Full regular expressions would include character classes like "[a-zA-Z]" to match a single alphabetic character; the ability to quote a metacharacter (for example, to search for a literal period); parentheses like "(abc)*" for grouping; and alternatives, where "abc|def" matches "abc" or "def."
The first step is to help match by compiling the pattern into a representation that is easier to scan. It is expensive to parse a character class every time we compare it against a character; a precomputed representation based on bit vectors could make character classes much more efficient.
For regular expressions with parentheses and alternatives, the implementation must be more sophisticated. One approach is to compile the regular expression into a parse tree that captures its grammatical structure. This tree is then traversed to create a state machine -- a table of states, each of which gives the next state for each possible input character. The string is scanned by the state machine, which reports when it reaches a state corresponding to a match of the pattern. Another approach is similar to what is done in just-in-time compilers: The regular expression is compiled into instructions that will scan the string; the state machine is implicit in the generated instructions.
Further Reading
J.E.F. Friedl's Mastering Regular Expressions (O'Reilly & Associates, 1997) is an extensive treatment of the subject. Regular expressions are one of the most important features of some scripting languages; see The AWK Programming Language by A.V. Aho, B.W. Kernighan and P.J. Weinberger (Addison-Wesley, 1988) and Programming Perl, by Larry Wall, Tom Christiansen, and Randal L. Schwartz (O'Reilly & Associates, 1996).
Read and analyze a phylogenetic tree that records evolutionary relationships
In scientific terms, the evolutionary history and relationships of an organism or group of organisms is called its phylogeny. Phylogeny describes an organism's relationships, such as the species from which it is thought to have evolved, the species to which it is most closely related, and so forth. Phylogenetic relationships provide information on shared ancestry but not necessarily on how organisms are similar or different.
Learning Objectives
- Identify how and why scientists classify the organisms on Earth
- Differentiate between types of phylogenetic trees and what their structure tells us
- Identify some limitations of phylogenetic trees
- Relate the taxonomic classification system and binomial nomenclature
Figure 1. Only a few of the more than one million known species of insects are represented in this beetle collection. Beetles are a major subgroup of insects. They make up about 40 percent of all insect species and about 25 percent of all known species of organisms.
Why do biologists classify organisms? The major reason is to make sense of the enormous diversity of life on Earth. Scientists have identified millions of different species of organisms. Among animals, the most diverse group of organisms is the insects. More than one million different species of insects have already been described. An estimated nine million insect species have yet to be identified. A tiny fraction of insect species is shown in the beetle collection in Figure 1.
As diverse as insects are, there may be even more species of bacteria, another major group of organisms. Clearly, there is a need to organize the tremendous diversity of life. Classification allows scientists to organize and better understand the basic similarities and differences among organisms. This understanding is necessary to comprehend the present diversity and the past evolutionary history of life on Earth.
Phylogenetic Trees
Scientists use a tool called a phylogenetic tree to show the evolutionary pathways and connections among organisms. A phylogenetic tree is a diagram used to reflect evolutionary relationships among organisms or groups of organisms. Scientists consider phylogenetic trees to be a hypothesis of the evolutionary past, because one cannot go back to confirm the proposed relationships. In other words, a "tree of life" can be constructed to illustrate when different organisms evolved and to show the relationships among different organisms (Figure 2).
Each group of organisms went through its own evolutionary journey, called its phylogeny. Every organism shares relatedness with others, and based on morphologic and genetic evidence, scientists attempt to map the evolutionary pathways of all life on Earth. Many scientists build phylogenetic trees to show evolutionary relationships.
Structure of Phylogenetic Trees
A phylogenetic tree can be read like a map of evolutionary history. Many phylogenetic trees have a single lineage at the base representing a common ancestor. Scientists call such trees rooted, which means there is a single ancestral lineage (typically drawn from the bottom or left) to which all organisms represented in the diagram relate. Notice in the rooted phylogenetic tree that the three domains (Bacteria, Archaea, and Eukarya) diverge from a single point and branch off. The small branch that plants and animals (including humans) occupy in this diagram shows how recent and minuscule these groups are compared with other organisms. Unrooted trees don't show a common ancestor but do show relationships among species.
Figure 2. Both of these phylogenetic trees show the relationship of the three domains of life (Bacteria, Archaea, and Eukarya), but the (a) rooted tree attempts to identify when various species diverged from a common ancestor while the (b) unrooted tree does not. (credit a: modification of work by Eric Gaba)
In a rooted tree, the branching indicates evolutionary relationships (Figure 3). The point where a split occurs, called a branch point, represents where a single lineage evolved into a distinct new one. A lineage that evolved early from the root and remains unbranched is called a basal taxon. When two lineages stem from the same branch point, they are called sister taxa. A branch with more than two lineages is called a polytomy and serves to illustrate where scientists have not definitively determined all of the relationships. It is important to note that although sister taxa and polytomies do share an ancestor, this does not mean that the groups of organisms split or evolved from each other. Organisms in two taxa may have split apart at a specific branch point, but neither taxon gave rise to the other.
Figure 3. The root of a phylogenetic tree indicates that an ancestral lineage gave rise to all organisms on the tree. A branch point indicates where two lineages diverged. A lineage that evolved early and remains unbranched is a basal taxon. When two lineages stem from the same branch point, they are sister taxa. A branch with more than two lineages is a polytomy.
The diagrams above can serve as a pathway to understanding evolutionary history. The pathway can be traced from the origin of life to any individual species by navigating through the evolutionary branches between the two points. Also, by starting with a single species and tracing back toward the "trunk" of the tree, one can discover that species' ancestors, as well as where lineages share a common ancestry. In addition, the tree can be used to study entire groups of organisms.
Another point to mention about phylogenetic tree structure is that rotation at branch points does not change the information. For example, if a branch point were rotated and the taxon order changed, this would not alter the information, because the evolution of each taxon from the branch point was independent of the other.
Many disciplines within the study of biology contribute to understanding how past and present life evolved over time; together, these disciplines contribute to building, updating, and maintaining the "tree of life." Information is used to organize and classify organisms based on evolutionary relationships in a scientific field called systematics. Data may be collected from fossils, from studying the structure of body parts or molecules used by an organism, and by DNA analysis. By combining data from many sources, scientists can put together the phylogeny of an organism; since phylogenetic trees are hypotheses, they will continue to change as new types of life are discovered and new information is learned.
Limitations of Phylogenetic Trees
It may be easy to assume that more closely related organisms look more alike, and while this is often the case, it is not always true. If two closely related lineages evolved under significantly varied surroundings or after the evolution of a major new adaptation, it is possible for the two groups to appear more different than other groups that are not as closely related. For example, the phylogenetic tree in Figure 4 shows that lizards and rabbits both have amniotic eggs, whereas frogs do not; yet lizards and frogs appear more similar than lizards and rabbits.
Figure 4. This ladder-like phylogenetic tree of vertebrates is rooted by an organism that lacked a vertebral column. At each branch point, organisms with different characters are placed in different groups based on the characteristics they share.
Another aspect of phylogenetic trees is that, unless otherwise indicated, the branches do not account for length of time, only the evolutionary order. In other words, the length of a branch does not typically mean more time passed, nor does a short branch mean less time passed, unless specified on the diagram. For example, in Figure 4, the tree does not indicate how much time passed between the evolution of amniotic eggs and hair. What the tree does show is the order in which things took place. Again using Figure 4, the tree shows that the oldest trait is the vertebral column, followed by hinged jaws, and so forth. Remember that any phylogenetic tree is a part of the greater whole, and, like a real tree, it does not grow in only one direction after a new branch develops.
So, for the organisms in Figure 4, just because a vertebral column evolved does not mean that invertebrate evolution ceased; it only means that a new branch formed. Also, groups that are not closely related, but evolve under similar conditions, may appear more phenotypically similar to each other than to a close relative.
The Taxonomic Classification System
Taxonomy (which literally means "arrangement law") is the science of classifying organisms to construct internationally shared classification systems, with each organism placed into more and more inclusive groupings. Consider how a grocery store is organized. One large space is divided into departments, such as produce, dairy, and meats. Then each department further divides into aisles, each aisle into categories and brands, and then finally a single product. This organization from larger to smaller, more specific categories is called a hierarchical system.
The taxonomic classification system (also called the Linnaean system after its inventor, Carl Linnaeus, a Swedish botanist, zoologist, and physician) uses a hierarchical model. Moving from the point of origin, the groups become more specific, until one branch ends as a single species. For example, after the common beginning of all life, scientists divide organisms into three large categories called domains: Bacteria, Archaea, and Eukarya. Within each domain is a second category called a kingdom. After kingdoms, the subsequent categories of increasing specificity are: phylum, class, order, family, genus, and species (Figure 5).
Figure 5. The taxonomic classification system uses a hierarchical model to organize living organisms into increasingly specific categories. The common dog, Canis lupus familiaris, is a subspecies of Canis lupus, which also includes the wolf and dingo. (credit "dog": modification of work by Janneke Vreugdenhil)
The kingdom Animalia stems from the Eukarya domain. For the common dog, the classification levels would be as shown in Figure 5. Therefore, the full name of an organism technically has eight terms. For the dog, it is: Eukarya, Animalia, Chordata, Mammalia, Carnivora, Canidae, Canis, and lupus. Notice that each name is capitalized except for species, and the genus and species names are italicized. Scientists generally refer to an organism only by its genus and species, which is its two-word scientific name, in what is called binomial nomenclature. Therefore, the scientific name of the dog is Canis lupus. The name at each level is also called a taxon. In other words, dogs are in order Carnivora. Carnivora is the name of the taxon at the order level; Canidae is the taxon at the family level, and so forth. Organisms also have a common name that people typically use, in this case, dog. Note that the dog is additionally a subspecies: the "familiaris" in Canis lupus familiaris. Subspecies are members of the same species that are capable of mating and reproducing viable offspring, but they are considered separate subspecies due to geographic or behavioral isolation or other factors.
Figure 6 shows how the levels move toward specificity with other organisms. Notice how the dog shares a domain with the widest diversity of organisms, including plants and butterflies. At each sublevel, the organisms become more similar because they are more closely related. Historically, scientists classified organisms using characteristics, but as DNA technology developed, more precise phylogenies have been determined.
Figure 6. At each sublevel in the taxonomic classification system, organisms become more similar. Dogs and wolves are the same species because they can breed and produce viable offspring, but they are different enough to be classified as different subspecies. (credit "plant": modification of work by "berduchwal"/Flickr; credit "insect": modification of work by Jon Sullivan; credit "fish": modification of work by Christian Mehlführer; credit "rabbit": modification of work by Aidan Wojtas; credit "cat": modification of work by Jonathan Lidbeck; credit "fox": modification of work by Kevin Bacher, NPS; credit "jackal": modification of work by Thomas A. Hermann, NBII, USGS; credit "wolf": modification of work by Robert Dewar; credit "dog": modification of work by "digital_image_fan"/Flickr)
Cats and dogs are part of the same group at five levels: both are in the domain Eukarya, the kingdom Animalia, the phylum Chordata, the class Mammalia, and the order Carnivora.
Recent genetic analysis and other advancements have found that some earlier phylogenetic classifications do not align with the evolutionary past; therefore, changes and updates must be made as new discoveries occur. Recall that phylogenetic trees are hypotheses and are modified as data becomes available. In addition, classification historically has focused on grouping organisms mainly by shared characteristics and does not necessarily illustrate how the various groups relate to each other from an evolutionary perspective. For example, despite the fact that a hippopotamus resembles a pig more than a whale, the hippopotamus may be the closest living relative of the whale.
How to Form a Habitable Planet: More than 20% of nearby main sequence stars are surrounded by debris disks, where planetesimals, larger bodies similar to asteroids and comets in our own Solar System, are ground down through collisions. The resulting dusty material is directly linked to any planets in the system, providing an important probe of the processes of planet formation and subsequent dynamical evolution. The Atacama Large Millimeter/submillimeter Array (ALMA) has revolutionized our ability to study planet formation, allowing us to see planets forming in disks and sculpting the surrounding material in high resolution. I will present highlights from ongoing work using ALMA and other facilities that explores how planetary systems form and evolve by (1) connecting debris disk structure to sculpting planets and (2) understanding the impact of stellar flares on planetary habitability. Together these results provide an exciting foundation to investigate the evolution of planetary systems through multi-wavelength observations.
Tues March 30th, 11am, Zoom: Hanno Rein (U. of T) Recording
Chaos, Instability, and Machine Learning: We have known the equations which determine the trajectories of planets for over 300 years. Yet, the long term evolution of the Solar System was not well understood until just a few years ago. In this talk, I will explain why it is so hard to solve these differential equations and describe the recent algorithmic breakthroughs that have made such problems tractable. These new numerical tools allow us to address many exciting scientific questions. I will outline some of my current research projects which aim to improve our understanding of planet formation in our galactic neighbourhood, and put constraints on General Relativity on timescales of billions of years. I will also present how we construct a Bayesian neural network to accurately predict instabilities orders of magnitudes faster than was possible before. This model enables us to include stability constraints in data reduction pipelines for extrasolar planetary systems.
Tues April 6th, 11am, Zoom: Tuan Do (UCLA) Recording Unavailable
The Galactic Center: a laboratory for the study of the physics and astrophysics of supermassive black holes: The center of the Milky Way hosts the closest supermassive black hole and nuclear star cluster to the Earth, offering us the opportunity to study the physics of supermassive black holes and their environment at a level of detail not possible elsewhere. I will discuss 2 major questions that are at the forefront of Galactic center research: (1) What is the nature of the near-infrared emission from Sgr A*? and (2) How do nuclear star clusters form and evolve in the vicinity of a supermassive black hole? I will show how the long time-baseline of Galactic center observations, improved instrumental capabilities, and the use of statistical methods to combine many types of data have led us to new insights into these questions. I will discuss what we have learned in 20 years of observations of the supermassive black hole, Sgr A*, in the near-infrared and its surprising increase in activity in recent years. I will also discuss how the results of the first chemical-dynamical model of the Milky Way Nuclear Star Cluster allow us to disentangle its complex formation.
Tues April 13th, 11am, Zoom: Deep Anand (U. of Hawaii) Recording
Tues April 20th, 11am, Zoom: Auriane Egal (Western U.) Recording
Comet Halley's twin meteor showers: 1P/Halley is a famous comet that has aroused the interest of the general public and the scientific community for several centuries. Its most recent apparition in 1986 motivated an unprecedented observational effort, combining spacecraft rendezvous and ground-based telescopic programs led by different countries. Most of our knowledge about the comet's activity and evolution comes from the results of this exceptional observation campaign. From the analysis of ancient Chinese and Babylonian inscriptions, we suspect that 1P/Halley has been delivering meteoroids to Earth for several millennia. In particular, the comet is known to produce two meteor showers at the present epoch, the Eta-Aquariids in May and the Orionids in October. However, despite decades of meteor observations, most of the showers' characteristics are still unexplained. In this presentation, we present the results of a new numerical model of 1P/Halley's meteoroid streams, allowing us to reproduce the showers' formation, intensity, and duration, and to predict the apparition of future meteor outbursts to watch. In particular, we expect three Eta-Aquariid outbursts in the future that deserve special attention.
Tues April 27th, 11am, Zoom: Oliver Müller (U. of Strasbourg) Recording
A cosmic ballet of dwarf galaxies as challenge for dark matter cosmology: Dwarf galaxies are not only the most common galaxies but also the most dark-matter-dominated objects in the universe. By studying their abundance and distribution, we can test our current model of cosmology. Around the Milky Way and Andromeda – the Local Group – several discrepancies between observations and the predictions for these dwarf galaxies have been identified, constituting a small-scale crisis. The most severe of them is the plane-of-satellites problem: the dwarf galaxy satellites around the Milky Way and Andromeda are aligned in thin, planar, co-rotating structures. This is in stark contrast to the results of cosmological simulations, where for the satellite system an isotropic distribution with random motions is expected. This raises the question: Is the Local Group unique? Recent observations of the nearby Centaurus group say it is not. In my talk, I will give a review of the current state of this peculiar question in near-field cosmology.
Tues May 4th, 11am, Zoom: Judit Prat (DES/U. of Chicago) Recording
Galaxy-galaxy lensing and Lensing Ratios for Cosmological Analyses in the Dark Energy Survey: Galaxy cosmic surveys such as the Dark Energy Survey are a powerful tool to extract cosmological information. In particular, the combination of weak lensing and galaxy clustering measurements, usually known as 3x2pt, provides a potent and robust way to constrain the parameters controlling the structure formation in the late Universe. Galaxy-galaxy lensing, which is the cross-correlation of the shapes of source background galaxies with lens foreground galaxy positions, is one of the three probes that is part of this combination. In this talk, I will describe how we can accurately measure and model galaxy-galaxy lensing correlations using the well-understood large scales with the purpose of extracting cosmological information. Besides this, I will also describe how we can construct suitable ratios of these measurements to exploit the otherwise usually disregarded small-scale information and naturally integrate it as a part of the 3x2pt analysis.
Tues May 25th, 11am, Zoom: Carl Fields (MSU/Arizona/LANL) Recording
Next-Generation Simulations of The Remarkable Deaths of Massive Stars: Core-collapse supernova explosions (CCSN) are one possible fate of a massive star. Simulations of CCSNe rely on the properties of the massive star at core-collapse. As such, a critical component is the realization of realistic initial conditions. Multidimensional progenitor models can enable us to capture the chaotic nuclear shell burning occurring deep within the stellar interior. I will discuss ongoing efforts to progress our understanding of the nature of massive stars through next-generation hydrodynamic stellar models. In particular, I will present recent results of three-dimensional hydrodynamic massive star models evolved for the final 10 minutes before collapse. These recent results suggest that realistic 3D progenitor models can be favorable for obtaining robust models of CCSN explosions and are an important aspect of massive star explosions that must be taken into consideration. I will conclude with a brief discussion of the implications our models have for predictions of multi-messenger signals from CCSNe.
Tues June 1st, 11am, Zoom: Yamila Miguel (Leiden) Recording
Unveiling the secrets of Jupiter with the Juno mission: With more than 4000 exoplanets found and about two dozen planets with detected atmospheric chemical species, we have moved from an era of discovery to a new era of exoplanet characterisation. On the other hand, extremely accurate measurements by the Juno and Cassini missions make this an exceptional time to combine the detailed information on the solar system giant planets with the large amount of data from exoplanets, to get a better understanding of planetary physics and of planet formation and evolution. Because our knowledge of the interior structure of the giant planets is linked with the data we obtain from space missions, these last years have been crucial for this field: the outstanding accuracy of the gravity data provided by Juno has fundamentally changed our understanding of the interior of Jupiter. It has allowed us to put constraints on the zonal flows and the extent of differential rotation, and has led us to find that Jupiter most likely has a dilute core. In this presentation I will review our knowledge of the interior structure of Jupiter and will also show some new results where we find that a non-homogeneous envelope is also a constraint set by the Juno measurements, which is helping us to get closer to unveiling Jupiter's deep secrets and to reach a better understanding of the giant planets' formation history.
Tues June 8th, 11am, Zoom: Shany Danieli (IAS) Recording
Towards a better understanding of low mass galaxies beyond the Local Group: Low mass galaxies provide an essential testing ground for theoretical predictions of cosmology. Their number densities, structures, and internal dynamics can be extremely insightful for studying dark matter and galaxy formation on small scales. I will discuss recent results studying dwarf galaxies and ultra-diffuse galaxies (UDGs). UDGs hold the promise of new constraints on low-mass galaxy dynamics, as their spatial extent and often significant globular cluster populations provide probes on spatial scales where dark matter should dominate the kinematics. I will also discuss the dynamics of two UDGs that seem to lack most, if not all, of their dark matter and host an intriguing population of globular clusters. I will finish by presenting a new wide-field survey carried out with the 48-lens Dragonfly Telephoto Array. With an excellent photometric depth, the Dragonfly Wide Field Survey will provide an unprecedented view of the low surface brightness universe over a wide area of the sky (350 square degrees). The main goal of the survey is to provide information on the properties and statistics of the dwarf galaxy population beyond the Local Group, but it will also provide a useful resource for other resolved, low surface brightness phenomena, such as stellar streams and tidal tails, stellar halos, intragroup light, and the extent of massive galaxies.
Tues June 22nd, 11am, Zoom: Jane Huang (U. of Michigan)
The ALMA View of Planet Formation: The ubiquity and diversity of planets tell us that they can emerge under an astonishing range of conditions. By enabling us to map the distributions of dust grains and molecules in protoplanetary disks at an unprecedented level of detail, the Atacama Large Millimeter/Submillimeter Array (ALMA) has transformed our understanding of planet formation. In the Disk Substructures at High Angular Resolution Project (DSHARP), we undertook the first high angular resolution disk survey at millimeter wavelengths. Although protoplanets are difficult to detect directly, the widespread presence of dust gaps and rings in disks suggests that giant planet formation may occur readily on Myr-timescales at surprisingly wide separations. Meanwhile, in a small but growing number of systems, detections of puzzling spiral structures oblige us to re-examine common assumptions about the reservoir of material available for planet formation. ALMA has also revealed strong chemical heterogeneity within and among disks, laying the observational groundwork for linking the compositions of planets to their formation locations. Together, these new data show that the natal environments of planets are far more dynamic and varied than earlier observations had indicated.
The Indeed Editorial Team comprises a diverse and talented team of writers, researchers and subject matter experts equipped with Indeed's data and insights to deliver useful tips to help guide your career journey.
Although managers can choose to apply any organizational structure, it's important to consider the company's size, type of business, and employee preferences. One of the most effective organizational structures managers can adopt is the organic structure. Understanding the benefits and challenges of organic organizational structures can help you decide if it's suitable for a company. In this article, we answer "What is an organic organizational structure?", differentiate between organic and mechanistic structures, discuss its benefits, outline its challenges, review helpful tips, and highlight examples.
What is an organic organizational structure?
You may wonder, "What is an organic organizational structure?". It's a flexible workplace structure characterized by a flat reporting process. Flat or horizontal communication methods imply that employees interact with various colleagues, managers, and departments and share responsibilities in teams and groups to ensure they complete their tasks successfully. This organizational structure encourages managers to develop communication channels between all levels in the organization and more adaptable workspaces to create a positive and welcoming environment that values employees' opinions.
Related: Definition of an Organization Type (With a List of Benefits)
What is the difference between organic structure and mechanistic structure?
The mechanistic structure is the opposite of the organic structure. It's more traditional, and the communication in this working environment is specific and vertical. This implies that an entry-level employee in this organizational structure typically communicates with and reports to a direct manager. In addition, the chain of communication continues upward until it reaches the employee with the highest authority, typically the chief executive officer or president.
Mechanistic structures define the decision-making authority of managers at different levels and create fixed communication methods. In contrast, organic structures support an engaging and more open work community where employees make decisions together and share ideas. Each structure has different requirements and uses, but organic structures typically flourish in creative and smaller organizations.
Related: Mechanistic Structure: Benefits, Challenges, and Tips
Benefits of organic organizational structures
Here are some benefits a company can enjoy if it implements an organic organizational structure:
Creates opportunities for creativity
Implementing an organic organizational structure creates multiple opportunities for employees to innovate solutions and demonstrate creativity. This structure allows employees to collaborate at various levels, brainstorm new ideas for problems in the workplace, and hear the opinions and perspectives of employees in other departments. In addition, an organic organizational structure allows employees to work on a flexible schedule. Here, employees can develop new strategies to improve productivity rather than follow traditional methods.
Fosters open communication
Implementing an organic organizational structure allows for open communications with managers and other employees. Open communication helps employees perform effectively, collaborate, and improve productivity. In addition, creating an environment where employees feel comfortable asking for help can provide quicker access to relevant information rather than having to wait for information from managers before performing duties.
Builds better employee relationships
As an organic organizational structure encourages employees to collaborate and communicate across levels and departments, employees may build stronger relationships, improving cooperation and teamwork. In addition, a collaborative environment creates a rewarding work environment as people with different experiences and perspectives come together to solve problems. It also helps employees build an extensive network and helps them work together in the future.
Improves employee satisfaction
Organically structured organizations help improve employee satisfaction as employees feel valued and have more freedom to perform their responsibilities. In companies where employees feel valued and heard, they can perform their duties with confidence and trust in their abilities. This helps them enjoy their daily activities and their overall work environment. In addition, working in an organically structured environment eliminates the pressures associated with rigid work environments, such as attending regular meetings and understanding hierarchies.
Optimizes formal procedures
An organic organizational structure helps optimize the formal procedures in the company. Although certain procedures, like training protocols and filing human resource (HR) complaints, are essential to running a company, there may be some operational processes you can optimize to improve efficiency and productivity. Optimizing and changing the formal procedures in your workplace can encourage employees to work with more flexibility.
Challenges of an organic organizational structure
Although implementing an organic organizational structure in the workplace may have various benefits, there are some challenges associated with this structure. Common challenges you may experience when implementing this structure in the workplace include:
Lower productivity
Implementing an organic organizational structure in an environment where employees may find it difficult to agree with each other or work together may lower productivity. In addition, you may experience lower productivity levels if you implement this structure too quickly without providing employees with enough time and resources to understand how to use it effectively. It's important to have a clear goal you want employees to work toward before implementing this system.
Excess ideas
When working in an organically structured organization, you may have too many ideas during the decision-making process. As this structure allows all employees to state their perspectives and ideas, there may be conflicting ideas and viewpoints on the best approach to problems. You can solve this problem by guiding brainstorming sessions and meetings with clear objectives to ensure everybody focuses on the same goal. In addition, you can categorize similar ideas together and choose the most appealing idea or group of ideas.
Slower decision making
The decision-making process of organically structured organizations is typically slower than in other organizations. This is because the organic organizational structure emphasizes getting input from every employee and hearing different perspectives. Although this strategy may be beneficial when making decisions for the whole company, it may not be ideal for emergencies or solving pressing issues affecting the company's success. When implementing the organic organizational structure, you can make provisions for emergencies or scenarios where executives or upper management employees can make decisions in the company's best interests.
Related: Functional Organization Structure: Pros and Cons
Less regulated work
Less regulated work typically helps employees work creatively and reduce the stress of working according to strict quotas and deadlines. Despite its advantages, it may be challenging when employees fail to meet deadlines or produce high-quality work, affecting the company's profit margins or overall success. Consider establishing stable expectations for employees alongside their flexibility. For example, you can give more freedom to employees who have worked with the company for a particular number of years and have a record of performing well.
Related: What Is a Professional Organization? (With Tips)
Slower adaptation for new employees
The flexibility of organic organizations may make them unpredictable and challenging for new employees. New employees typically spend some time learning about their roles and the company. Learning how to perform their duties and their manager's expectations may be challenging while navigating an organic organizational structure. As a manager, you can make the transition easier for new employees by assigning mentors to them and providing training and onboarding activities to ensure they understand the company's structure.
Related: What Is Organizational Design and Structure and Why Is It Important?
Tips for managing an organic organizational structure
Some tips to consider when managing a company with an organic organizational structure include:
Consider the size of the company. It's important to consider the size of the organization before implementing an organic organizational structure. Consider the necessary resources, such as experienced managers and specific training, to help guide employees through the transition.
Create stable procedures when necessary. Certain stable formal procedures, such as HR procedures, are necessary to run an organization. It's useful to create processes and procedures for break times, filing relevant paperwork, and for other regulated business areas to ensure employees have the necessary resources and feel comfortable working effectively.
Provide employee feedback regularly. Employees are essential assets and necessary to the success of a business, so it's necessary to give them reviews and surveys regularly on their performance. Conducting regular employee reviews can help you create an organic structure that suits all employees, increases productivity, and improves job satisfaction.
Be clear about work expectations. Ensure you clearly communicate your expectations to employees and team members to allow them to produce quality output while working within these expectations. In addition, communicating expectations can help manage stress, as employees can set actionable goals and understand the criteria for judging their performance.
Related: Understanding a Matrix Organizational Structure
Organic organization structure examples
Organizations in different industries can use the organic organizational structure if employees can successfully perform their responsibilities within this structure. Businesses that typically perform well with this structure have employees with various skill sets who can perform different tasks when necessary. Examples of organizations using the organic organizational structure include:
Technology companies: Information technology companies that hire software engineers typically use the organic organizational structure to ensure they get perspectives and ideas from employees with different skills and backgrounds.
Law firms: In most law firms, lawyers typically collaborate and work together on various cases and help clients using individual skills and expertise, although senior lawyers may have more authority.
Small businesses: Owners and managers of small businesses in most industries can create an organic organizational structure, as they have a few essential employees to perform all business duties and make decisions.
A Study of Modern Trends in Traditional Farming Methods of Paddy Cultivation
Kumari, J.A.P.
URI:
http://repository.kln.ac.lk/handle/123456789/20268
Citation:
Kumari, J.A.P. (2015). A Study of Modern Trends in Traditional Farming Methods of Paddy Cultivation. Reviewing International Encounters 2015, Research Center for Social Sciences, University of Kelaniya, Sri Lanka.P.17
Date:
2015
Abstract:
Rice is the main food of Sri Lanka, and paddy cultivation is the country's major form of agriculture. Currently, around 807,763 hectares of land in Sri Lanka are cultivated with paddy; 64% is cultivated in the Maha season while 35% is cultivated in the Yala season. Modern agricultural methods in the paddy sector have been supported by new and improved high-yielding rice varieties, machinery, pesticides and weedicides, and inorganic fertilizer. It was revealed that modern paddy farming methods are prone to several problems: the crops are more susceptible to pest and disease attacks, and they suffer from micronutrient deficiencies that cause whitening, yellowing, and retardation of growth. This research intends to identify and analyze the new trends of traditional farming practices in paddy cultivation. Information was gathered through interviews with a total of 100 farmers in four paddy cultivation areas in Sri Lanka, supplemented by case studies and observation. Secondary data were collected from books, articles, relevant websites, and other relevant documents. The collected data were analyzed using descriptive and advanced research methods. Traditional farming methods such as rituals and spiritual practices (Pirith, Suthra and Gatha), Kem methods, organic fertilizers, and traditional rice varieties were identified as successful traditional agricultural practices for pest and disease control as well as for high yield, increased production, and profitability. The research also revealed that modern industrial farmers tend to use traditional farming methods for three reasons: to restrict the long-term side effects inherited from modern methods, as an eco-friendly agricultural practice, and for health and nutrition purposes.
NORTH PORT, Fla. -- It's been talked about for years, and now the wheels are finally in motion on a plan to widen Price Boulevard in North Port. It's a move to help the flow of traffic, but it could have a large impact on the many residents who live along the stretch.
North Port Mayor Jim Blucher says widening Price Boulevard from two to four lanes is a must.
The current two lane road gets congested during busy parts of the day, and city leaders say the widening is inevitable for the growing city.
"Traffic here in the morning, and in the evening coming back from school and back from work, is horrendous," Blucher says.
After U.S. 41 and I-75, it's really the city's only other east-west route. Commissioners and staff agreed at a recent workshop to really start looking at it.
"We are all in agreement we are going to put it on our capital expenditures five year plan," Blucher says. The officials also agreed on which section needs to be done first, the stretch running from Sumter Boulevard to Toledo Blade. The big problem with that is there are currently dozens of homes there.
"We are not happy with it," says resident Rick Backiel. "Nobody wants to live on a four lane highway."
Backiel says he and some of his neighbors have been hearing stories about potential plans for years. He's not sure how bad the impact will be.
"We've heard they were going to buy houses on one side of the street," he says. "Who knows? If they do widen it, are we going to lose half our driveway and some of our trees?"
Blucher says it's likely the plan would not be similar to the Sumter Boulevard plan, which is currently in the last phase of a widening that cost more than $30 million.
"We think we can do it in the 100 foot right away we currently have without taking any homes and without taking any property," he says.
Still, more traffic isn't very appealing to many here. A proposed Dollar General store along the road has brought out neighbors saying it doesn't fit in. However some predict that the largest city in Sarasota County will balloon from the current level of 60,000 residents to more than 200,000 by the year 2050.
"We've got to stay in front of it," Blucher says. "I will not let happen what has happened in the past, where we let our roads go." | http://www.mysuncoast.com/news/local/price-boulevard-widening-plan-in-north-port-s-future/article_53bef052-cf14-11e3-8cb6-0017a43b2370.html?mode=story |
NTUA–08/01\
hep-ph/0111125
[**P. Manousselis**]{}$^{a}$ and [**G. Zoupanos**]{}$^{b}$\
Physics Department, National Technical University,\
Zografou Campus, 157 80 Athens, Greece.\
[**Abstract**]{}\
We address the question of supersymmetry breaking of a higher dimensional supersymmetric theory due to coset space dimensional reduction. In particular we study a ten-dimensional supersymmetric $E_{8}$ gauge theory which is reduced over all six-dimensional coset spaces. We find that the original supersymmetry is completely broken in the process of dimensional reduction when the coset spaces are symmetric. On the contrary, softly broken four-dimensional supersymmetric theories result when the coset spaces are non-symmetric. From our analysis two promising cases emerge which lead to interesting GUTs with three fermion families in four dimensions, one being non-supersymmetric and the other softly broken supersymmetric.
$^{a}$e-mail address: [email protected]. Supported by $\Gamma\Gamma$ET grand 97E$\Lambda$/71.\
$^{b}$e-mail address: [email protected]. Partially supported by EU under the RTN contract HPRN-CT-2000-00148 and the A.v.Humboldt Foundation.
Introduction
============
The celebrated Standard Model (SM) of Elementary Particle Physics had so far outstanding successes in all its confrontations with experimental results. However the apparent success of the SM is spoiled by the presence of a plethora of free parameters mostly related to the ad-hoc introduction of the Higgs and Yukawa sectors in the theory. It is worth recalling that the Coset Space Dimensional Reduction (CSDR) [@Manton; @Review; @Kuby] was suggesting from the beginning that a unification of the gauge and Higgs sectors can be achieved in higher dimensions. The four-dimensional gauge and Higgs fields are simply the surviving components of the gauge fields of a pure gauge theory defined in higher dimensions. In the next step of development of the CSDR scheme, fermions were introduced [@Slansky] and then the four-dimensional Yukawa and gauge interactions of fermions found also a unified description in the gauge interactions of the higher dimensional theory. The last step in this unified description in high dimensions is to relate the gauge and fermion fields that have been introduced. A simple way to achieve that is to demand that the higher dimensional gauge theory is ${\cal N}= 1$ supersymmetric which requires that the gauge and fermion fields are members of the same supermultiplet. An additional strong argument towards higher dimensional supersymmetry including gravity comes from the stability of the corresponding compactifying solutions that lead to the four-dimensional theory.
In the spirit described above a very welcome additional input is that string theory suggests furthermore the dimension and the gauge group of the higher dimensional supersymmetric theory [@Theisen]. Further support for this unified description comes from the fact that the reduction of the theory over coset [@Review] and CY spaces [@Theisen] provides the four-dimensional theory with scalars belonging in the fundamental representation of the gauge group, as introduced in the SM. In addition, the fact that the SM is a chiral theory leads us to consider $D$-dimensional supersymmetric gauge theories with $D=4n+2$ [@Chapline; @Review], which include the ten dimensions suggested by the heterotic string theory [@Theisen].
Concerning supersymmetry, the nature of the four-dimensional theory depends on the corresponding nature of the compact space used to reduce the higher dimensional theory. Specifically, the reduction over CY spaces leads to supersymmetric theories [@Theisen] in four dimensions, the reduction over symmetric coset spaces leads to non-supersymmetric theories, while a reduction over non-symmetric ones leads to softly broken supersymmetric theories [@Pman]. Concerning the latter as candidate four-dimensional theories describing nature, in addition to the usual arguments related to the hierarchy problem [@Dim], we should recall further evidence established in their favor in recent years. It was found that the search for renormalization group invariant (RGI) relations among parameters of softly broken supersymmetric GUTs, considered as a unification scheme at the quantum level, can lead to successful low-energy predictions. More specifically, the search for RGI relations concerns the parameters of softly broken GUTs beyond the unification point and can even lead to all-loop finiteness [@Mondragon1; @Mondragon2], while at low energies it yields successful predictions not only for the gauge couplings but also for the top quark mass, among others, and interesting testable predictions for the Higgs mass [@Kobayashi].
The paper is organized as follows. In section 2 we present various details of the Coset Space Geometry with emphasis in the inclusion of torsion and more than one radii when possible. The CSDR scheme is also presented in sufficient detail to make the paper self-contained. In section 3 supersymmetry breaking via CSDR is examined. In 3.1 the issue of supersymmetry breaking in CSDR over symmetric coset spaces is analyzed with presentation of explicit examples of reduction of a supersymmetric ten-dimensional $E_{8}$ gauge theory over all symmetric six-dimensional coset spaces. Section 3.2 contains an explicit and detailed study of the supersymmetry breaking of the same supersymmetric ten-dimensional $E_{8}$ gauge theory, compactified over all non-symmetric coset spaces i.e. $G_{2}/SU(3)$, $Sp(4)/(SU(2) \times U(1))_{non-max.}$ and $SU(3)/U(1) \times U(1)$. In section 4 we present our conclusions. The appendix A contains the commutation relations on which the calculations of section 3 are based on, while the appendix B contains details related to the calculation of the gaugino mass in the reduction over the $SU(3)/U(1) \times U(1)$ coset space.
The Coset Space Dimensional Reduction.
======================================
Given a gauge theory defined in higher dimensions the obvious way to dimensionally reduce it is to demand that the field dependence on the extra coordinates is such that the Lagrangian is independent of them. A crude way to fulfill this requirement is to discard the field dependence on the extra coordinates, while an elegant one is to allow for a non-trivial dependence on them, but impose the condition that a symmetry transformation by an element of the isometry group $S$ of the space formed by the extra dimensions $B$ corresponds to a gauge transformation. Then the Lagrangian will be independent of the extra coordinates just because it is gauge invariant. This is the basis of the CSDR scheme [@Manton; @Review; @Kuby], which assumes that $B$ is a compact coset space, $S/R$.
In the CSDR scheme one starts with a Yang-Mills-Dirac Lagrangian, with gauge group $G$, defined on a $D$-dimensional spacetime $M^{D}$, with metric $g^{MN}$, which is compactified to $ M^{4}
\times S/R$ with $S/R$ a coset space. The metric is assumed to have the form $$g^{MN}=
\left[\begin{array}{cc}\eta^{\mu\nu}&0\\0&-g^{ab}\end{array}
\right],$$ where $\eta^{\mu\nu}= diag(1,-1,-1,-1)$ and $g^{ab}$ is the coset space metric. The requirement that transformations of the fields under the action of the symmetry group of $S/R$ are compensated by gauge transformations lead to certain constraints on the fields. The solution of these constraints provides us with the four-dimensional unconstrained fields as well as with the gauge invariance that remains in the theory after dimensional reduction. Therefore a potential unification of all low energy interactions, gauge, Yukawa and Higgs is achieved, which was the first motivation of this framework.
It is interesting to note that the fields obtained using the CSDR approach are the first terms in the expansion of the $D$-dimensional fields in harmonics of the internal space $B$. The effective field theories resulting from compactification of higher dimensional theories contain also towers of massive higher harmonics (Kaluza-Klein) excitations, whose contributions at the quantum level alter the behaviour of the running couplings from logarithmic to power [@Taylor]. As a result the traditional picture of unification of couplings may change drastically [@Dienes]. Higher dimensional theories have also been studied at the quantum level using the continuous Wilson renormalization group [@Kubo] which can be formulated in any number of space-time dimensions with results in agreement with the treatment involving massive Kaluza-Klein excitations.
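The change from logarithmic to power-law running mentioned above can be illustrated with a toy numerical sketch (ours, not a calculation from the cited works; all coefficients and scales below are hypothetical): below the compactification scale $1/R$ only the zero mode contributes, while above it the number of Kaluza-Klein modes lighter than $\mu$ grows like $(\mu R)^{\delta}$, so the coupling runs as a power of the scale.

```python
import numpy as np

# Toy sketch (ours, not from the cited references): one-loop running of an
# inverse gauge coupling once Kaluza-Klein (KK) towers open up above the
# compactification scale 1/R.  All coefficients below are hypothetical.
b0 = -3.0        # zero-mode one-loop coefficient (hypothetical)
b_kk = -6.0      # contribution per KK threshold (hypothetical)
delta = 2        # number of extra dimensions in this toy
inv_R = 1.0e3    # compactification scale, arbitrary units

def alpha_inv(mu, alpha_inv_0=30.0, mu0=1.0, n=20000):
    """Integrate d(alpha^-1)/d(ln mu) = b(mu)/(2 pi) from mu0 up to mu."""
    t = np.linspace(np.log(mu0), np.log(mu), n)
    scales = np.exp(t)
    # number of KK modes below mu grows like (mu R)^delta above 1/R
    n_kk = np.where(scales > inv_R, (scales / inv_R) ** delta, 0.0)
    b = b0 + b_kk * n_kk
    # trapezoidal integration in ln(mu)
    return alpha_inv_0 + np.sum(0.5 * (b[1:] + b[:-1]) * np.diff(t)) / (2 * np.pi)

# change of alpha^-1 per decade: logarithmic below 1/R, power-law above it
d_low = alpha_inv(1e2) - alpha_inv(1e1)
d_high = alpha_inv(1e5) - alpha_inv(1e4)
print(abs(d_low) < abs(d_high))  # power-law running dominates above 1/R
```

In this toy the per-decade change of $\alpha^{-1}$ is of order one below $1/R$ and grows by orders of magnitude above it, which is the qualitative behaviour referred to in the text.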
Before we proceed with the description of the CSDR scheme we need to recall some facts about coset space geometry needed for subsequent discussions. Complete reviews can be found in [@Review; @Castellani].
Coset Space Geometry.
---------------------
Assuming a $D$-dimensional spacetime $M^{D}$ with metric $g^{MN}$ given in eq.(1) it is instructive to explore further the geometry of all coset spaces $S/R$.
We can divide the generators of $S$, $ Q_{A}$ in two sets : the generators of $R$, $Q_{i}$ $(i=1, \ldots,dimR)$, and the generators of $S/R$, $ Q_{a}$( $a=dimR+1 \ldots,dimS)$, and $dimS/R=dimS-dimR =d$. Then the commutation relations for the generators of $S$ are the following: $$\begin{aligned}
\left[ Q_{i},Q_{j} \right] &=& f_{ij}^{\ \ k} Q_{k},\nonumber \\
\left[ Q_{i},Q_{a} \right]&=& f_{ia}^{\ \ b}Q_{b},\nonumber\\
\left[ Q_{a},Q_{b} \right]&=& f_{ab}^{\ \ i}Q_{i}+f_{ab}^{\ \
c}Q_{c} .\end{aligned}$$ So $S/R$ is assumed to be a reductive but in general non-symmetric coset space. When $S/R$ is symmetric, the $f_{ab}^{\ \ c}$ in eq.(2) vanish. Let us call the coordinates of $M^{4} \times S/R$ space $z^{M}= (x^{m},y^{\alpha})$, where $\alpha$ is a curved index of the coset, $a$ is a tangent space index and $y$ defines an element of $S$ which is a coset representative, $L(y)$. The vielbein and the $R$-connection are defined through the Maurer-Cartan form which takes values in the Lie algebra of $S$ : $$L^{-1}(y)dL(y) = e^{A}_{\alpha}Q_{A}dy^{\alpha} .$$ Using eq.(3) we can compute that at the origin $y = 0$, $
e^{a}_{\alpha} = \delta^{a}_{\alpha}$ and $e^{i}_{\alpha} = 0$. A connection on $S/R$ which is described by a connection-form $\theta^{a}_{\ b}$, has in general torsion and curvature. In the general case where torsion may be non-zero, we calculate first the torsionless part $\omega^{a}_{\ b}$ by setting the torsion form $T^{a}$ equal to zero, $$T^{a} = de^{a} + \omega^{a}_{\ b} \wedge e^{b} = 0,$$ while using the Maurer-Cartan equation, $$de^{a} = \frac{1}{2}f^{a}_{\ bc}e^{b}\wedge e^{c} +f^{a}_{\
bi}e^{b}\wedge e^{i},$$ we see that the condition of having vanishing torsion is solved by $$\omega^{a}_{\ b}= -f^{a}_{\ ib}e^{i}-\frac{1}{2}f^{a}_{\ bc}e^{c}-
\frac{1}{2}K^{a}_{\ bc}e^{c},$$ where $K^{a}_{\ bc}$ is symmetric in the indices $b,c$, therefore $K^{a}_{\ bc}e^{c} \wedge e^{b}=0$. The $K^{a}_{\ bc}$ can be found from the antisymmetry of $\omega^{a}_{\ b}$, $\omega^{a}_{\
b}g^{cb}=-\omega^{b}_{\ c}g^{ca}$, leading to $$K_{\ \ bc}^{a}=g^{ad}(g_{be}f_{dc}^{\ \ e}+g_{ce}f_{db}^{\ \ e}).$$ In turn $\omega^{a}_{\ b}$ becomes $$\omega^{a}_{\ b}= -f^{a}_{\ ib}e^{i}-D_{\ \ bc}^{a}e^{c},$$ where $$D_{\ \ bc}^{a}=\frac{1}{2}g^{ad}[f_{db}^{\ \
e}g_{ec}+f_{cb}^{\ \ e} g_{de}- f_{cd}^{\ \ e}g_{be}].$$ The $D$’s can be related to $f$’s by a rescaling [@Review]: $$D^{a}_{\
bc}=(\lambda^{a}\lambda^{b}/\lambda^{c})f^{a}_{\ bc},$$ where the $\lambda$’s depend on the coset radii. Note that in general the rescalings change the antisymmetry properties of $f$’s, while in the case of equal radii $D^{a}_{\ bc}=\frac{1}{2}f^{a}_{\ bc}$. Note also that the connection-form $\omega^{a}_{\ b}$ is $S$-invariant. This means that parallel transport commutes with the $S$ action [@Castellani]. Then the most general form of an $S$-invariant connection on $S/R$ would be $$\omega^{a}_{\ b} = f_{\ \ ib}^{a}e^{i}+J_{\ \ cb}^{a}e^{c},$$ with $J$ an $R$-invariant tensor, i.e. $$\delta J_{cb}^{\ \ a}
=-f_{ic}^{\ \ d}J_{db}^{\ \ a}+ f_{id}^{\ \ a}J_{cb}^{\ \
d}-f_{ib}^{\ \ d}J_{cd}^{\ \ a}=0.$$ This condition is satisfied by the $D$’s as can be proven using the Jacobi identity.
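As a concrete illustration of the group-theoretic setup of this subsection (our check, not part of the original analysis; the conventions are the standard Pauli-matrix ones), one can extract the structure constants of $S=SU(2)$ numerically and verify that the simplest coset, $S^{2}=SU(2)/U(1)$ with $R$ generated by $Q_{3}$, is symmetric: all purely-coset structure constants $f_{ab}^{\ \ c}$ of eq.(2) vanish.

```python
import numpy as np

# Illustration (ours, not part of the original analysis): structure constants
# of S = SU(2) and the symmetric-coset property of S^2 = SU(2)/U(1).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
# anti-Hermitian generators Q_A = -i sigma_A/2 obey [Q_A, Q_B] = eps_ABC Q_C
Q = [-0.5j * s for s in (s1, s2, s3)]

def comm(a, b):
    return a @ b - b @ a

# f_AB^C from [Q_A, Q_B] = f_AB^C Q_C, using tr(Q_A Q_B) = -delta_AB / 2
f = np.real(np.array([[[-2 * np.trace(comm(Q[A], Q[B]) @ Q[C])
                        for C in range(3)]
                       for B in range(3)]
                      for A in range(3)]))

# Split S: R is generated by Q_3 (index 2), the coset by Q_1, Q_2 (indices 0, 1)
coset = (0, 1)
purely_coset = [f[a, b, c] for a in coset for b in coset for c in coset]
print(np.allclose(purely_coset, 0))  # f_ab^c = 0: the coset is symmetric
print(np.isclose(f[0, 1, 2], 1.0))   # [Q_1, Q_2] = Q_3 closes into R
```

Since the $f_{ab}^{\ \ c}$ vanish here, the $D$'s vanish as well and the torsionless connection reduces to the canonical $R$-connection, in line with the symmetric-coset remarks above.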
In the case of non-vanishing torsion we have $$T^{a} = de^{a} + \theta^{a}_{\ b} \wedge e^{b},$$ where $$\theta^{a}_{\ b}=\omega^{a}_{\ b}+\tau^{a}_{\ b},$$ with $$\tau^{a}_{\ b} = - \frac{1}{2} \Sigma^{a}_{\ bc}e^{c},$$ while the contorsion $ \Sigma^{a}_{\ \ bc} $ is given by $$\Sigma^{a}_{\ \ bc} = T^{a}_{\ \ bc}+T_{bc}^{\ \ a}-T_{cb}^{\ \ a}$$ in terms of the torsion components $ T^{a}_{\ \ bc} $. Therefore in general the connection-form $ \theta^{a}_{\ b}$ is $$\theta^{a}_{\ b} = -f^{a}_{\ ic}e^{i} -(D^{a}_{\
bc}+\frac{1}{2}\Sigma^{a}_{\ bc})e^{c}= -f^{a}_{\
ic}e^{i}-G^{a}_{\ bc}e^{c}.$$ The natural choice of torsion which would generalize the case of equal radii [@DLust; @Gavrilik; @Batakis], $T^{a}_{\ bc}=\eta
f^{a}_{\ bc}$ would be $T^{a}_{\ bc}=2\tau D^{a}_{\ bc}$ except that the $D$’s do not have the required symmetry properties. Therefore we must define $\Sigma$ as a combination of $D$’s which makes $\Sigma$ completely antisymmetric and $S$-invariant according to the definition given above. Thus we are led to the definition $$\Sigma_{abc} \equiv 2\tau(D_{abc}+D_{bca}-D_{cba}).$$ In this general case the Riemann curvature two-form is given by [@Review], [@Batakis]: $$R^{a}_{\ b}=[-\frac{1}{2}f_{ib}^{\ a}f_{de}^{\ i}-
\frac{1}{2}G_{cb}^{\ a}f_{de}^{\ c}+ \frac{1}{2}(G_{dc}^{\
a}G_{eb}^{\ c}-G_{ec}^{\ a}G_{db}^{\ c})]e^{d} \wedge e^{e},$$ whereas the Ricci tensor $R_{ab}=R^{d}_{\ adb}$ is $$R_{ab}=G_{ba}^{\ c}G_{dc}^{\ d}-G_{bc }^{\ d}G_{da }^{\ c}-G_{ca
}^{\ d}f_{db }^{\ c}-f_{ia }^{\ d}f_{db }^{\ i}.$$ By choosing the parameter $\tau$ to be equal to zero we can obtain the [ *Riemannian connection*]{} $\theta_{R \ \ b}^{\ a}$. We can also define the [ *canonical connection*]{} by adjusting the radii and $\tau$ so that the connection form is $\theta_{C \ \ b}^{\ a}
= -f^{a}_{\ bi}e^{i}$, i.e. an $R$-gauge field [@DLust]. The adjustments should be such that $G_{abc}=0$. In the case of $G_{2}/SU(3)$ where the metric is $g_{ab}=a\delta_{ab}$, we have $
G_{abc}=\frac{1}{2}a(1+3\tau)f_{abc}$ and in turn $\tau=
-\frac{1}{3}$. In the case of $Sp(4)/(SU(2) \times
U(1))_{non-max.}$, where the metric is $g_{ab}=diag(a,a,b,b,a,a)$, we have to set $a=b$ and then $\tau=- \frac{1}{3}$ to obtain the canonical connection. Similarly in the case of $SU(3)/(U(1) \times
U(1))$, where the metric is $g_{ab}=diag(a,a,b,b,c,c)$, we should set $a=b=c$ and take $\tau=- \frac{1}{3}$. By analogous adjustments we can set the Ricci tensor equal to zero [@DLust], thus defining a [*Ricci flattening connection*]{}.
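The value $\tau=-\frac{1}{3}$ quoted above can be checked directly from the definitions already given: for $G_{2}/SU(3)$ with $g_{ab}=a\delta_{ab}$ the definition of the $D$'s gives $D_{abc}=\frac{1}{2}af_{abc}$, and since $f_{abc}$ is then totally antisymmetric, $$\Sigma_{abc}=2\tau(D_{abc}+D_{bca}-D_{cba})=\tau a(f_{abc}+f_{bca}-f_{cba})=3\tau a f_{abc},$$ so that $$G_{abc}=D_{abc}+\frac{1}{2}\Sigma_{abc}=\frac{1}{2}a(1+3\tau)f_{abc},$$ which vanishes, yielding the canonical connection, exactly at $\tau=-\frac{1}{3}$.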
Reduction of a $D$-dimensional Yang-Mills-Dirac Lagrangian.
-----------------------------------------------------------
The group $S$ acts as a symmetry group on the extra coordinates. The CSDR scheme demands that an $S$-transformation of the extra $d$ coordinates is a gauge transformation of the fields that are defined on $M^{4}\times S/R$, thus a gauge invariant Lagrangian written on this space is independent of the extra coordinates.
To see this in detail we consider a $D$-dimensional Yang-Mills-Dirac theory with gauge group $G$ defined on a manifold $M^{D}$ which as stated will be compactified to $M^{4}\times S/R$, $D=4+d$, $d=dimS-dimR$: $$A=\int d^{4}xd^{d}y\sqrt{-g}\Bigl[-\frac{1}{4}
Tr\left(F_{MN}F_{K\Lambda}\right)g^{MK}g^{N\Lambda}
+\frac{i}{2}\overline{\psi}\Gamma^{M}D_{M}\psi\Bigr] ,$$ where $$D_{M}= \partial_{M}-\theta_{M}-A_{M},$$ with $$\theta_{M}=\frac{1}{2}\theta_{MN\Lambda}\Sigma^{N\Lambda}$$ the spin connection of $M^{D}$, and $$F_{MN}
=\partial_{M}A_{N}-\partial_{N}A_{M}-\left[A_{M},A_{N}\right],$$ where $M$, $N$ run over the $D$-dimensional space. The fields $A_{M}$ and $\psi$ are, as explained, symmetric in the sense that any transformation under symmetries of $S/R$ is compensated by gauge transformations. The fermion fields can be in any representation $F$ of $G$ unless a further symmetry such as supersymmetry is required. So let $\xi_{A}^{\alpha}$, $A
=1,\ldots,dimS$, be the Killing vectors which generate the symmetries of $S/R$ and $W_{A}$ the compensating gauge transformation associated with $\xi_{A}$. Define next the infinitesimal coordinate transformation as $\delta_{A} \equiv
L_{\xi_{A}}$, the Lie derivative with respect to $\xi_{A}$; then we have for the scalar, vector and spinor fields, $$\begin{aligned}
\delta_{A}\phi&=&\xi_{A}^{\alpha}\partial_{\alpha}\phi=D(W_{A})\phi,
\nonumber \\
\delta_{A}A_{\alpha}&=&\xi_{A}^{\beta}\partial_{\beta}A_{\alpha}+\partial_{\alpha}
\xi_{A}^{\beta}A_{\beta}=\partial_{\alpha}W_{A}-[W_{A},A_{\alpha}],
\\
\delta_{A}\psi&=&\xi_{A}^{\alpha}\partial_{\alpha}\psi-\frac{1}{2}G_{Abc}\Sigma^{bc}\psi=
D(W_{A})\psi. \nonumber\end{aligned}$$ The $W_{A}$ depend only on the internal coordinates $y$ and $D(W_{A})$ represents a gauge transformation in the appropriate representation of the fields. $G_{Abc}$ represents a tangent space rotation of the spinor fields. The variations $\delta_{A}$ satisfy $[\delta_{A},\delta_{B}]=f_{AB}^{\ \ C}\delta_{C}$ and lead to the following consistency relation for the $W_{A}$, $$\xi_{A}^{\alpha}\partial_{\alpha}W_{B}-\xi_{B}^{\alpha}\partial_{\alpha}
W_{A}-\left[W_{A},W_{B}\right]=f_{AB}^{\ \ C}W_{C}.$$ Furthermore the W’s themselves transform under a gauge transformation [@Review] as, $$\widetilde{W}_{A} = gW_{A}g^{-1}+(\delta_{A}g)g^{-1}.$$ Using eq.(23) and the fact that the Lagrangian is independent of $y$ we can do all calculations at $y=0$ and choose a gauge where $W_{a}=0$.
The detailed analysis of the constraints (21) given in refs.[@Manton; @Review] provides us with the four-dimensional unconstrained fields as well as with the gauge invariance that remains in the theory after dimensional reduction. Here we give the results. The components $A_{\mu}(x,y)$ of the initial gauge field $A_{M}(x,y)$ become, after dimensional reduction, the four-dimensional gauge fields and furthermore they are independent of $y$. In addition one can find that they have to commute with the elements of the $R_{G}$ subgroup of $G$. Thus the four-dimensional gauge group $H$ is the centralizer of $R$ in $G$, $H=C_{G}(R_{G})$. Similarly, the $A_{\alpha}(x,y)$ components of $A_{M}(x,y)$, denoted by $\phi_{\alpha}(x,y)$ from now on, become scalars in four dimensions. These fields transform under $R$ as a vector $v$, i.e. $$\begin{aligned}
S &\supset& R \nonumber \\
adjS &=& adjR+v.\end{aligned}$$ Moreover $\phi_{\alpha}(x,y)$ act as an intertwining operator connecting induced representations of $R$ acting on $G$ and $S/R$. This implies, exploiting Schur’s lemma, that the transformation properties of the fields $\phi_{\alpha}(x,y)$ under $H$ can be found if we express the adjoint representation of $G$ in terms of $R_{G} \times H$ : $$\begin{aligned}
G &\supset& R_{G} \times H \nonumber \\
adjG &=&(adjR,1)+(1,adjH)+\sum(r_{i},h_{i}).\end{aligned}$$ Then if $v=\sum s_{i}$, where each $s_{i}$ is an irreducible representation of $R$, there survives an $h_{i}$ multiplet for every pair $(r_{i},s_{i})$, where $r_{i}$ and $s_{i}$ are identical irreducible representations of $R$.
Turning next to the fermion fields [@Review; @Slansky; @Chapline; @Palla], similarly to the scalars they act as intertwining operators between induced representations acting on $G$ and on the tangent space of $S/R$, $SO(d)$. Proceeding along similar lines as in the case of scalars, to obtain the representation of $H$ under which the four-dimensional fermions transform we have to decompose the representation $F$ of the initial gauge group, in which the fermions are assigned, under $R_{G} \times H$, i.e. $$F= \sum (t_{i},h_{i}),$$ and the spinor of $SO(d)$ under $R$, $$\sigma_{d} = \sum \sigma_{j}.$$ Then for each pair $t_{i}$ and $\sigma_{i}$, where $t_{i}$ and $\sigma_{i}$ are identical irreducible representations, there is an $h_{i}$ multiplet of spinor fields in the four-dimensional theory. However, in order to obtain chiral fermions in the effective theory we have to impose further requirements. We first impose the Weyl condition in $D$ dimensions. In $D = 4n+2$ dimensions, which is the case at hand, the decomposition of the left-handed spinor, say, under $SU(2) \times SU(2) \times SO(d)$ is $$\sigma_{D} = (2,1,\sigma_{d}) + (1,2,\overline{\sigma}_{d}).$$ So we have in this case the decompositions $$\sigma_{d} = \sum \sigma_{k},~\overline{\sigma}_{d}= \sum
\overline{\sigma}_{k}.$$ Let us start from a vector-like representation $F$ for the fermions. In this case each term $(t_{i},h_{i})$ in eq.(26) will be either self-conjugate or it will have a partner $(
\overline{t}_{i},\overline{h}_{i} )$. According to the rule described in eqs.(26), (27) and considering $\sigma_{d}$ we will have in four dimensions left-handed fermions transforming as $
f_{L} = \sum h^{L}_{k}$. It is important to notice that since $\sigma_{d}$ is non self-conjugate, $f_{L}$ is non self-conjugate too. Similarly from $\overline{\sigma}_{d}$ we will obtain the right-handed representation $ f_{R}= \sum \overline{h}^{R}_{k}$, but as we have assumed that $F$ is vector-like, $\overline{h}^{R}_{k}\sim h^{L}_{k}$. Therefore there will appear two sets of Weyl fermions with the same quantum numbers under $H$. This is already a chiral theory, but still one can go further and try to impose the Majorana condition in order to eliminate the doubling of the fermionic spectrum. We should remark now that if we had started with $F$ complex, we would again have a chiral theory since in this case $\overline{h}^{R}_{k}$ is different from $h^{L}_{k}$ $(\sigma_{d}$ non self-conjugate). Nevertheless starting with $F$ vector-like is much more appealing and will be used in the following along with the Majorana condition. The Majorana condition can be imposed in $D = 2,3,4+8n$ dimensions and is given by $\psi = C\overline\psi^{T}$, where $C$ is the $D$-dimensional charge conjugation matrix. Majorana and Weyl conditions are compatible in $D=4n+2$ dimensions. Then in our case if we start with Weyl-Majorana spinors in $D=4n+2$ dimensions we force $f_{R}$ to be the charge conjugate of $f_{L}$, thus arriving at a theory with fermions only in $f_{L}$. Furthermore if $F$ is to be real, then we have to have $D=2+8n$, while for $F$ pseudoreal $D=6+8n$.
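The survival rules just described are purely combinatorial, and can be illustrated with a minimal sketch in plain Python. The representation labels are written as strings, the helper `surviving` is our own illustrative construction (not standard CSDR notation), and the input data are those of the reduction of $E_{8}$ over $SO(7)/SO(6)$ treated below:

```python
# CSDR survival rule: an h_i multiplet survives for every pair of
# identical R-irreps, one from the G-side decomposition and one from
# the R-content of the coset (v for scalars, sigma_d for fermions).

def surviving(G_side, R_content):
    """Keep the H-irreps whose R-partner appears in R_content."""
    return [h for r, h in G_side if r in R_content]

# adj E8 under SO(6) x SO(10): (15,1)+(1,45)+(6,10)+(4,16)+(4bar,16bar).
# For the fermions F = 248 as well, so the same list serves both rules.
adjG = [("15", "1"), ("1", "45"), ("6", "10"), ("4", "16"), ("4bar", "16bar")]

v = {"6"}          # R = SO(6) content of the coset vector
sigma_d = {"4"}    # R = SO(6) content of the coset spinor

print(surviving(adjG, v))        # scalars: a 10-plet of SO(10)
print(surviving(adjG, sigma_d))  # fermions: a 16 of SO(10)
```

Running the sketch reproduces the $10$ of scalars and the $16$ of fermions found in the $S^{6}$ example of the next section.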
Starting with an anomaly free theory in higher dimensions, the condition that has to be fulfilled in order to obtain anomaly free theories in four dimensions after dimensional reduction was given in ref.[@Witten]. The condition restricts the allowed embeddings of $R$ into $G$ [@Pilch; @Review]. For $G=E_{8}$ in ten dimensions the condition takes the form $$l(G) = 60,$$ where $l(G)$ is the sum over all indices of the $R$ representations appearing in the decomposition of the $248$ representation of $E_{8}$ under $E_{8} \supset R \times H$. The normalization is such that the vector representation in eq.(24), which defines the embedding of $R$ into $SO(6)$, has index two.
The Four-Dimensional Theory.
----------------------------
Next let us obtain the four-dimensional effective action. Assuming that the metric is block diagonal, taking into account all the constraints and integrating out the extra coordinates we obtain in four dimensions the following Lagrangian : $$A=C \int d^{4}x \biggl( -\frac{1}{4} F^{t}_{\mu
\nu}{F^{t}}^{\mu\nu}+\frac{1}{2}(D_{\mu}\phi_{\alpha})^{t}
(D^{\mu}\phi^{\alpha})^{t}
+V(\phi)+\frac{i}{2}\overline{\psi}\Gamma^{\mu}D_{\mu}\psi-\frac{i}{2}
\overline{\psi}\Gamma^{a}D_{a}\psi\biggr),$$ where $D_{\mu} = \partial_{\mu} - A_{\mu}$ and $D_{a}=
\partial_{a}- \theta_{a}-\phi_{a}$ with $\theta_{a}=
\frac{1}{2}\theta_{abc}\Sigma^{bc}$ the connection of the coset space, while $C$ is the volume of the coset space. The potential $V(\phi)$ is given by: $$V(\phi) = - \frac{1}{4} g^{ac}g^{bd}Tr( f_{ab}^{C}\phi_{C} -
[\phi_{a},\phi_{b}] ) (f_{cd}^{D}\phi_{D} - [\phi_{c},\phi_{d}] )
,$$ where $C,D=1,\ldots,dimS$ and the $f$'s are the structure constants appearing in the commutators of the generators of the Lie algebra of $S$. The expression (32) for $V(\phi)$ is only formal because the $\phi_{a}$ must satisfy the constraints coming from eq.(21), $$f_{ai}^{D}\phi_{D} - [\phi_{a},\phi_{i}] = 0,$$ where the $\phi_{i}$ generate $R_{G}$. These constraints imply that some of the $\phi_{a}$ are zero, some are constants and the rest can be identified with the genuine Higgs fields. When $V(\phi)$ is expressed in terms of the unconstrained independent Higgs fields, it remains a quartic polynomial which is invariant under gauge transformations of the final gauge group $H$, and its minimum determines the vacuum expectation values of the Higgs fields [@Vinet; @Harnad; @Farakos]. The minimization of the potential is in general a difficult problem. If however $S$ has an isomorphic image $S_{G}$ in $G$ which contains $R_{G}$ in a consistent way, then it is possible to allow the $\phi_{a}$ to become generators of $S_{G}$. That is $\overline{\phi}_{a} =
<\phi^{i}>Q_{ai} = Q_{a}$ with $<\phi^{i}>Q_{ai}$ suitable combinations of $G$ generators, $Q_{a}$ a generator of $S_{G}$ and $a$ is also a coset-space index. Then $$\begin{aligned}
\overline{F}_{ab}&=&f_{ab}^{\ \ i}Q_{i}+f_{ab}^{\ \
c}\overline{\phi}_{c}-[\overline{\phi}_{a},\overline{\phi}_{b}]\\
&=& f_{ab}^{\ \ i}Q_{i}+ f_{ab}^{\ \ c}Q_{c}- [Q_{a},Q_{b}] = 0\end{aligned}$$ because of the commutation relations of $S$. Thus we have proven that $V(\phi=\overline{\phi})=0$, which furthermore is the minimum, because $V$ is positive definite. Furthermore, the four-dimensional gauge group $H$ is broken further by these non-zero vacuum expectation values of the Higgs fields to the centralizer $K$ of the image of $S$ in $G$, i.e. $K=C_{G}(S)$ [@Review; @Vinet; @Harnad; @Farakos]. This can be seen if we examine a gauge transformation of $\phi_{a}$ by an element $h$ of $H$, $$\phi_{a} \rightarrow h\phi_{a}h^{-1}, \quad h \in
H.$$ We note that the v.e.v. of the Higgs fields is gauge invariant for the set of $h$'s that commute with $S$. That is, $h$ belongs to a subgroup $K$ of $H$ which is the centralizer of $S_{G}$ in $G$.
More generally it can be proven [@Review] that dimensional reduction over a symmetric coset space always gives a potential of spontaneous breaking form. Note that in this case the potential acquires the form $$V(\phi)=-\frac{1}{4}g^{ac}g^{bd}Tr(f_{ab}^{\ \
i}J_{i}-[\phi_{a},\phi_{b}])(f_{cd}^{\ \
j}J_{j}-[\phi_{c},\phi_{d}]),$$ since the structure constants $f_{ab}^{\ \ c}$ are equal to zero. Next we decompose the adjoint representation of $S$ under $R$, $$\begin{aligned}
S &\supset& R \nonumber \\
adjS &=& adjR+\sum(s_{a}+\overline{s}_{a}),\end{aligned}$$ and introduce the generators of the coset, $$Q_{S}=(Q_{i},Q_{s_{a}},Q_{\overline{s}_{a}}),$$ where $Q_{i}$ correspond to $R$ and $Q_{s_{a}}$ and $Q_{\overline{s}_{a}}$ to $s_{a}$ and $\overline{s}_{a}$. With this notation and using the complex metric $g^{i\overline{j}}$ the potential (34) can be rewritten as $$\begin{aligned}
V(\phi)=-\frac{1}{2}g^{s_{a}\overline{s}_{a}}g^{t_{a}\overline{t}_{a}}
Tr(f_{s_{a}t_{a}}^{\ \
i}J_{i}-[\phi_{s_{a}},\phi_{t_{a}}])(f_{\overline{s}_{a}\overline{t}_{a}}^{\
\ j}J_{j}-[\phi_{\overline{s}_{a}},\phi_{\overline{t}_{a}}])
\nonumber
\\
-\frac{1}{2}g^{s_{a}\overline{s}_{a}}g^{t_{a}\overline{t}_{a}}Tr(f_{s_{a}\overline{t}_{a}}^{\
\
i}J_{i}-[\phi_{s_{a}},\phi_{\overline{t}_{a}}])(f_{\overline{s}_{a}t_{a}}^{\
\ j}J_{j}-[\phi_{\overline{s}_{a}},\phi_{t_{a}}]).\end{aligned}$$ Note that the structure constants involved in the first and the second parentheses inside the traces in eq.(37) are of opposite sign, since they appear in the commutator of conjugate generators. The same is true for the commutator of two $\phi$ fields, since they are actually expressed as linear combinations of the gauge group generators; if $\phi_{s_{a}}$ is connected to one generator then $\phi_{\overline{s}_{a}}$ will be connected to its conjugate generator. As a result, terms involving two $J_{i}$ will be constant positive terms, terms with one $J_{i}$ and a $\phi$ commutator will be negative mass terms, and finally terms involving two $\phi$ commutators will be quartic positive terms. This result remains unaltered if the more general case is considered, where the vector of the coset $S/R$ decomposed under $R$ also contains real representations. So in conclusion the potential obtained from the dimensional reduction of a gauge theory over symmetric coset spaces is always of a spontaneous breaking form.
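Schematically, collecting the constant, mass and quartic terms just described, the potential over a symmetric coset space takes the familiar spontaneous-breaking form $$V(\phi)\sim c-m^{2}\,\phi^{\dagger}\phi+\lambda\,(\phi^{\dagger}\phi)^{2}, \qquad c,\ m^{2},\ \lambda>0,$$ with the coefficients built from the metric of the coset, the structure constants and the generators $J_{i}$.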
In the fermion part of the Lagrangian the first term is just the kinetic term of fermions, while the second is the Yukawa term [@Kapetanakis]. Note that since $\psi$ is a Majorana-Weyl spinor in ten dimensions the representation in which the fermions are assigned under the gauge group must be real. The last term in eq.(31) can be written as $$L_{Y}= -\frac{i}{2}\overline{\psi}\Gamma^{a}(\partial_{a}-
\frac{1}{2}f_{ibc}e^{i}_{\gamma}e^{\gamma}_{a}\Sigma^{bc}-
\frac{1}{2}G_{abc}\Sigma^{bc}- \phi_{a}) \psi \nonumber \\
=\frac{i}{2}\overline{\psi}\Gamma^{a}\nabla_{a}\psi+
\overline{\psi}V\psi ,$$ where $$\begin{aligned}
\nabla_{a}& =& - \partial_{a} +
\frac{1}{2}f_{ibc}e^{i}_{\gamma}e^{\gamma}_{a}\Sigma^{bc} + \phi_{a},\\
V&=&\frac{i}{4}\Gamma^{a}G_{abc}\Sigma^{bc},\end{aligned}$$ and we have used the full connection with torsion [@Batakis] given by $$\theta_{\ \ c b}^{a} = - f_{\ \
ib}^{a}e^{i}_{\alpha}e^{\alpha}_{c}-(D_{\ \ cb}^{a} +
\frac{1}{2}\Sigma_{\ \ cb}^{a}) = - f_{\ \
ib}^{a}e^{i}_{\alpha}e^{\alpha}_{c}- G_{\ \ cb}^{a}$$ with $$D_{\ \ cb}^{a} = g^{ad}\frac{1}{2}[f_{db}^{\ \ e}g_{ec} + f_{
cb}^{\ \ e}g_{de} - f_{cd}^{\ \ e}g_{be}]$$ and $$\Sigma_{abc}= 2\tau(D_{abc} +D_{bca} - D_{cba}).$$ We have already noticed that the CSDR constraints tell us that $\partial_{a}\psi= 0$. Furthermore we can consider the Lagrangian at the point $y=0$, due to its invariance under $S$-transformations, and as we mentioned $e^{i}_{\gamma}=0$ at that point. Therefore eq.(39) becomes just $\nabla_{a}= \phi_{a}$ and the term $\frac{i}{2}\overline{\psi}\Gamma^{a}\nabla_{a}\psi $ in eq.(38) is exactly the Yukawa term.
Let us examine now the last term appearing in eq.(38). One can easily show that the operator $V$ anticommutes with the six-dimensional helicity operator [@Review]. Furthermore one can show that $V$ commutes with the $T_{i}=
-\frac{1}{2}f_{ibc}\Sigma^{bc}$ ($T_{i}$ close the $R$-subalgebra of $SO(6)$). In turn we can draw the conclusion, exploiting Schur's lemma, that the non-vanishing elements of $V$ are only those which appear in the decomposition of both $SO(6)$ irreps $4$ and $\overline{4}$, i.e. the singlets. Since this term is of pure geometric nature, we reach the conclusion that the singlets in $4$ and $\overline{4}$ will acquire large geometrical masses, a fact that has serious phenomenological implications. In supersymmetric theories defined in higher dimensions, it means that the gauginos obtained in four dimensions after dimensional reduction receive masses comparable to the compactification scale. However, as we shall see in the next sections, this result changes in the presence of torsion. We note that for symmetric coset spaces the $V$ operator is absent because $f_{ab}^{c}$ are vanishing by definition in that case.
Supersymmetry Breaking by Dimensional Reduction over Coset Spaces.
==================================================================
Recently a lot of interest has been triggered by the possibility that superstrings can be defined at the TeV scale [@Anton]. The string tension becomes an arbitrary parameter and can lie anywhere below the Planck scale, even as low as the TeV scale. The main advantage of having the string tension at the TeV scale, besides the obvious experimental interest, is that it offers an automatic protection of the gauge hierarchy [@Anton], alternative to low energy supersymmetry [@Dim] or dynamical electroweak symmetry breaking [@Fahri; @Marciano; @Trianta]. However, the only vacua of string theory free of all pathologies are supersymmetric. Then the original supersymmetry of the theory, not being necessary in four dimensions, could be broken by the dimensional reduction procedure.
The weakly coupled ten-dimensional $E_{8} \times E_{8}$ supersymmetric gauge theory is one of the few to possess the advantage of anomaly freedom [@Green] and has been extensively used in efforts to describe quantum gravity along with the observed low energy interactions in the heterotic string framework [@Theisen]. In addition, its strong coupling limit provides an interesting example of the realization of the brane picture, i.e. $E_{8}$ gauge fields and matter live on the two ten-dimensional boundaries, while gravitons propagate in the eleven-dimensional bulk [@Horava].
In the following we reduce a supersymmetric ten-dimensional gauge theory based on $E_{8}$ over coset spaces and examine the resulting four-dimensional theory, mostly as far as supersymmetry breaking is concerned.
Supersymmetry Breaking by Dimensional Reduction over Symmetric Coset spaces.
----------------------------------------------------------------------------
Let us first examine the reduction of the ten dimensional ${\cal N} = 1$ supersymmetric $E_{8}$ gauge theory over the symmetric coset spaces of table 1.\
[*[ **a. Reduction of $G=E_{8}$ over $B=SO(7)/SO(6)$**]{}*]{}\
First we review the reduction of $E_{8}$ over the 6-sphere [@Review; @Chapline; @Pman]. In that case $B=SO(7)/SO(6)$, $D=10$ and the Weyl-Majorana fermions belong to the adjoint of $G$. The embedding of $R=SO(6)$ in $E_{8}$ is suggested by the decomposition $$E_{8} \supset SO(6) \times SO(10)$$ $$248 = (15,1)+(1,45)+(6,10)+(4,16)+(\overline{4},\overline{16}).$$ The $R=SO(6)$ content of the vector and spinor of $SO(7)/SO(6)$ are $6$ and $4$ respectively. The condition that guarantees the anomaly freedom of the four-dimensional theory, given in eq.(30), is satisfied and the resulting gauge group is $H=C_{E_{8}}(SO(6))=SO(10)$. According to the CSDR rules (24),(25) and (26),(27) the surviving scalars in four dimensions transform as a 10-plet under the gauge group $SO(10)$, while the surviving fermions provide the four-dimensional theory with a $16_{L}$ and a $\overline{16}_{R}$, which are identified by the Weyl-Majorana condition. Concerning supersymmetry, obviously any sign of the supersymmetry of the original theory has disappeared in the process of dimensional reduction. On the other hand the four-dimensional theory is a GUT with fermions in a multiplet which is appropriate to describe quarks and leptons (including a right-handed neutrino). Finally, since $SO(7)$ has an isomorphic image in $E_{8}$, according to the theorem discussed in subsection 2.3 the $SO(10)$ breaks further, due to the v.e.v. of the $10$-plet Higgs, down to $C_{E_{8}}(SO(7))=SO(9)$. Therefore the scalar field content of the four-dimensional theory is appropriate for the electroweak symmetry breaking but not for the GUT breaking.\
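The anomaly condition eq.(30) can be cross-checked arithmetically for this embedding. In the sketch below (plain Python) the index values are the standard $SU(4)\simeq SO(6)$ Dynkin indices rescaled so that the vector $6$ has index two, as the stated normalization requires; these numerical values are our assumption, not data quoted in the text:

```python
# Indices of SO(6) ~ SU(4) irreps, normalized so that l(6) = 2
# (standard SU(4) indices 4 -> 1/2, 6 -> 1, 15 -> 4, all rescaled by 2).
index = {"1": 0, "4": 1, "4bar": 1, "6": 2, "15": 8}

# 248 = (15,1)+(1,45)+(6,10)+(4,16)+(4bar,16bar) under SO(6) x SO(10);
# each entry is (R-irrep label, dimension of the SO(10) factor), and
# each term (r_i, h_i) contributes dim(h_i) * l(r_i) to l(G).
decomposition = [("15", 1), ("1", 45), ("6", 10), ("4", 16), ("4bar", 16)]

l_G = sum(dim_h * index[r] for r, dim_h in decomposition)
print(l_G)  # 8 + 0 + 20 + 16 + 16 = 60
```

The sum indeed reproduces $l(G)=60$, confirming anomaly freedom for this case.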
[*[**b. Reduction of $G=E_{8}$ over $B=SU(4)/SU(3) \times
U(1).$**]{}*]{}\
In this case $G=E_{8}$ and $S/R=SU(4)/SU(3) \times U(1)$. The embedding of $R=SU(3) \times U(1)$ is determined by the following decomposition $$E_{8} \supset SU(3) \times U(1) \times SO(10)$$ $$248=
(1_{0}+3_{-4}+\overline{3}_{4}+8_{0},1)+(1_{0},45)+(3_{2}+\overline{3}_{-2},10)
+(1_{3}+3_{-1},16)+(1_{-3}+3_{1},\overline{16}),$$ where the $SU(3) \times U(1)$ is the maximal subgroup of $SO(6)
\approx SU(4)$ appearing in the decomposition (44). The $R$ is chosen to be identified with the $SU(3) \times U(1)$ of the above decomposition. Therefore the resulting four-dimensional gauge group is $H=C_{E_{8}}(SU(3) \times U(1))=SO(10) \times U(1)$ (The $U(1)$ appears since the $U(1)$ in $R$ centralizes itself). The $R=SU(3) \times U(1)$ content of $SU(4)/SU(3) \times U(1)$ vector and spinor can be read from table 1 and are $3_{-2}+\overline{3}_{2}$ and $1_{3}+3_{-1}$ respectively. Therefore we find that the surviving scalars in four dimensions transform as $10_{2}$ and $10_{-2}$, while the four-dimensional fermions transform as $16_{3}$ and $16_{-1}$ under the four-dimensional gauge group $H=SO(10) \times U(1)$. Again there is no sign in four dimensions of the original supersymmetry.\
[*[ **c. Reduction of $G=E_{8}$ over $B=Sp(4)/(SU(2) \times
U(1))_{max.}$**]{}*]{}\
Next we choose $G=E_{8}$ and $S/R=Sp(4)/(SU(2) \times U(1))_{max.}
$. The embedding of $R=(SU(2) \times U(1))_{max}$ is determined by the decomposition (45) when the $SU(2)$ is chosen to be the maximal subgroup of $SU(3)$. In that case the decomposition of the $248$ of $E_{8}$ is the following $$E_{8} \supset SU(2) \times U(1) \times SO(10)$$ $$\begin{aligned}
248 =
(1,1)_{0}+(3,1)_{-4}+(3,1)_{4} \nonumber \\
+(3,1)_{0}+(5,1)_{0}+(1,45)_{0} \nonumber \\ + (3,10)_{2} +
(3,10)_{-2} + (1,16)_{3} \nonumber \\ +
(3,16)_{-1}+(1,\overline{16})_{-3}+(3,\overline{16})_{1}.\end{aligned}$$ From the decomposition (46) it is clear that the four-dimensional gauge group is $H=C_{E_{8}}((SU(2) \times U(1))_{max.})=SO(10)
\times U(1)$. The $R=(SU(2) \times U(1))_{max.}$ content of $Sp(4)/(SU(2) \times U(1))_{max.}$ vector and spinor according to table 1 are $3_{-2}+3_{2}$ and $1_{3}+3_{-1}$ respectively under $R$. Therefore the particle content of the four-dimensional theory is a set of $10_{2}$, $10_{-2}$ scalars and a set of $16_{3}$, $16_{-1}$ left handed spinors. Once more no sign of the original supersymmetry is left in the spectrum of the four-dimensional theory.\
[*[**d. Reduction of $G=E_{8}$ over $B=Sp(4) \times
SU(2)/SU(2) \times SU(2) \times U(1)$.**]{}*]{}\
Next we choose again $G=E_{8}$ while $S/R=Sp(4) \times SU(2)/SU(2)
\times SU(2) \times U(1)$. The embedding of $R=SU(2) \times SU(2)
\times U(1)$ is given by the decomposition $$E_{8} \supset SU(2)
\times SU(2) \times U(1) \times SO(10)$$ $$\begin{aligned}
248=(3,1,1)_{0}+(1,3,1)_{0}+(1,2,16)_{-1}+(1,2,\overline{16})_{1}
\nonumber \\ +(1,1,1)_{0}+(1,1,45)_{0}+(1,1,10)_{2}+(1,1,10)_{-2}
\nonumber \\+(2,2,1)_{2}+(2,2,1)_{-2}+(2,2,10)_{0} \nonumber \\
+(2,1,16)_{1}+(2,1,\overline{16})_{-1},\end{aligned}$$ where the $R=SU(2) \times SU(2) \times U(1)$ is the maximal subgroup of $ SO(6) \approx SU(4)$ appearing in the decomposition (44). The four-dimensional gauge group that survives after dimensional reduction is $H=C_{E_{8}}(SU(2) \times SU(2) \times
U(1))=SO(10) \times U(1)$. According to table 1 the $R=SU(2)
\times SU(2) \times U(1)$ content of $Sp(4) \times SU(2)/SU(2)
\times SU(2) \times U(1)$ vector and spinor are $(2,2)_{0}+(1,1)_{2}+(1,1)_{-2}$ and $(2,1)_{1}+(1,2)_{-1}$, respectively. Therefore the scalar fields that survive in four dimensions belong to $10_{0}$, $10_{2}$, $10_{-2}$ of $H=SO(10)
\times U(1)$. Similarly the surviving fermions in four dimensions transform as $16_{1}$, $16_{-1}$ left-handed multiplets.\
[*[**e. Reduction of $G=E_{8}$ over $B=SU(3) \times
SU(2)/SU(2) \times U(1) \times U(1)$**]{}*]{}\
Choosing $G=E_{8}$ and $S/R=SU(3) \times SU(2)/SU(2) \times U(1)
\times U(1)$ we have another interesting example. The embedding of $R=SU(2) \times U(1) \times U(1)$ in $E_{8}$ is given by the decomposition $$E_{8} \supset SU(2) \times U(1) \times U(1) \times
SO(10)$$ $$\begin{aligned}
248=(1,45)_{(0,0)}+(3,1)_{(0,0)}+(1,1)_{(0,0)}+(1,1)_{(0,0)}+(1,1)_{(2,0)}
\nonumber \\
+(1,1)_{(-2,0)}+(2,1)_{(1,2)}+(2,1)_{(-1,2)}+(2,1)_{(-1,-2)}+(2,1)_{(1,-2)}
\nonumber \\
+(1,10)_{(0,2)}+(1,10)_{(0,-2)}+(2,10)_{(1,0)}+(2,10)_{(-1,0)}
\nonumber \\ +(2,16)_{(0,1)}+(1,16)_{(1,-1)}+(1,16)_{(-1,-1)}
\nonumber \\
+(2,\overline{16})_{(0,-1)}+(1,\overline{16})_{(-1,1)}+(1,\overline{16})_
{(1,1)},\end{aligned}$$ where the $R=SU(2) \times U(1) \times U(1)$ is identified with the one appearing in the following decomposition of maximal subgroups $$SO(6) \supset SU(2) \times SU(2) \times U(1) \supset SU(2)
\times U(1) \times U(1)$$ and the $SU(2) \times SU(2) \times
U(1)$ in $E_{8}$ is the maximal subgroup of $SO(6)$ appearing in the decomposition (44). We find that the four-dimensional gauge group is $H=C_{E_{8}}(SU(2) \times U(1) \times U(1))=SO(10) \times
U(1) \times U(1)$. The vector and spinor content under $R$ of the specific coset can be found in table 1. Choosing $a=b=1$ we find that the scalar fields of the four-dimensional theory transform as $10_{(0,2)}$, $10_{(0,-2)}$, $10_{(1,0)}$, $10_{(-1,0)}$ under $H$. Also, we find that the fermions of the four-dimensional theory are the following left-handed multiplets of $H$: $16_{(-1,-1)}$, $16_{(1,-1)}$, $16_{(0,1)}$.\
[*[ **f. Reduction of $G=E_{8}$ over $B=(SU(2)/U(1))^{3}$.**]{}*]{}\
Finally we examine the case with $G=E_{8}$ and $S/R=(SU(2)/U(1))^{3}$. The $R=U(1)^{3}$ is chosen to be identified with the three $U(1)$ subgroups of $SO(6)$ appearing in the decomposition $$SO(6) \supset SU(2) \times SU(2) \times U(1)
\supset U(1) \times U(1) \times U(1),$$ where the $SO(6)$ is again the one of the decomposition (44). Then we find the following decomposition of $248$ of $E_{8}$, $$E_{8} \supset U(1) \times
U(1) \times U(1) \times SO(10)$$ $$\begin{aligned}
248 =
1_{(0,0,0)}+1_{(0,0,0)}+1_{(0,0,0)}+1_{(0,0,\pm2)}+1_{(\pm4,\mp2,0)}\nonumber\\
+1_{(0,\pm3,\pm1)}+1_{(0,\mp3,\pm1)}+1_{(\pm4,\pm1,\pm1)}+1_{(\pm4,\pm1,\mp1)}\nonumber\\
+45_{(0,0,0)}+16_{(-3,0,0)}+\overline{16}_{(3,0,0)}+16_{(1,-2,0)}\nonumber\\
+\overline{16}_{(-1,2,0)}+10_{(-2,-2,0)}+\overline{10}_{(2,2,0)}\nonumber\\
+16_{(1,1,1)}+\overline{16}_{(-1,-1,-1)}+16_{(1,1,-1)}
+\overline{16}_{(-1,-1,1)}\nonumber\\+10_{(-2,1,1)}+\overline{10}_{(2,-1,-1)}
+10_{(-2,1,-1)}+\overline{10}_{(2,-1,1)}.\end{aligned}$$ Therefore the four-dimensional gauge group is $H=C_{E_{8}}(U(1)^{3})=SO(10)\times U(1)^{3} $. The $R=U(1)^{3}$ content of $(SU(2)/U(1))^{3}$ vector and spinor can be found in table 1. With $a=b=c=1$ the vector and spinor become $(0,0,\pm
2)+(\pm 2,0,0)+(0,\pm 2,0)$ and $(1,1,1)+(-1,-1, 1)+(-1, 1,-1)+(1,
-1,-1)$ respectively and therefore the four-dimensional scalar fields transform as singlets, while fermions transform as left-handed $16_{(1,1,1)}$, and $16_{(1,1,-1)}$ under $H$.\
Note that in all the above cases a-f the chosen embeddings of $R$ in $G$ satisfy eq.(30) and therefore the resulting four-dimensional theories are anomaly free.
Supersymmetry Breaking by Dimensional Reduction over non-symmetric Coset Spaces
--------------------------------------------------------------------------------
Next we start with the same theory in ten dimensions but we reduce it over the three non-symmetric coset spaces listed in table 2.\
[*[**a. Soft Supersymmetry Breaking by dimensional reduction over $G_{2}/SU(3)$**]{}*]{}\
First we choose $B= G_{2}/SU(3)$ [@Pman]. We use the decomposition $$\begin{aligned}
E_{8} &\supset& SU(3) \times E_{6}\nonumber \\ 248 &=& (8,1) +
(1,78) + (3,27) + (\overline{3},\overline{27})\end{aligned}$$ and we choose $SU(3)$ to be identified with $R$. The $R=SU(3)$ content of the $G_{2}/SU(3)$ vector and spinor are $3 + \overline{3}$ and $1+3$, as can be read from table 2. The condition (30) for the cancellation of anomalies is satisfied and the resulting four-dimensional gauge group is $H = C_{E_{8}}(SU(3)) = E_{6}$, which contains fermions transforming as $78$ and $27$ and complex scalar fields transforming as $27$. Therefore we obtain in four dimensions an ${\cal N}=1$ supersymmetric anomaly free $E_{6}$ gauge theory, with a vector superfield grouping the gauge bosons and the fermions transforming according to the adjoint, and a matter chiral superfield grouping the scalars and fermions in the fundamental of the gauge group $E_{6}$. In addition, a very interesting feature worth stressing is that the ${\cal N}=1$ supersymmetry of the four-dimensional theory is broken by soft terms. More precisely, the scalar soft terms appear in the potential of the theory and the gaugino masses come from a geometric (torsion) term, as already stated.
We proceed by calculating these terms. In order to determine the potential we begin by examining the decomposition of the adjoint of the specific $S$ under $R$, i.e. $$\begin{aligned}
G_{2} &\supset& SU(3) \nonumber \\ 14 &=& 8+3+\overline{3}.\end{aligned}$$ Corresponding to this decomposition we introduce the generators of $G_{2}$, $$Q_{G_{2}} = \{ Q^{a},Q^{\rho},Q_{\rho}\},$$ where $a=1,\ldots,8$ correspond to the $8$ of $SU(3)$, while $\rho = 1,2,3$ correspond to $3$ or $\overline{3}$. Then according to the decomposition (52), the non-trivial commutation relations of the generators of $G_{2}$ are given in table 3 of appendix A.
The potential of any theory reduced over $G_{2}/SU(3)$ can be written in terms of the fields $$\{ \phi^{a} , \phi^{\rho} ,\phi_{\rho} \},$$ which correspond to the decomposition (52) of $G_{2}$ under $SU(3)$. The $\phi^{a}$ are equal to the generators of the $R$ subgroup. With the help of the commutation relations of table 3 we find that the potential is given by [@Review]: $$\begin{aligned}
V(\phi)=\frac{8}{R_{1}^{4}}+\frac{4}{3R_{1}^{4}}Tr(\phi^{\rho}\phi_{\rho})
-\frac{1}{2R_{1}^{4}}(\lambda^{i})^{\rho}_{\sigma}Tr(J_{i}[\phi_{\rho},\phi^{\sigma}])
+\frac{1}{R_{1}^{4}}\sqrt{\frac{2}{3}}\epsilon^{\rho\sigma\tau}Tr(\phi_{\tau}[\phi_{\rho},\phi_{\sigma}])
\nonumber \\
-\frac{1}{4R_{1}^{4}} Tr([\phi_{\rho},\phi_{\sigma}][\phi^{\rho},\phi^{\sigma}] +
[\phi^{\rho},\phi_{\sigma}][\phi_{\rho},\phi^{\sigma}]),\end{aligned}$$ where the $R_{1}$ appearing in the denominator of various terms is the radius of $G_{2}/SU(3)$. Then to proceed with our specific choice $G=E_{8}$ we use the embedding (50) of $R=SU(3)$ in $E_{8}$ and divide accordingly the generators of $E_{8}$ $$Q_{E_{8}} = \{ Q^{a} , Q^{\alpha},Q^{i\rho},Q_{i\rho} \}$$ with $a = 1,\ldots,8$, $\alpha = 1,\ldots,78$, $i=1,\ldots,27$, $\rho=1,2,3$. The non-trivial commutation relations of the generators of $E_{8}$ according to the decomposition (55) are given in table 4 of appendix A. Next we would like to solve the constraints (33) which in the present case take the form $[
\phi^{a},\phi^{\rho}]=-(\lambda^{a})^{\rho}_{\sigma}\phi^{\sigma}$, and examine the resulting four-dimensional potential in terms of the unconstrained scalar fields $\beta$. The solutions of the constraints in terms of the genuine Higgs fields are $$\phi^{a}=Q^{a},\ \phi_{\rho}=R_{1}\beta^{i}Q_{i\rho},\
\phi^{\rho}=R_{1}\beta_{i}Q^{i\rho}.$$ In turn we can express the Higgs potential in terms of the genuine Higgs field $\beta$ and we find $$V(\beta)= \frac{8}{R_{1}^{4}}- \frac{40}{3R_{1}^{2}}\beta^{2} -
\left[\frac{4}{R_{1}}d_{ijk}\beta^{i}\beta^{j}\beta^{k} + h.c
\right] +\beta^{i}\beta^{j}d_{ijk}d^{klm}\beta_{l}\beta_{m}+
\frac{11}{4}\sum_{\alpha}\beta^{i}(G^{\alpha})_{i}^{j}
\beta_{j}\beta^{k}(G^{\alpha})_{k}^{l}\beta_{l},$$ where $d^{ijk}$, the symmetric invariant tensor of $E_{6}$, and $(G^{\alpha})^{i}_{j}$ are defined in ref.[@Kephart]. From the potential given in eq.(57) we can read directly the $F$-, $D$- and scalar soft terms which softly break the supersymmetric theory obtained by CSDR over $G_{2}/SU(3)$. The $F$-terms are obtained from the superpotential $${\cal W} (B) =\frac{1}{3}d_{ijk}B^{i}B^{j}B^{k},$$ where $B$ is the chiral superfield whose scalar component is $\beta$. Let us note that the superpotential could also be identified from the relevant Yukawa terms of the fermion part of the Lagrangian. Correspondingly the $D$-terms are $$D^{\alpha}
=\sqrt{\frac{11}{2}}\beta^{i}(G^{\alpha})^{j}_{i}\beta_{j}.$$ The terms in the potential $V(\beta)$ given in eq.(57) that do not result from the $F$- and $D$-terms belong to the soft supersymmetry breaking part of the Lagrangian. These terms are the following, $${\cal L}_{scalarSSB} =
-\frac{40}{3R_{1}^{2}}\beta^{2}-\left[\frac{4}{R_{1}}d_{ijk}\beta^{i}\beta^{j}\beta^{k}+
h.c \right].$$
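As a simple consistency check, the quartic term of the potential (57) is reproduced by the $F$-terms of the superpotential (58): with $F_{i}=\partial {\cal W}/\partial B^{i}$ one finds
$$F_{i}=d_{ijk}B^{j}B^{k}, \qquad
\sum_{i}|F_{i}|^{2}=\beta^{j}\beta^{k}d_{ijk}d^{ilm}\beta_{l}\beta_{m},$$
which, by the total symmetry of $d_{ijk}$, coincides with the $\beta^{i}\beta^{j}d_{ijk}d^{klm}\beta_{l}\beta_{m}$ term of eq.(57).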
Note that the potential (57) belongs to the case, discussed in subsection $2.3$, in which $S$ can be embedded in $G$ [@LZ]. Finally, in order to determine the gaugino mass, we calculate the V operator given in eq.(40). Using eq.(42) we find that $$D_{abc}=\frac{R_{1}^{2}}{2}f_{abc}$$ and in turn $G_{abc}=D_{abc}+\frac{1}{2}T_{abc}$ is $$G_{abc}=\frac{R_{1}^{2}}{2}(1+3\tau)f_{abc}.$$ To obtain the previous results the most general $G_{2}$-invariant metric on $G_{2}/SU(3)$ was used, which is $g_{ab}=R_{1}^{2}\delta_{ab}$.\
In addition we need the gamma matrices in ten dimensions given in appendix B. We find that the gauginos acquire a geometrical mass $$M=(1+3\tau)\frac{6}{\sqrt{3}R_{1}}.$$
[*[**b. Soft Supersymmetry breaking by dimensional reduction over\
$Sp(4)/(SU(2) \times U(1))_{non-max.}$** ]{}*]{}\
In this case we start again with a ten-dimensional supersymmetric gauge theory based on the group $E_{8}$ and reduce it over the non-symmetric coset $Sp(4)/(SU(2) \times U(1))_{non-max.}$. Therefore we have chosen $G=E_{8}$ and $B=Sp(4)/(SU(2) \times U(1))_{non-max.}$. We begin by giving the decompositions to be used, $$E_{8} \supset SU(3)
\times E_{6} \supset SU(2) \times U(1) \times E_{6}.$$ The decomposition of $248$ of $E_{8}$ under $SU(3) \times E_{6}$ is given in eq.(50) while under $ (SU(2) \times U(1)) \times E_{6}$ is the following, $$\begin{aligned}
248 =
(3,1)_{0}+(1,1)_{0}+(1,78)_{0}+(2,1)_{3}+(2,1)_{-3}\nonumber\\
+(1,27)_{-2}
+(1,\overline{27})_{2}+(2,27)_{1}+(2,\overline{27})_{-1}.\end{aligned}$$ In the present case $R$ is chosen to be identified with the $SU(2)
\times U(1)$ of the latter of the above decompositions. Therefore the resulting four-dimensional gauge group is $H=C_{E_{8}}(SU(2)
\times U(1))= E_{6} \times U(1)$. The $R=SU(2) \times U(1)$ content of the $Sp(4)/(SU(2) \times U(1))_{non-max.}$ vector and spinor is, according to table 2, $1_{2}+1_{-2}+2_{1}+2_{-1}$ and $1_{0}+1_{-2}+2_{1}$ respectively. Thus applying the CSDR rules (24),(25) and (26),(27) we find that the surviving fields in four dimensions can be organized in an ${\cal N}=1$ vector supermultiplet $V^{\alpha}$ which transforms as the $78$ of $E_{6}$, an ${\cal N}=1$ $U(1)$ vector supermultiplet $V$ and two chiral supermultiplets ($B^{i}$, $C^{i}$), transforming as $(27,1)$ and $(27,-2)$ under $E_{6} \times U(1)$.
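The multiplet content just listed can be cross-checked by counting dimensions in the decomposition (61); a small Python bookkeeping sketch of ours, using the standard irrep dimensions (the $U(1)$ charges do not affect the counting):

```python
# Dimension check for the 248 of E8 under SU(2) x U(1) x E6, eq. (61).
# Each entry: (dim of SU(2) irrep, dim of E6 irrep).
pieces = [(3, 1), (1, 1), (1, 78), (2, 1), (2, 1),
          (1, 27), (1, 27), (2, 27), (2, 27)]
total = sum(d2 * d6 for d2, d6 in pieces)
assert total == 248    # dimension of the adjoint of E8
print(total)  # 248
```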
To determine the potential we have to go into the details and examine further the decomposition of the adjoint of the specific $S$ under $R$, i.e. $$Sp(4) \supset (SU(2) \times
U(1))_{non-max.}$$ $$10 = 3_{0}+1_{0}+1_{2}+1_{-2}+2_{1}+2_{-1}.$$ Then, according to the decomposition (65), the generators of $Sp(4)$ can be grouped as follows, $$Q_{Sp(4)} = \{ Q^{\rho},Q,Q_{+},Q^{+},Q^{a},Q_{a}\},$$ where $\rho$ takes the values $1,2,3$ and $a$ the values $1,2$. The non-trivial commutation relations of the $Sp(4)$ generators (66) are given in table 5 of appendix A. Furthermore the decomposition (66) suggests the following change in the notation of the scalar fields $$\{ \phi_{I}, I=1,\ldots,10\} \longrightarrow ( \phi^{\rho}, \phi,
\phi_{+}, \phi^{+}, \phi^{a}, \phi_{a}),$$ which facilitates the solution of the constraints (33).
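As a similar bookkeeping check of ours, the branching (65) and the generator split (66) both account for all ten generators of $Sp(4)$:

```python
# Branching Sp(4) > (SU(2) x U(1))_non-max, eq. (65):
# 10 = 3_0 + 1_0 + 1_2 + 1_{-2} + 2_1 + 2_{-1}.
pieces = [3, 1, 1, 1, 2, 2]
assert sum(pieces) == 10   # dimension of the adjoint of Sp(4)

# Generator split (66): {Q^rho (3), Q (1), Q_+ (1), Q^+ (1), Q^a (2), Q_a (2)}.
n_generators = 3 + 1 + 1 + 1 + 2 + 2
print(n_generators)  # 10
```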
On the other hand the potential of any gauge theory reduced over the coset space $Sp(4)/(SU(2) \times U(1))_{non-max.}$ was found [@Review] to be, in terms of the redefined fields (67), $$\begin{aligned}
V(\phi) =
\frac{2\Lambda^{2}+6}{R_{1}^{4}}+\frac{4\Lambda^{2}}{R_{2}^{4}}+
\frac{2}{R_{2}^{4}}Tr(\phi_{+}\phi^{+})+
\frac{2}{R_{1}^{4}}Tr(\phi_{a}\phi^{a})\nonumber\\
-\frac{2\Lambda}{R_{2}^{4}}Tr(Q[\phi_{+},\phi^{+}])
-\frac{\Lambda}{R_{1}^{4}}Tr(Q[\phi_{a},\phi^{a}])
-\frac{1}{R_{1}^{4}}(\tau_{\rho})^{a}_{b}Tr(Q_{\rho}[\phi_{a},\phi^{b}])\nonumber\\
\biggl[-\frac{\sqrt{2}}{R_{1}^{2}}(\frac{1}{R_{2}^{2}}+\frac{1}{2R_{1}^{2}})\epsilon^{ab}
Tr(\phi_{+}[\phi_{a},\phi_{b}]) + h.c\biggr]\nonumber\\ +
\frac{1}{2}(\frac{1}{R_{2}^{2}}[\phi_{+},\phi^{+}]+
\frac{1}{R_{1}^{2}}[\phi_{a},\phi^{a}])^{2}\nonumber\\
-\frac{2}{R_{1}^{2}R_{2}^{2}}Tr([\phi_{+},\phi_{a}][\phi^{+},\phi^{a}])
-\frac{1}{R_{1}^{4}}Tr([\phi_{a},\phi_{b}][\phi^{a},\phi^{b}]),\end{aligned}$$ where $R_{1}$ and $R_{2}$ are the coset space radii. In terms of the radii the real metric[^1] of the coset space is $$g_{ab}=diag(R_{1}^{2},R_{1}^{2},R_{2}^{2},R_{2}^{2},R_{1}^{2},R_{1}^{2})$$
To proceed we use the embedding (64) of $SU(2) \times U(1)$ in $E_{8}$ and divide its generators accordingly, $$Q_{E_{8}} = \{ G^{\rho}, G, G^{\alpha}, G^{a}, G_{a}, G^{i},
G_{i}, G^{ai}, G_{ai} \}$$ where $\rho = 1,2,3$, $a=1,2$, $\alpha=1,\ldots,78$, $i=1,\ldots,27$. The non-trivial commutation relations of the $E_{8}$ generators grouped in (70) are given in table 6 of appendix A.
Now the constraints (33) for the redefined fields in (67) become $$\left[\phi,\phi_{+}\right] = 2\phi_{+},\
\left[\phi,\phi_{a}\right]= \phi_{a},\
\left[\phi_{\rho},\phi_{a}\right]=(\tau_{\rho})^{b}_{a}\phi_{b}.$$ Then we can write the solutions of the constraints (71) in terms of the genuine Higgs fields $\beta^{i}$, $\gamma^{i}$ and the $E_{8}$ generators (70) corresponding to the embedding (64) as follows, $$\begin{aligned}
\phi^{\rho}=G^{\rho}, \phi=\sqrt{3}G, \nonumber \\
\phi_{a}=R_{1}\frac{1}{\sqrt{2}}\beta^{i}G_{1i},
\phi_{+}=R_{2}\gamma^{i}G_{i}.\end{aligned}$$
The potential (68) in terms of the physical scalar fields $\beta^{i}$ and $\gamma^{i}$ becomes $$\begin{aligned}
V(\beta^{i},\gamma^{i})= const -\frac{6}{R_{1}^{2}}\beta^{2}
-\frac{4}{R_{2}^{2}}\gamma^{2} \nonumber\\
+\bigg[4\sqrt{\frac{10}{7}}R_{2}
\bigl(\frac{1}{R_{2}^{2}}+\frac{1}{2R_{1}^{2}}\bigr)
d_{ijk}\beta^{i}\beta^{j}\gamma^{k} + h.c \biggr] \nonumber \\
+6\biggl(\beta^{i}(G^{\alpha})_{i}^{j}\beta_{j}
+\gamma^{i}(G^{\alpha})_{i}^{j}\gamma_{j}\biggr)^{2}+\nonumber\\
\frac{1}{3}\biggl(\beta^{i}(1\delta_{i}^{j})\beta_{j}+
\gamma^{i}(-2\delta_{i}^{j})\gamma_{j}\biggr)^{2}\nonumber\\
+\frac{5}{7}\beta^{i}\beta^ {j}d_{ijk}d^{klm}\beta_{l}\beta_{m}
+4\frac{5}{7}\beta^{i}\gamma^{j}d_{ijk}d^{klm}\beta_{l}\gamma_{m}.\end{aligned}$$ From the potential (73) we read the $F$-, $D$- and scalar soft terms as in the previous model. The $F$-terms can be derived from the superpotential $${\cal W}(B^{i},C^{j})= \sqrt{\frac{5}{7}}d_{ijk}B^{i}B^{j}C^{k}.$$ The $D$-term contributions are the sum $$\frac{1}{2}D^{\alpha}D^{\alpha}+\frac{1}{2}DD,$$ where $$D^{\alpha}=\sqrt{12}\bigl(\beta^{i}(G^{\alpha})_{i}^{j}\beta_{j}
+\gamma^{i}(G^{\alpha})_{i}^{j}\gamma_{j}\bigr)$$ and $$D=\sqrt{\frac{2}{3}}\bigl(\beta^{i}(1\delta_{i}^{j})\beta_{j}+
\gamma^{i}(-2\delta_{i}^{j})\gamma_{j}\bigr)$$ corresponding to $E_{6} \times U(1)$. The remaining terms in the potential (73) are the soft breaking mass and trilinear terms; they form the scalar SSB part of the Lagrangian, $${ \cal L}_{scalarSSB}= -\frac{6}{R_{1}^{2}}\beta^{2}
-\frac{4}{R_{2}^{2}}\gamma^{2} + \bigg[4\sqrt{\frac{10}{7}}R_{2}
\bigl(\frac{1}{R_{2}^{2}}+\frac{1}{2R_{1}^{2}}\bigr)
d_{ijk}\beta^{i}\beta^{j}\gamma^{k} + h.c \biggr].$$ The gaugino mass has been calculated in ref.[@Kapetanakis] to be $$M=(1+3\tau)\frac{R_{2}^{2}+2R_{1}^{2}}{8R_{1}^{2}R_{2}}.$$ We note that the chosen embedding of $R=SU(2) \times U(1)$ in $E_{8}$ satisfies the condition (30), which guarantees the renormalizability of the four-dimensional theory, while the absence of any other term that does not belong to the supersymmetric $E_{6} \times U(1)$ theory or to its SSB sector guarantees the improved ultraviolet behaviour of the theory, as in the previous model. Finally, note the contribution of the torsion to the gaugino mass (77).\
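Similarly to the previous model, one can check that the $F$-terms of the superpotential (74) reproduce the quartic terms of the potential (73):
$$F_{C^{k}}=\frac{\partial {\cal W}}{\partial C^{k}}=\sqrt{\frac{5}{7}}d_{ijk}B^{i}B^{j},\qquad
F_{B^{i}}=\frac{\partial {\cal W}}{\partial B^{i}}=2\sqrt{\frac{5}{7}}d_{ijk}B^{j}C^{k},$$
so that $\sum_{k}|F_{C^{k}}|^{2}=\frac{5}{7}\beta^{i}\beta^{j}d_{ijk}d^{klm}\beta_{l}\beta_{m}$ and $\sum_{i}|F_{B^{i}}|^{2}=4\frac{5}{7}\beta^{i}\gamma^{j}d_{ijk}d^{klm}\beta_{l}\gamma_{m}$, in agreement with the last two terms of (73).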
[*[**c. Soft Supersymmetry breaking by reduction over $SU(3)/(U(1) \times U(1))$.**]{}*]{}\
The only difference of this model as compared to the previous ones is that the same theory is reduced over the non-symmetric coset space $B=SU(3)/U(1) \times U(1)$. The decompositions to be used are $$E_{8} \supset SU(2) \times U(1) \times E_{6} \supset
U(1) \times U(1) \times E_{6}.$$ The $248$ of $E_{8}$ is decomposed under $SU(2) \times U(1)$ according to (64), whereas the decomposition under $U(1) \times U(1)$ is the following: $$\begin{aligned}
248 = 1_{(0,0)}+1_{(0,0)}+1_{(3,\frac{1}{2})}+1_{(-3,\frac{1}{2})}+\nonumber\\
1_{(0,-1)}+1_{(0,1)}+1_{(-3,-\frac{1}{2})}+1_{(3,-\frac{1}{2})}+\nonumber\\
78_{(0,0)}+27_{(3,\frac{1}{2})}+27_{(-3,\frac{1}{2})}+27_{(0,-1)}+\nonumber\\
\overline{27}_{(-3,-\frac{1}{2})}+\overline{27}_{(3,-\frac{1}{2})}
+\overline{27}_{(0,1)}.\end{aligned}$$ In the present case $R$ is chosen to be identified with the $U(1)
\times U(1)$ of the latter decomposition. Therefore the resulting four-dimensional gauge group is $$H=C_{E_{8}}(U(1) \times U(1)) =
U(1) \times U(1) \times E_{6}$$ The $R=U(1) \times U(1)$ content of the $SU(3)/U(1) \times U(1)$ vector and spinor is, according to table 2, $$(3,\frac{1}{2})+(-3,\frac{1}{2})
+(0,-1)+(-3,-\frac{1}{2})+(3,-\frac{1}{2})+(0,1)$$ and $$(0,0)+(3,\frac{1}{2})+(-3,\frac{1}{2}) +(0,-1)$$ respectively. Thus applying the CSDR rules we find that the surviving fields in four dimensions are three ${\cal N}=1$ vector multiplets $V^{\alpha},V_{(1)},V_{(2)}$, (where $\alpha$ is an $E_{6}$, $78$ index and the other two refer to the two $U(1)'s$) containing the gauge fields of $U(1) \times U(1) \times E_{6}$. The matter content consists of three ${\cal N}=1$ chiral multiplets ($A^{i}$, $B^{i}$, $C^{i}$) with $i$ an $E_{6}$, $27$ index and three ${\cal
N}=1$ chiral multiplets ($A$, $B$, $C$) which are $E_{6}$ singlets and carry $U(1) \times U(1)$ charges.
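The field content can again be checked against the dimension count of the decomposition (78); a short bookkeeping sketch of ours:

```python
# Dimension check for the 248 of E8 under U(1) x U(1) x E6, eq. (78):
# eight E6 singlets, the 78, three 27's and three 27bar's.
singlets = [1] * 8
pieces = singlets + [78] + [27] * 3 + [27] * 3
total = sum(pieces)
assert total == 248    # dimension of the adjoint of E8
print(total)  # 248
```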
To determine the potential we examine further the decomposition of the adjoint of the specific $S=SU(3)$ under $R=U(1) \times
U(1)$, i.e. $$SU(3) \supset U(1) \times U(1)$$ $$\begin{aligned}
8 = (0,0)+(0,0)+(3,\frac{1}{2})+(-3,\frac{1}{2})
+(0,-1)+\nonumber\\(-3,-\frac{1}{2})+(3,-\frac{1}{2})+(0,1).\end{aligned}$$ Then, according to the decomposition (79), the generators of $SU(3)$ can be grouped as $$Q_{SU(3)} = \{Q_{0},Q'_{0},Q_{1},Q_{2},Q_{3},Q^{1},Q^{2},Q^{3} \}.$$ The non-trivial commutation relations of the $SU(3)$ generators (80) are given in table 7 of appendix A. The decomposition (80) suggests the following change in the notation of the scalar fields, $$(\phi_{I}, I=1,\ldots,8) \longrightarrow ( \phi_{0}, \phi'_{0},
\phi_{1}, \phi^{1}, \phi_{2}, \phi^{2}, \phi_{3}, \phi^{3}).$$
The potential of any theory reduced over $SU(3)/U(1) \times U(1)$ is given in terms of the redefined fields (81) by $$\begin{aligned}
\lefteqn{V(\phi)=(3\Lambda^{2}+\Lambda'^{2})\biggl(\frac{1}{R_{1}^{4}}+\frac{1}{R_{2}^{4}}\biggr)
+\frac{4\Lambda'^{2}}{R_{3}^{4}}}\nonumber\\
&&+\frac{2}{R_{2}^{2}R_{3}^{2}}Tr(\phi_{1}\phi^{1})+
\frac{2}{R_{1}^{2}R_{3}^{2}}Tr(\phi_{2}\phi^{2})
+\frac{2}{R_{1}^{2}R_{2}^{2}}Tr(\phi_{3}\phi^{3})\nonumber\\
&&+\frac{\sqrt{3}\Lambda}{R_{1}^{4}}Tr(Q_{0}[\phi_{1},\phi^{1}])
-\frac{\sqrt{3}\Lambda}{R_{2}^{4}}Tr(Q_{0}[\phi_{2},\phi^{2}])
-\frac{\sqrt{3}\Lambda}{R_{3}^{4}}Tr(Q_{0}[\phi_{3},\phi^{3}])\nonumber\\
&&+\frac{\Lambda'}{R_{1}^{4}}Tr(Q'_{0}[\phi_{1},\phi^{1}])
+\frac{\Lambda'}{R_{2}^{4}}Tr(Q'_{0}[\phi_{2},\phi^{2}])
-\frac{2\Lambda'}{R_{3}^{4}}Tr(Q'_{0}[\phi_{3},\phi^{3}])\nonumber\\
&&+\biggl[\frac{2\sqrt{2}}{R_{1}^{2}R_{2}^{2}}Tr(\phi_{3}[\phi_{1},\phi_{2}])
+\frac{2\sqrt{2}}{R_{1}^{2}R_{3}^{2}}Tr(\phi_{2}[\phi_{3},\phi_{1}])
+\frac{2\sqrt{2}}{R_{2}^{2}R_{3}^{2}}Tr(\phi_{1}[\phi_{2},\phi_{3}])+ h.c\biggr]\nonumber\\
&&+\frac{1}{2}Tr \biggl(\frac{1}{R_{1}^{2}}([\phi_{1},\phi^{1}])+
\frac{1}{R_{2}^{2}}([\phi_{2},\phi^{2}])
+\frac{1}{R_{3}^{2}}([\phi_{3},\phi^{3}])\biggr)^{2}\nonumber\\
&&-\frac{1}{R_{1}^{2}R_{2}^{2}}Tr([\phi_{1},\phi_{2}][\phi^{1},\phi^{2}])
-\frac{1}{R_{1}^{2}R_{3}^{2}}Tr([\phi_{1},\phi_{3}][\phi^{1},\phi^{3}])\nonumber\\
&&-\frac{1}{R_{2}^{2}R_{3}^{2}}Tr([\phi_{2},\phi_{3}][\phi^{2},\phi^{3}]),\end{aligned}$$ where $R_{1},R_{2},R_{3}$ are the coset space radii[^2]. In terms of the radii the real metric[^3] of the coset is $$g_{ab}=diag(R_{1}^{2},R_{1}^{2},R_{2}^{2},R_{2}^{2},R_{3}^{2},R_{3}^{2}).$$
Next we examine the commutation relations of $E_{8}$ under the decomposition (78). Under this decomposition the generators of $E_{8}$ can be grouped as $$\begin{aligned}
Q_{E_{8}}=\{Q_{0},Q'_{0},Q_{1},Q_{2},Q_{3},Q^{1},Q^{2},Q^{3},Q^{\alpha},\nonumber\\
Q_{1i},Q_{2i},Q_{3i},Q^{1i},Q^{2i},Q^{3i} \},\end{aligned}$$ where, $ \alpha=1,\ldots,78 $ and $ i=1,\ldots,27 $. The non-trivial commutation relations of the $E_{8}$ generators (84) are given in tables 8.1 and 8.2 of appendix A.\
Now the constraints (33) for the redefined fields in (81) are, $$\begin{aligned}
\left[\phi_{1},\phi_{0}\right]=\sqrt{3}\phi_{1}&,&
\left[\phi_{1},\phi_{0}'\right]=\phi_{1}, \nonumber \\
\left[\phi_{2},\phi_{0}\right]=-\sqrt{3}\phi_{2}&,&
\left[\phi_{2},\phi_{0}'\right]=\phi_{2}, \nonumber \\
\left[\phi_{3},\phi_{0}\right]=0&,&
\left[\phi_{3},\phi_{0}'\right]=-2\phi_{3}.\end{aligned}$$ The solutions of the constraints (85) in terms of the genuine Higgs fields and of the $E_{8}$ generators (84) corresponding to the embedding (78) of $R=U(1) \times U(1)$ in $E_{8}$ are $\phi_{0}=\Lambda Q_{0}$ and $\phi_{0}'=\Lambda' Q_{0}'$, with $\Lambda=\Lambda'=\frac{1}{\sqrt{10}}$, and $$\begin{aligned}
\phi_{1} &=& R_{1} \alpha^{i} Q_{1i}+R_{1} \alpha Q_{1},
\nonumber\\ \phi_{2} &=& R_{2} \beta^{i} Q_{2i}+ R_{2} \beta
Q_{2}, \nonumber\\ \phi_{3} &=& R_{3} \gamma^{i} Q_{3i}+ R_{3}
\gamma Q_{3},\end{aligned}$$ where the unconstrained scalar fields transform under $U(1)
\times U(1) \times E_{6}$ as $$\begin{aligned}
\alpha_{i} \sim 27_{(3,\frac{1}{2})}&,&\alpha \sim
1_{(3,\frac{1}{2})},\nonumber\\ \beta_{i} \sim
27_{(-3,\frac{1}{2})}&,&\beta \sim
1_{(-3,\frac{1}{2})},\nonumber\\ \gamma_{i} \sim
27_{(0,-1)}&,&\gamma \sim 1_{(0,-1)}.\end{aligned}$$ The potential (82) becomes $$\begin{aligned}
V(\alpha^{i},\alpha,\beta^{i},\beta,\gamma^{i},\gamma)= const. +
\biggl( \frac{4R_{1}^{2}}{R_{2}^{2}R_{3}^{2}}-\frac{8}{R_{1}^{2}}
\biggr)\alpha^{i}\alpha_{i} +\biggl(
\frac{4R_{1}^{2}}{R_{2}^{2}R_{3}^{2}}-\frac{8}{R_{1}^{2}}
\biggr)\overline{\alpha}\alpha \nonumber \\
+\biggl(\frac{4R_{2}^{2}}{R_{1}^{2}R_{3}^{2}}-\frac{8}{R_{2}^{2}}\biggr)
\beta^{i}\beta_{i}
+\biggl(\frac{4R_{2}^{2}}{R_{1}^{2}R_{3}^{2}}-\frac{8}{R_{2}^{2}}\biggr)
\overline{\beta}\beta \nonumber \\
+\biggl(\frac{4R_{3}^{2}}{R_{1}^{2}R_{2}^{2}}
-\frac{8}{R_{3}^{2}}\biggr)\gamma^{i}\gamma_{i}
+\biggl(\frac{4R_{3}^{2}}{R_{1}^{2}R_{2}^{2}}
-\frac{8}{R_{3}^{2}}\biggr)\overline{\gamma}\gamma \nonumber\\
+\biggl[\sqrt{2}80\biggl(\frac{R_{1}}{R_{2}R_{3}}+\frac{R_{2}}{R_{1}
R_{3}}+\frac{R_{3}}{R_{2}R_{1}}\biggr)d_{ijk}\alpha^{i}\beta^{j}\gamma^{k}\nonumber\\
+\sqrt{2}80\biggl(\frac{R_{1}}{R_{2}R_{3}}+\frac{R_{2}}{R_{1}
R_{3}}+\frac{R_{3}}{R_{2}R_{1}}\biggr)\alpha\beta\gamma+
h.c\biggr]\nonumber\\
+\frac{1}{6}\biggl(\alpha^{i}(G^{\alpha})_{i}^{j}\alpha_{j}
+\beta^{i}(G^{\alpha})_{i}^{j}\beta_{j}
+\gamma^{i}(G^{\alpha})_{i}^{j}\gamma_{j}\biggr)^{2}\nonumber\\
+\frac{10}{6}\biggl(\alpha^{i}(3\delta_{i}^{j})\alpha_{j} +
\overline{\alpha}(3)\alpha + \beta^{i}(-3\delta_{i}^{j})\beta_{j}
+ \overline{\beta}(-3)\beta \biggr)^{2}\nonumber \\
+\frac{40}{6}\biggl(\alpha^{i}(\frac{1}{2}\delta_{i}^{j})\alpha_{j}
+ \overline{\alpha}(\frac{1}{2})\alpha +
\beta^{i}(\frac{1}{2}\delta^{j}_{i})\beta_{j} +
\overline{\beta}(\frac{1}{2})\beta +
\gamma^{i}(-1\delta_{i}^{j})\gamma_{j} +
\overline{\gamma}(-1)\gamma \biggr)^{2}\nonumber \\
+40\alpha^{i}\beta^{j}d_{ijk}d^{klm}\alpha_{l}\beta_{m}
+40\beta^{i}\gamma^{j}d_{ijk}d^{klm}\beta_{l}\gamma_{m}
+40\alpha^{i}\gamma^{j}d_{ijk}d^{klm}\alpha_{l}\gamma_{m}\nonumber\\
+40(\overline{\alpha}\overline{\beta})(\alpha\beta) +
40(\overline{\beta}\overline{\gamma})(\beta\gamma) +
40(\overline{\gamma}\overline{\alpha})(\gamma\alpha).\end{aligned}$$ From the potential (88) we read the $F$-, $D$- and scalar soft terms. The $F$-terms are obtained from the superpotential $${ \cal W }(A^{i},B^{j},C^{k},A,B,C)
=\sqrt{40}d_{ijk}A^{i}B^{j}C^{k} + \sqrt{40}ABC.$$ The $D$-terms have the structure $$\frac{1}{2}D^{\alpha}D^{\alpha}+\frac{1}{2}D_{1}D_{1}+\frac{1}{2}D_{2}D_{2},$$ where $$D^{\alpha}= \frac{1}{\sqrt{3}}
\biggl(\alpha^{i}(G^{\alpha})_{i}^{j}\alpha_{j}
+\beta^{i}(G^{\alpha})_{i}^{j}\beta_{j}
+\gamma^{i}(G^{\alpha})_{i}^{j}\gamma_{j}\biggr),$$ $$D_{1}=
\sqrt{ \frac{10}{3} }\biggl(\alpha^{i}(3\delta_{i}^{j})\alpha_{j}
+ \overline{\alpha}(3)\alpha +
\beta^{i}(-3\delta_{i}^{j})\beta_{j} + \overline{\beta}(-3)\beta
\biggr)$$ and $$D_{2} = \sqrt{ \frac{40}{3}
}\biggl(\alpha^{i}(\frac{1}{2}\delta_{i}^{j})\alpha_{j} +
\overline{\alpha}(\frac{1}{2})\alpha +
\beta^{i}(\frac{1}{2}\delta^{j}_{i})\beta_{j} +
\overline{\beta}(\frac{1}{2})\beta +
\gamma^{i}(-1\delta_{i}^{j})\gamma_{j} +
\overline{\gamma}(-1)\gamma \biggr),$$ which correspond to the $E_{6} \times U(1)_{1} \times U(1)_{2}$ structure of the gauge group. The remaining terms are the trilinear and mass terms which break supersymmetry softly; they form the scalar SSB part of the Lagrangian, $$\begin{aligned}
\lefteqn{{\cal L}_{scalarSSB}= \biggl(
\frac{4R_{1}^{2}}{R_{2}^{2}R_{3}^{2}}-\frac{8}{R_{1}^{2}}
\biggr)(\alpha^{i}\alpha_{i}+\overline{\alpha}\alpha)
+\biggl(\frac{4R_{2}^{2}}{R_{1}^{2}R_{3}^{2}}-\frac{8}{R_{2}^{2}}\biggr)
(\beta^{i}\beta_{i}+\overline{\beta}\beta)}\nonumber\\
& &+\biggl(\frac{4R_{3}^{2}}{R_{1}^{2}R_{2}^{2}}-\frac{8}{R_{3}^{2}}\biggr)
(\gamma^{i}\gamma_{i}+\overline{\gamma}\gamma)\nonumber\\
& &+\biggl[\sqrt{2}80\biggl(\frac{R_{1}}{R_{2}R_{3}}+\frac{R_{2}}{R_{1}
R_{3}}+\frac{R_{3}}{R_{2}R_{1}}\biggr)d_{ijk}\alpha^{i}\beta^{j}\gamma^{k}
\nonumber \\
& &+\sqrt{2}80\biggl(\frac{R_{1}}{R_{2}R_{3}}+\frac{R_{2}}{R_{1}
R_{3}}+\frac{R_{3}}{R_{2}R_{1}}\biggr)\alpha\beta\gamma+
h.c\biggr].\end{aligned}$$
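The $40$-coefficient quartic terms of (88) can likewise be traced to the $F$-terms of the superpotential (89):
$$F_{A^{i}}=\sqrt{40}\,d_{ijk}B^{j}C^{k},\qquad F_{A}=\sqrt{40}\,BC,$$
so that, for instance, $\sum_{i}|F_{A^{i}}|^{2}=40\,\beta^{j}\gamma^{k}d_{ijk}d^{ilm}\beta_{l}\gamma_{m}$ and $|F_{A}|^{2}=40(\overline{\beta}\overline{\gamma})(\beta\gamma)$, in agreement with the corresponding terms of the potential (88); the remaining $40$-coefficient quartics follow in the same way from the other $F$-terms.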
Note that the potential (88) belongs to the case analyzed in subsection $2.3$ where $S$ has an image in $G$. Here $S=SU(3)$ has an image in $G=E_{8}$ [@LZ], so we conclude that the minimum of the potential is zero. Finally, in order to determine the gaugino mass, we calculate the V operator using appendix B. We find that the gauginos acquire a geometrical mass $$M=(1+3\tau)\frac{(R_{1}^{2}+R_{2}^{2}+R_{3}^{2})}{8\sqrt{R_{1}^{2}R_{2}^{2}R_{3}^{2}}}.$$ Note again that the chosen embedding satisfies the condition (30), and that the four-dimensional theory contains no term which does not belong to the supersymmetric $E_{6} \times U(1) \times U(1)$ gauge theory or to its SSB sector. The gaugino mass (92), as in the two previous models, has a contribution from the torsion of the coset space. A final remark concerning the gaugino masses in all three models reduced over six-dimensional non-symmetric coset spaces with torsion is that the adjustments required to obtain the [*canonical connection*]{} also lead to vanishing gaugino masses. Contrary to the gaugino mass term, the soft scalar terms of the SSB do not receive contributions from the torsion. This is due to the fact that gauge fields, contrary to fermions, do not couple to torsion.
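As a final numerical cross-check of ours, the gaugino masses (77) and (92) of the last two models coincide in the equal-radii limit $R_{1}=R_{2}=R_{3}=R$, both reducing to $3(1+3\tau)/(8R)$:

```python
# Gaugino masses of the Sp(4)/(SU(2)xU(1)) and SU(3)/(U(1)xU(1)) models.
def M_b(R1, R2, tau):            # eq. (77)
    return (1 + 3*tau) * (R2**2 + 2*R1**2) / (8 * R1**2 * R2)

def M_c(R1, R2, R3, tau):        # eq. (92)
    return (1 + 3*tau) * (R1**2 + R2**2 + R3**2) / (8 * (R1**2 * R2**2 * R3**2) ** 0.5)

R, tau = 2.0, 0.3
assert abs(M_b(R, R, tau) - M_c(R, R, R, tau)) < 1e-12
assert abs(M_b(R, R, tau) - 3 * (1 + 3*tau) / (8 * R)) < 1e-12
print("equal-radii limits agree")
```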
Concluding the present subsection, we would like to note that the fact that, starting with an ${\cal N} = 1$ supersymmetric theory in ten dimensions, the CSDR leads to the field content of an ${\cal N} = 1$ supersymmetric theory in four dimensions when the six-dimensional coset spaces used are non-symmetric, can be seen by inspecting table 2. More specifically, one notices in table $2$ that when the coset spaces are non-symmetric the decompositions of the spinor $4$ and antispinor $\overline{4}$ of $SO(6)$ under $R$ contain a singlet, i.e. they have the form $1+r$ and $1+\overline{r}$, respectively, where $r$ is possibly reducible. The singlet under $R$ provides the four-dimensional theory with fermions transforming according to the adjoint, as was emphasized in subsection $2.3$; these correspond to gauginos, which obtain geometrical and torsion mass contributions as we have seen in all three cases of the present subsection $3.2$. Turning next to the decomposition of the vector $6$ of $SO(6)$ under $R$ in the non-symmetric cases, we recall that the vector can be constructed from the tensor product $4 \times 4$ and therefore has the form $r+\overline{r}$. Then the CSDR constraints tell us that the four-dimensional theory will contain the same representations of fermions and scalars, since both come from the adjoint representation of the gauge group $G$ and have to satisfy the same matching conditions under $R$. Therefore the field content of the four-dimensional theory is, as expected, ${\cal N} =1$ supersymmetric. To find out that furthermore the ${\cal N} =1$ supersymmetry is softly broken requires the lengthy and detailed analysis presented above.
Conclusions
===========
The CSDR was originally introduced as a scheme which, making use of higher dimensions, incorporates in a unified manner the gauge and the ad-hoc Higgs sector of the spontaneously broken gauge theories in four dimensions [@Manton]. Next fermions were introduced in the scheme and the ad-hoc Yukawa interactions have also been included in the unified description [@Slansky; @Chapline].
Of particular interest for the construction of fully realistic theories in the framework of CSDR are the following virtues that complemented the original suggestion: (i) The possibility to obtain chiral fermions in four dimensions resulting from vector-like representations of the higher dimensional gauge theory [@Chapline; @Review]. This possibility can be realized due to the presence of non-trivial background gauge configurations which are introduced by the CSDR constructions [@Salam], (ii) The possibility to deform the metric of certain non-symmetric coset spaces and thereby obtain more than one scale [@Farakos; @Review; @Hanlon], (iii) The possibility to use coset spaces which are multiply connected. This can be achieved by exploiting the discrete symmetries of $S/R$ [@Kozimirov; @Review]. Then one might introduce topologically non-trivial gauge field configurations [@Zoupanos] with vanishing field strength and induce additional breaking of the gauge symmetry. This is the Hosotani mechanism [@Hosotani] applied to the CSDR.
To the above list has recently been added the interesting possibility that the popular softly broken supersymmetric four-dimensional chiral gauge theories might have their origin in a higher dimensional supersymmetric theory with only a vector supermultiplet [@Pman], which is dimensionally reduced over non-symmetric coset spaces.
In the present paper we have presented explicit and detailed examples of CSDR of a supersymmetric $E_{8}$ gauge theory over all possible six-dimensional coset spaces. Out of our study, two cases stand out for further study as candidates to describe realistically the observed low-energy world. Both are known GUTs containing three fermion families and scalars appropriate for the spontaneous electroweak breaking. One case is based on the reduction of $E_{8}$ over the symmetric coset space $SU(3) \times SU(2)/SU(2) \times U(1) \times U(1)$ and leads to an $SO(10)$-type non-supersymmetric GUT in four dimensions. The other is based on the reduction of the same ten-dimensional gauge theory over the non-symmetric coset space $SU(3)/U(1)\times U(1)$ and leads to an $E_{6}$-type softly broken supersymmetric GUT in four dimensions. Both require some additional mechanism to break the four-dimensional GUT gauge group. Such a possibility is offered by the Hosotani mechanism mentioned already, and in both cases there exist discrete symmetries acting freely on the corresponding coset spaces that can be used. We plan to return with a complete analysis of the possibilities to extract viable phenomenology from both models.
The current discussion on higher dimensional theories with large extra dimensions provides a new framework in which to examine the CSDR further. An obvious opportunity is a reexamination of the CSDR over symmetric coset spaces. The fact that the four-dimensional scalar potential obtained from the reduction over symmetric coset spaces is tachyonic and appropriate for the electroweak symmetry breaking excludes radii of the order of the inverse GUT or Planck scales, but not radii of inverse TeV size. Similarly, it is worth reexamining the cases in which $S$ can be embedded in the higher dimensional gauge group $G$, so that the final gauge group after spontaneous symmetry breaking can be determined group-theoretically. Again the spontaneous symmetry breaking is appropriate for the electroweak symmetry breaking, while there are no known examples in which such a breaking is suitable for the GUT breaking. These latter cases offer the additional advantage that the resulting four-dimensional theory has vanishing cosmological constant. Finally, the classical treatment used in the CSDR is justified in the case of large radii, which are far from the scales at which the quantum effects of gravity become important.\
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank L. Alvarez-Gaume, C. Bachas, G. L. Cardoso, P. Forgacs, A. Kehagias, C. Kounnas, G. Koutsoumbas, D. Luest and D. Suematsu for useful discussions.
[ ]{}\
In this appendix we collect the tables of the six-dimensional coset spaces $S/R$ with $S$ simple or semisimple and ${\rm rank}\,S={\rm rank}\,R$, and the tables of the commutation relations needed for our calculations.\
[|l l l|]{}\
\
$S/R$ & $SO(6)$ vector & $SO(6)$ spinor\
$SO(7)/SO(6)$ & $6$ & $4$\
$SU(4)/SU(3) \times U(1)$ & $3_{-2}+\overline{3}_{2}$ & $1_{3}+3_{-1}$\
$Sp(4)/(SU(2) \times U(1))_{max}$ & $3_{-2}+3_{2}$ & $1_{3}+3_{-1}$\
$SU(3) \times SU(2)/SU(2)
\times U(1) \times U(1)$ & $1_{0,2a}+1_{0,-2a}$ & $1_{b,-a}+1_{-b,-a}$\
& $+2_{b,0}+2_{-b,0}$ & $+2_{0,a}$\
$Sp(4) \times SU(2)/SU(2) \times SU(2) \times U(1)$ & $(1,1)_{2}+(1,1)_{-2}$ & $(2,1)_{1}+(1,2)_{-1}$\
& $+(2,2)_{0}$ &\
$(SU(3)/U(1))^{3}$ & $(2a,0,0)+(-2a,0,0)$ & $(a,b,c)+(-a,-b,c)$\
& $+(0,2b,0)+(0,-2b,0)$ & $+(-a,b,-c)+(a,-b,-c)$\
& $(0,0,2c)+(0,0,-2c)$ &\
\
[|l l l|]{}\
\
$S/R$ & $SO(6)$ vector & $SO(6)$ spinor\
$G_{2}/SU(3)$ & $3+\overline{3}$ & $1+3$\
$Sp(4)/(SU(2) \times U(1))_{non-max}$ & $1_{2}+1_{-2}+2_{1}+2_{-1}$ & $1_{0}+1_{2}+2_{-1}$\
$SU(3)/U(1) \times U(1)$ & $(a,c)+(b,d)+(a+b,c+d)$ & $(0,0)+(a,c)+(b,d)$\
& $+(-a,-c)+(-b,-d)$ & $+(-a-b,-c-d)$\
& $+(-a-b,-c-d)$ &\
\
[|l|l|]{}\
\
\
$\left[ Q^{a},Q^{b} \right] = 2i f^{abc}Q^{c}$ & $\left[
Q^{a},Q^{\rho} \right] = -(\lambda^{a})^{\rho}_{\sigma}Q^{\sigma}$\
$\left[ Q^{\rho},Q_{\sigma} \right] =
-(\lambda^{a})^{\rho}_{\sigma}Q^{a}$ & $\left[ Q^{\rho},Q^{\sigma}
\right] = 2\sqrt{\frac{2}{3}}\epsilon^{\rho\sigma\tau}Q_{\tau}$\
\
The normalization is $$TrQ^{a}Q^{b}=2\delta^{ab},
TrQ^{\rho}Q_{\sigma}=2\delta^{\rho}_{\sigma} .$$
[|l|l|]{}\
\
\
$\left[ Q^{a},Q^{b} \right]=2if^{abc}Q^{c}$ & $\left[
Q^{\alpha},Q^{\beta}
\right]=2ig^{\alpha\beta\gamma}Q^{\gamma}$\
$\left[ Q^{a},Q^{i\rho}
\right]=-(\lambda^{a})^{\rho}_{\sigma}\delta^{i}_{j}Q^{j\sigma}$ & $\left[ Q^{i\rho},Q^{j\sigma} \right]=\frac{1}{\sqrt{6}}
\epsilon^{\rho\sigma\tau}d^{ijk}Q_{k\tau}$\
$\left[ Q^{i\rho},Q_{j\sigma}
\right]=-(\lambda^{a})^{\rho}_{\sigma}\delta^{i}_{j}Q^{a}+$ & $
\left[ Q^{\alpha},Q^{i\rho}
\right]=(G^{\alpha})^{i}_{j}\delta^{\rho}_{\sigma}Q^{j\sigma}$\
$\frac{1}{6}\delta^{\rho}_{\sigma}(G^{\alpha})^{i}_{j}Q^{\alpha}$ &\
\
The normalization is $$TrQ^{a}Q^{b}=2\delta^{ab},\
TrQ^{\alpha}Q^{\beta}=12\delta^{\alpha\beta},\
TrQ^{i\rho}Q_{j\sigma}=2\delta^{i}_{j}\delta^{\rho}_{\sigma}.$$
[|l|l|l|]{}\
\
\
$\left[Q_{\rho},Q_{\sigma}\right]=
2i\epsilon_{\rho\sigma\tau}Q_{\tau}$ & $\left[Q,Q_{a}\right] =
Q_{a}$ & $\left[Q_{\rho},Q_{a}\right] =
(\tau_{\rho})^{b}_{a}Q_{b}$\
$\left[Q,Q_{+}\right] = 2Q_{+}$ & $\left[Q_{a},Q^{+}\right] = -\sqrt{2}\epsilon_{ab}Q^{b}$ & $\left[Q_{a},Q_{b}\right] = \sqrt{2}\epsilon_{ab}Q_{+}$\
$\left[Q_{a},Q^{b}\right] =
\delta^{a}_{b}Q+(\tau_{\rho})^{b}_{a}Q_{\rho}$ & $\left[Q_{+},Q^{+}\right] = 2Q$ &\
\
The normalization in the above table is given by\
$Tr(Q_{\rho}Q_{\sigma})=2\delta_{\rho\sigma}$, $Tr(Q_{a}Q^{b})=2\delta^{b}_{a}$, $Tr(Q_{+}Q^{+})=2$.\
[|l|l|]{}\
\
\
$\left[G^{\alpha},G^{\beta}\right] =
2ig^{\alpha\beta\gamma}G^{\gamma}$ & $\left[G^{\rho},G^{\sigma}\right] =
2i\epsilon^{\rho\sigma\tau}G^{\tau}$\
$\left[G,G^{a}\right] =
\sqrt{3}G^{a}$ & $\left[G,G^{j}\right] = -\frac{2}{\sqrt{3}}G^{j}$\
$\left[G,G^{aj}\right] = \frac{1}{\sqrt{3}}G^{aj}$ & $\left[G^{\rho},G^{a}\right] = -(\tau^{\rho})^{a}_{b}G^{b}$\
$\left[G^{\alpha},G^{i}\right] = -(G^{\alpha})^{i}_{j}G^{j}$ & $\left[G^{\alpha},G^{ai}\right] = -(G^{\alpha})^{i}_{j}G^{aj}$\
$\left[G^{a},G^{j}\right] = \sqrt{2}G^{aj}$ & $\left[G_{a},G^{bj}\right] = \sqrt{2}\delta^{b}_{a}G^{j}$\
$\left[G^{i},G^{aj}\right] =
-\sqrt{\frac{5}{7}}\epsilon^{ab}d^{ijk}G_{bk}$ & $\left[G^{ai},G^{bj}\right] =
\sqrt{\frac{5}{7}}\epsilon^{ab}d^{ijk}G_{k}$\
$\left[G^{i},G_{aj}\right] = \sqrt{2}\delta^{i}_{j}G_{a}$ & $\left[G^{a},G_{b}\right] =
\sqrt{3}\delta^{a}_{b}G-(\tau^{\rho})^{a}_{b}G^{\rho}$\
$\left[G^{i},G_{j}\right] = -\frac{2}{\sqrt{3}}\delta^{i}_{j}G
+(G^{\alpha})^{i}_{j}G^{\alpha}$ & $\left[G^{ai},G_{bj}\right] =
\frac{1}{\sqrt{3}}\delta^{i}_{j}\delta^{a}_{b}G
+\delta^{a}_{b}(G^{\alpha})^{i}_{j}G^{\alpha}
-\delta^{i}_{j}(\tau^{\rho})^{a}_{b}G^{\rho}$\
\
The normalization in the above table is as follows\
$$Tr(G^{\rho}G_{\sigma})=2\delta^{\rho\sigma},
Tr(G^{\alpha}G^{\beta})=12\delta^{\alpha\beta},
Tr(G^{a}G_{b})=2\delta^{a}_{b}$$ $$Tr(GG)=2,
Tr(G^{i}G_{j})=2\delta^{i}_{j},
Tr(G^{ai}G_{bj})=2\delta^{a}_{b}\delta^{i}_{j}.$$\
[|l|l|l|]{}\
\
\
$\left[Q_{1},Q_{0}\right]=\sqrt{3}Q_{1}$ & $\left[Q_{1},Q'_{0}\right]=Q_{1}$ & $\left[Q_{2},Q_{0}\right]=-\sqrt{3}Q_{2}$\
$\left[Q_{2},Q'_{0}\right]=Q_{2}$ & $\left[Q_{3},Q_{0}\right]=0$ & $\left[Q_{3},Q'_{0}\right]=-2Q_{3}$\
$\left[Q_{1},Q^{1}\right]=-\sqrt{3}Q_{0}-Q'_{0}$ & $\left[Q_{2},Q^{2}\right]=\sqrt{3}Q_{0}-Q'_{0}$ & $\left[Q_{3},Q^{3}\right]=2Q'_{0}$\
$\left[Q_{1},Q_{2}\right]=\sqrt{2}Q^{3}$ & $\left[Q_{2},Q_{3}\right]=\sqrt{2}Q^{1}$ & $\left[Q_{3},Q_{1}\right]=\sqrt{2}Q^{2}$\
\
The normalization in the above table is $$Tr(Q_{0}Q_{0})=Tr(Q'_{0}Q'_{0})=Tr(Q_{1}Q^{1})=Tr(Q_{2}Q^{2})=Tr(Q_{3}Q^{3})=2$$
[|l|l|l|]{}\
\
\
$\left[Q_{1},Q_{0}\right]=\sqrt{30}Q_{1}$ & $\left[Q_{1},Q'_{0}\right]=\sqrt{10}Q_{1}$ & $\left[Q_{2},Q_{0}\right]=-\sqrt{30}Q_{2}$\
$\left[Q_{2},Q'_{0}\right]=\sqrt{10}Q_{2}$ & $\left[Q_{3},Q_{0}\right]=0$ & $\left[Q_{3},Q'_{0}\right]=-2\sqrt{10}Q_{3}$\
$\left[Q_{1},Q^{1}\right]=-\sqrt{30}Q_{0}-\sqrt{10}Q'_{0}$ & $\left[Q_{2},Q^{2}\right]=\sqrt{30}Q_{0}-\sqrt{10}Q'_{0}$ & $\left[Q_{3},Q^{3}\right]=2\sqrt{10}Q'_{0}$\
$\left[Q_{1},Q_{2}\right]=\sqrt{20}Q^{3}$ & $\left[Q_{2},Q_{3}\right]=\sqrt{20}Q^{1}$ & $\left[Q_{3},Q_{1}\right]=\sqrt{20}Q^{2}$\
$\left[Q_{1i},Q_{0}\right]=\sqrt{30}Q_{1i}$ & $\left[Q_{1i},Q'_{0}\right]=\sqrt{10}Q_{1i}$ & $\left[Q_{2i},Q_{0}\right]=-\sqrt{30}Q_{2i}$\
$\left[Q_{2i},Q'_{0}\right]=\sqrt{10}Q_{2i}$ & $\left[Q_{3i},Q_{0}\right]=0$ & $\left[Q_{3i},Q'_{0}\right]=-2\sqrt{10}Q_{3i}$\
$\left[Q_{1i},Q_{2j}\right]=\sqrt{20}d_{ijk}Q^{3k}$ & $\left[Q_{2i},Q_{3j}\right]=\sqrt{20}d_{ijk}Q^{1k}$ & $\left[Q_{3i},Q_{1j}\right]=\sqrt{20}d_{ijk}Q^{2k}$\
$\left[Q^{\alpha},Q^{\beta}\right]=2ig^{\alpha\beta\gamma}Q^{\gamma}$ & $\left[Q^{\alpha},Q_{0}\right]=0$ & $\left[Q^{\alpha},Q'_{0}\right]=0$\
$\left[Q^{\alpha},Q_{1i}\right]=-(G^{\alpha})^{j}_{i}Q_{1j}$ & $\left[Q^{\alpha},Q_{2i}\right]=-(G^{\alpha})^{j}_{i}Q_{2j}$ & $\left[Q^{\alpha},Q_{3i}\right]=-(G^{\alpha})^{j}_{i}Q_{3j}$\
\
Table 8.2\
Further non-trivial commutation relations of $E_{8}$ according to the decomposition given in eq.(84)\
$\left[Q_{1i},Q^{1j}\right]=-\frac{1}{6}(G^{\alpha})^{j}_{i}Q^{\alpha}-\sqrt{30}\delta^{j}_{i}Q_{0}-\sqrt{10}\delta^{j}_{i}Q'_{0}$\
$\left[Q_{2i},Q^{2j}\right]=-\frac{1}{6}(G^{\alpha})^{j}_{i}Q^{\alpha}+\sqrt{30}\delta^{j}_{i}Q_{0}-\sqrt{10}\delta^{j}_{i}Q'_{0}$\
$\left[Q_{3i},Q^{3j}\right]=-\frac{1}{6}(G^{\alpha})^{j}_{i}Q^{\alpha}+2\sqrt{10}\delta^{j}_{i}Q'_{0}$\
\
The normalization is $$Tr(Q_{0}Q_{0})=Tr(Q'_{0}Q'_{0})=Tr(Q_{1}Q^{1})=Tr(Q_{2}Q^{2})=Tr(Q_{3}Q^{3})=2.$$ $$Tr(Q_{1i}Q^{1j})=Tr(Q_{2i}Q^{2j})=Tr(Q_{3i}Q^{3j})=2\delta^{j}_{i}.$$ $$Tr(Q^{\alpha}Q^{\beta})=12\delta^{\alpha\beta}.$$
\
Here we give some details related to the calculation of the V operator in the case of $SU(3)/U(1) \times U(1)$ and the gaugino mass, eq.(92).
To calculate the V operator in the case of $SU(3)/U(1) \times
U(1)$ we use the real metric of the coset, $g_{ab}=diag(a,a,b,b,c,c)$ with $a=R_{1}^{2},b=R_{2}^{2},c=R_{3}^{2}$. Using the structure constants of $SU(3)$, $ f_{12}^{\ \ 3}=2 $, $ f_{45}^{\ \
8}=f_{67}^{\ \ 8}=\sqrt{3} $, $ f_{24}^{\ \ 6} =f_{14}^{\ \
7}=f_{25}^{\ \ 7}=-f_{36}^{\ \ 7}=-f_{15}^{\ \ 6}=-f_{34}^{\ \
5}=1$, (where the indices 3 and 8 correspond to the $U(1) \times
U(1)$ and the rest are the coset indices) we calculate the components of the $D_{abc}$:\
$D_{523}=D_{613}=D_{624}=D_{541}=-D_{514}=-D_{532}=-D_{631}=-D_{642}=\frac{1}{2}(c-a-b).$\
$D_{235}=D_{136}=D_{246}=D_{154}=-D_{145}=-D_{253}=-D_{163}=-D_{264}=\frac{1}{2}(a-b-c).$\
$D_{352}=D_{361}=D_{462}=D_{415}=-D_{451}=-D_{325}=-D_{316}=-D_{
426}=\frac{1}{2}(b-c-a).$\
From the $D$’s we calculate the contorsion tensor $$\Sigma_{abc}=2\tau(D_{abc}+D_{bca}-D_{cba}),$$ and then the tensor $$G_{abc}=D_{abc}+\frac{1}{2}\Sigma_{abc}$$ which is\
$G_{523}=G_{613}=G_{624}=G_{541}=-G_{514}=-G_{532}=-G_{631}=-G_{642}=
\frac{1}{2}[(1-\tau)c-(1+\tau)a-(1+\tau)b].$\
$G_{235}=G_{136}=G_{246}=G_{154}=-G_{145}=-G_{253}=-G_{163}=-G_{264}=
\frac{1}{2}[(1-\tau)a-(1+\tau)b-(1+\tau)c].$\
$G_{352}=G_{361}=G_{462}=G_{415}=-G_{451}=-G_{325}=-G_{316}=-G_{426}=
\frac{1}{2}[-(1+\tau)a+(1-\tau)b-(1+\tau)c].$\
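As a consistency check (not in the original), the first entry can be traced through the definitions above, using $D_{523}=\frac{1}{2}(c-a-b)$, $D_{235}=\frac{1}{2}(a-b-c)$ and $D_{325}=-D_{352}=\frac{1}{2}(a-b+c)$:

```latex
\Sigma_{523} = 2\tau\,\bigl(D_{523}+D_{235}-D_{325}\bigr)
             = \tau\,\bigl[(c-a-b)+(a-b-c)-(a-b+c)\bigr]
             = -\tau\,(a+b+c),
\qquad
G_{523} = D_{523}+\tfrac{1}{2}\Sigma_{523}
        = \tfrac{1}{2}(c-a-b)-\tfrac{1}{2}\tau(a+b+c)
        = \tfrac{1}{2}\bigl[(1-\tau)c-(1+\tau)a-(1+\tau)b\bigr],
```

which reproduces the first entry of the list above; the remaining entries follow in the same way.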
In addition we need the gamma matrices. In ten dimensions we have $
\{\Gamma^{\mu},\Gamma^{\nu}\} = 2 \eta^{\mu\nu}$ with $\Gamma^{\mu}= \gamma^{\mu}\otimes I_{8}$ and $\{\Gamma^{a},\Gamma^{b}\} = -2g^{ab}$, where $$\Gamma^{a} =
\frac{1}{\sqrt{r_{a}}}\gamma_{5}\otimes\left[\begin{array}{cc}0&\overline{\gamma}^{a}\\
\overline{\gamma}^{a}&0 \end{array} \right]$$ with $a=1,2,3,5,6$ and $$\Gamma^{4}=\frac{1}{\sqrt{r_{4}}}\gamma_{5}\otimes\left[\begin{array}{cc}0&iI_{4}\\
iI_{4}&0 \end{array} \right].$$ In the present case we have $r_{1}=r_{2}=a$, $r_{3}=r_{4}=b$ and $r_{5}=r_{6}=c$. The $\overline{\gamma}^{a}$ matrices are given by $
\overline{\gamma}^{1}=\sigma^{1}\otimes\sigma^{2}$ , $\overline{\gamma}^{2}=\sigma^{2}\otimes\sigma^{2}$, $\overline{\gamma}^{3}=-I_{2}\otimes\sigma^{3}$, $\overline{\gamma}^{5}=\sigma^{3}\otimes\sigma^{2}$, $\overline{\gamma}^{6}=-I_{2}\otimes\sigma^{1}$. Using these matrices we calculate $\Sigma^{ab}=\frac{1}{4}[\Gamma^{a},\Gamma^{b}]$ and then $G_{abc}\Gamma^{a}\Sigma^{bc}$.
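As a quick numerical sanity check (not part of the original text), the $\sigma\otimes\sigma$ representation listed above can be verified to close the Euclidean Clifford algebra, $\{\overline{\gamma}^{a},\overline{\gamma}^{b}\}=2\delta^{ab}I_{4}$ for $a,b\in\{1,2,3,5,6\}$. A minimal sketch in plain Python (no external libraries):

```python
# Verify {gbar^a, gbar^b} = 2 delta^{ab} I_4 for the five sigma-product
# matrices gbar^1, gbar^2, gbar^3, gbar^5, gbar^6 given in the text.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    # Kronecker product of square matrices given as nested lists.
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

# gbar^1, gbar^2, gbar^3, gbar^5, gbar^6 in the order listed in the text.
gbar = [kron(s1, s2), kron(s2, s2), scale(-1, kron(I2, s3)),
        kron(s3, s2), scale(-1, kron(I2, s1))]

for a in range(5):
    for b in range(5):
        anti = add(matmul(gbar[a], gbar[b]), matmul(gbar[b], gbar[a]))
        target = 2 if a == b else 0
        assert all(abs(anti[i][j] - (target if i == j else 0)) < 1e-12
                   for i in range(4) for j in range(4))
print("Clifford algebra relations verified")
```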
[99]{} P. Forgacs and N. S. Manton, Commun. Math. Phys. [**72**]{}, 15(1980); E. Witten, Phys. Rev. Lett. [**38**]{}, 121(1977). D. Kapetanakis and G. Zoupanos, Phys. Rept. [**C 219**]{}, 1(1992). Y. Kubyshin, J. M. Mourao, G. Rudolph and I. P. Volobujev, Lecture notes in Physics, [**Vol. 349**]{}, Springer Verlag, Heidelberg (1989). N. S. Manton, Nucl. Phys. [**B 193**]{}, 502(1981). See e.g. M. B. Green, J. H. Schwarz and E. Witten, Superstring Theory, Cambridge University Press (1987); D. Luest and S. Theisen, Lectures on String Theory, Lecture Notes in Physics, [**Vol. 346**]{}, Springer Verlag, Heidelberg (1989). G. Chapline and R. Slansky, Nucl. Phys. [**B 204**]{}, 461(1982). P. Manousselis and G. Zoupanos, Phys. Lett. [**B 504**]{}, 122(2001); ibid., Phys. Lett. [**B 518**]{}, 171(2001). S. Dimopoulos and H. Georgi, Phys. Lett. [**B 117**]{}, 287(1982); N. Sakai, Z. Phys. [**C 11**]{}, 155(1981). D. Kapetanakis, M. Mondragón, and G. Zoupanos, Zeit. f. Phys. [**C 60**]{}, 181(1993); M. Mondragón, and G. Zoupanos, Nucl. Phys. [**B**]{} (Proc. Suppl.) [**37 C**]{}, 98(1995). J. Kubo, M. Mondragón and G. Zoupanos, Nucl. Phys. [**B 424**]{}, 502(1994); ibid, Phys. Lett. [**B 389**]{}, 523(1996); T. Kobayashi [*et.al.*]{}, Nucl. Phys. [**B 511**]{}, 45(1998); For an extended discussion and a complete list of references see: J. Kubo, M. Mondragón, and G. Zoupanos, Acta Phys. Polon. [**B 27**]{} 3991(1997). See e.g. T. Kobayashi [*et.al.*]{}, Acta Phys. Polon. [**B 30**]{} 2013(1999); ibid., Proc. of HEP99, Tempere 1999, p.804. T. R. Taylor and G. Veneziano, Phys. Lett. [**B 212**]{}, 147(1988). K. R. Dienes, E. Dudas and T. Gherghetta, Nucl. Phys. [**B 537**]{}, 47(1999). T. Kobayashi, J. Kubo, M. Mondragon and G. Zoupanos, Nucl.Phys. [**B 550**]{}, 99(1999); J. Kubo, H. Terao and G. Zoupanos, Nucl. Phys. [**B 574**]{}, 495(2000); ibid., hep-ph/0010069. L. Castellani, Annals Phys. [**287**]{}, 1(2001). D. Luest, Nucl. Phys. [**B 276**]{}, 220(1986); L. 
Castellani and D. Luest, Nucl. Phys. [**B 296**]{}, 143(1988). A. M. Gavrilik, “Coset-space string compactification leading to 14 subcritical dimensions,” hep-th/9911120. F. Muller-Hoissen and R. Stuckl, Class. Quant. Grav. [**5**]{}, 27(1988); N. A. Batakis, et al. Phys. Lett. [**B 220**]{}, 513(1989). C. Wetterich, Nucl. Phys. [**B 222**]{}, 20 (1985); L. Palla, Z.Phys. [**C 24**]{}, 345(1983); K. Pilch and A. N. Schellekens, J.Math. Phys. [**25**]{}, 3455(1984); P. Forgacs, Z. Horvath and L. Palla, Z. Phys. [**C 30**]{}, 261(1986); K. J. Barnes, P. Forgacs, M. Surridge and G. Zoupanos, Z. Phys. [**C 33**]{}, 427(1987). E. Witten, Phys. Lett. [**B 144**]{}, 351(1984). K. Pilch and A. N. Schellekens, Nucl. Phys. [**B 259**]{}, 673(1985); D. Luest, Nucl. Phys. [**B 276**]{}, 220(1985); D. Kapetanakis and G. Zoupanos, Phys. Lett. [**B 249**]{}, 66(1990). J. Harnad, S. Shnider and L. Vinet, J. Math. Phys. [**20**]{}, 931(1979); [**21**]{}, 2719(1980); J. Harnad, S. Shnider and J. Taffel, Lett. Math. Phys. [**4**]{}, 107(1980). G. Chapline and N. S. Manton, Nucl. Phys. [**B 184**]{}, 391(1981); F. A. Bais, K. J. Barnes, P. Forgacs and G. Zoupanos, Nucl. Phys. [**B 263**]{}, 557(1986); K. Farakos, G. Koutsoumbas, M. Surridge and G. Zoupanos, Nucl. Phys. [**B 291**]{}, 128(1987); ibid., Phys. Lett. [**B 191**]{}, 135(1987); Y. A. Kubyshin, J. M. Mourao, I. P. Volobujev, Int. J. Mod.Phys. [**A 4**]{}, 151(1989). K. Farakos, G. Koutsoumbas, M. Surridge and G. Zoupanos, Nucl.Phys. [**B 291**]{}, 128(1987); ibid., Phys. Lett. [**B 191**]{}, 135(1987). D. Kapetanakis and G. Zoupanos, Phys. Lett. [**B 249**]{}, 73(1990); ibid., Z. Phys. [**C 56**]{}, 91(1992). See e.g. I. Antoniadis and K. Benakli, “Large dimensions and string physics in future colliders", hep-ph/0007226 and references therein; Nima Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys.Lett. [**B 249**]{}, 263(1998); Nima Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Rev. [**D 59**]{}, 086004(1999); G. Dvali, S. 
Randjbar-Daemi and R. Tabbash, “The origin of spontaneous symmetry breaking in theories with large extra dimensions", hep-ph/0102307; Y. A. Kubyshin, “Models with Extra Dimensions and their Phenomenology", hep-ph/0111027. E. Fahri and L. Susskind, Phys. Rept. [**C 74**]{}, 277(1981); C. T. Hill, Proc. of Nagoya Int. Workshop (1996), p. 54, and references therein; ibid., Phys. Lett. [**B 266**]{}, 419(1991); ibid., Nucl. Phys. [**B 345**]{}, 483(1995). W. Marciano, Phys. Rev. [**D 21**]{}, 315(1980); G. Zoupanos, Phys. Lett. [**B 129**]{}, 315(1983); P. Forgacs and G. Zoupanos, Phys. Lett. [**B 148**]{}, 99(1984); D. Luest et al., Nucl.Phys. [**B 268**]{}, 49(1986). G. Triantaphyllou, J. Phys. [**G 26**]{}, 99(2000); G. Triantaphyllou and G. Zoupanos, Phys. Lett. [**B 489**]{}, 420(2000); G. Triantaphyllou, Mod. Phys. Lett. [**A 16**]{}, 53(2001). M. Green and J. H. Schwarz, Phys. Lett. [**B 149**]{}, 117(1984); L. Alvarez-Gaume and E. Witten, Nucl. Phys. [**B 234**]{}, 269(1983). P. Horava and E. Witten, Nucl. Phys. [**B 460**]{}, 506(1996); E. Witten, Nucl. Phys. [**B 471**]{}, 135(1996); for a review see C. Munoz in Proc. of Corfu Summer Institute on EPP 1998, hep-th/9906152. T. W. Kephart and M. T. Vaughn, Annals of Physics [**145**]{}, 162(1983). D. Luest and G. Zoupanos, Phys. Lett. [**B 165**]{}, 309(1985). E. Witten, Phys. Lett. [**B 155**]{}, 151(1985). S. Randjbar-Daemi, A. Salam and J. Strathdee, Nucl. Phys. [**B 242**]{}, 447(1983); Phys. Lett. [**B 124**]{}, 345(1983); P. Forgacs, Z. Horvath and L. Palla, Phys. Lett. [**B 147**]{}, 311(1984); [**B 148**]{}, 99(1984); A. N. Schellekens, Nucl.Phys. [**B 248**]{}, 706(1984); R. Coquereaux and A. Jadczyk, Commun. Math. Phys. [**98**]{}, 79(1985). B. E. Hanlon and G. C. Joshi, Phys. Lett. [**B 298**]{}, 312(1993). N. Kozimirov, V. A. Kuzmin and I. I. Tkachev, Sov. J. Nucl.Phys. [**49**]{}, 164(1989); D. Kapetanakis and G. Zoupanos, Phys.Lett. [**B 232**]{}, 104(1989). G. Zoupanos, Phys. Lett. 
[**B 201**]{}, 301(1988). Y. Hosotani, Phys. Lett. [**B 126**]{}, 309(1983); [**B 129**]{}, 193(1983); E. Witten, Nucl. Phys. [**B 258**]{}, 75(1985); J. D. Breit, B. A. Ovrut and G. C. Segre, Phys. Lett. [**B 158**]{}, 33(1985); B. Greene, K. Kirlkin and P. J. Miron, Nucl.Phys. [**B 274**]{}, 575(1986); B. Greene, K. Kirlkin, P. J. Miron and G. G. Ross, Nucl. Phys. [**B 278**]{}, 667(1986).
[^1]: The coset space can be considered as a complex three-dimensional space having coordinate indices $a,+$ with $a=1,2$ and metric $g^{1\overline{1}}=g^{2\overline{2}}=\frac{1}{R_{1}^{2}}$ and $g^{+\overline{+}}=\frac{1}{R_{2}^{2}}$. The latter metric has been used to write the potential in the form given in eq.(50).
[^2]: To bring the potential into this form we have used (A.22) of ref.[@Review] and relations (7),(8) of ref.[@Witten2].
[^3]: The complex metric that was used is $g^{1\overline{1}}=\frac{1}{R_{1}^{2}},g^{2\overline{2}}=\frac{1}{R_{2}^{2}},
g^{3\overline{3}}=\frac{1}{R_{3}^{2}}$.
Inheritance and Variation
A tall plant with round seeds (TTRR) was crossed with a dwarf plant with wrinkled seeds (ttrr). The F1 plants are tall with round seeds.
What is the proportion of dwarf plants with wrinkled seeds in the F2 generation
1/2
1/4
Zero
1/16
Two crosses between the same pair of genotypes / phenotypes, in which the sources of the gametes are reversed in one cross, are called
Dihybrid cross
test cross
reciprocal cross
Reverse cross
Mendel formulated the law of purity of gametes on the basis of
test cross
Dihybrid cross
Monohybrid cross
back cross
Tt × tt is
back cross
test cross
reciprocal cross
hybridization
Which mutation / variation is not hereditary
Gametic
Germinal
Somatic
Genetic
How many types of gametes will be produced by an organism with an AaBBCc or AABbCc genotype
9
4
6
3
A test cross is performed to know
Success of intervarietal and interspecific cross
Linkage between two traits
Genotype of F2 dominants
Number of alleles of a gene
An individual having different alleles of a gene is
Heterozygous
Mosaic
Homozygous
diploid
If a red eyed (dominant) fly is mated with a white eyed (recessive) fly, the ratio of red to white eyed flies in the F2 generation would be
2 : 2
3 : 1
1 : 3
2 : 1
Children of father with 'O' blood group and mother with 'AB' blood group would be
A or B
O
O or AB
AB
A cross involving parents differing only in one trait is
diploid
Dihybrid
Monohybrid
haploid
A cross between AaBb and aabb yields
AABB and aabb
AaBb, Aabb, aaBb and aabb
aabb
Mendel's work was republished in 'Flora' by
Correns
All the above
Tschermak
De Vries
Grain colour of wheat is determined by three pairs of polygenes. A cross was made between AABBCC (coloured) and aabbcc (light).
In the F2 generation, what percent of progeny would resemble the parents
one third
less than 5%
Half
three fourth
Which one is exception to Mendel's principle of dominance
Maize
Garden Pea
Wild Pea
Mirabilis
Which pair of features represents polygenic inheritance
Human height and skin colour
Human eye colour and sickle anaemia
Hair pigments of mouse and tongue rolling in humans
Alleles of a gene are found on
Same chromosome
Any chromosomes
homologous chromosomes
Non homologous chromosomes
In Lathyrus odoratus, a cross between two purple flowered plants gives a pink/white progeny.
The Adobe Portable Document Format (PDF) is a file format that represents documents in a manner independent of the creator application and the device used to view or print. One PDF version currently in use is Adobe PDF v 1.7. A PDF document, e.g., in the form of a data file, contains one or more pages that can express static compositions of text, graphics and images, as well as interactive objects such as form fields and annotations. PDF defines several coordinate spaces in which the coordinates specifying text, graphics and image objects are interpreted, including device space and user space. PDF page content is viewed or printed on a raster output device with a built-in coordinate system called device space. To avoid device-dependent effects of specifying coordinates with respect to a single device space, PDF defines a device-independent coordinate system called user space. The user space origin, orientation, and unit size are established for each page of the PDF document from the page's crop box rectangle, intrinsic rotation and resolution. PDF user space is transformed to device space for viewing and printing using a transformation matrix that specifies a linear mapping of two-dimensional coordinates including translation, rotation, reflection, scaling and skewing.
A coordinate P that is expressed with an ordered pair of real numbers (x, y) can also be expressed as the row matrix [x y 1]. The row matrix form facilitates its transformation using a 3-by-3 affine transformation matrix M resulting in a new coordinate P′.
M = [ a b 0 ; c d 0 ; e f 1 ], P = [ x y 1 ], P′ = [ x′ y′ 1 ]
P′ = P M
x′ = a × x + c × y + e
y′ = b × x + d × y + f
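For illustration (not part of the patent text, and `transform_point` is a hypothetical helper name), the row-vector mapping can be sketched directly:

```python
# Apply the row-vector form [x y 1] * M for an affine matrix
# M = [[a, b, 0], [c, d, 0], [e, f, 1]].

def transform_point(x, y, M):
    (a, b, _), (c, d, _), (e, f, _) = M
    return (a * x + c * y + e, b * x + d * y + f)

# Translation by (5, -3) applied to the point (10, 20):
M = [[1, 0, 0], [0, 1, 0], [5, -3, 1]]
print(transform_point(10, 20, M))  # (15, 17)
```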
PDF rectangles are specified using the coordinates for a pair of diagonally opposite corners. PDF rectangle sides are constrained to be perpendicular to the user space coordinate system axes. Consider for example the rectangle 1500 and corresponding render area RA shown in FIG. 15. The rectangle R defining the area RA can also be expressed with an array of offsets {left, bottom, right, top} that can be used to specify the corner coordinates of the rectangle:
Rbl = [left bottom 1], Rbr = [right bottom 1], Rtl = [left top 1], Rtr = [right top 1]
PDF rectangles are transformed differently than PDF coordinates. A PDF rectangle R transformed with 3-by-3 affine transformation matrix M is expected to produce the smallest rectangle R′ that contains all corner coordinates of R transformed with M. This operation is represented with the function g(R, M) where R is a PDF rectangle and M is a 3-by-3 affine transformation matrix.R′=g(R,M)
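The text defines g(R, M) abstractly; a straightforward implementation (an assumption, not the PDFL's actual code) transforms all four corners and takes their axis-aligned bounding box:

```python
# g(R, M): transform the four corners of R with the affine matrix M and
# return the smallest axis-aligned rectangle containing the results.

def transform_point(x, y, M):
    (a, b, _), (c, d, _), (e, f, _) = M
    return (a * x + c * y + e, b * x + d * y + f)

def g(R, M):
    left, bottom, right, top = R
    corners = [transform_point(x, y, M)
               for x in (left, right) for y in (bottom, top)]
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    return (min(xs), min(ys), max(xs), max(ys))

# A 90-degree rotation turns a wide rectangle into a tall bounding box:
M_rot = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]
print(g((0, 0, 4, 2), M_rot))  # (-2, 0, 0, 4)
```

The bounding-box step is what distinguishes rectangle transformation from point transformation: under rotation or skew the transformed corners no longer form an axis-aligned rectangle, so the smallest enclosing one is returned.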
Consider an image processing software function library, such as the existing Adobe PDF Library (PDFL), which is a library of functions that is included in the Adobe® PDF Library software development kit (SDK) version 8.1 (PDFL SDK 8.1). Currently the PDFL functions accept, e.g., have an interface for receiving, at least two input parameters to customize how PDF graphics are rendered by one or more of the PDFL functions. The two inputs are (1) a Render Area RA and (2) a Render Matrix MRM. The Render Area RA is a PDF rectangle, expressed in user space units, that confines rendering to visible graphics within its boundaries. The Render Matrix MRM is a 3-by-3 affine transformation matrix, expressed in user space units, that transforms the user space coordinate system into a device coordinate system for drawing and/or other manipulation operations. A successful render operation quantizes transformed page graphics contained within the two-dimensional rectangular area as a two-dimensional raster image. See for example FIG. 16, which shows rectangle 1600, corresponding to rectangle 1500 of FIG. 15 after an exemplary rendering to device space. The raster image is segmented into rows of pixel data using its stride, which represents the number of bytes required to encode one row of pixel data. Finally, the two-dimensional raster image is streamed into a one-dimensional vector of pixel data bytes as output.
The PDFL requires that inputs to the PDFL functions specify the Render Area and Render Matrix data structures using real numbers represented with 32-bit, fixed point integer values. Unfortunately, limiting the inputs to the use of fixed point integer values limits the transformations that can be expressed, as well as the locality of transformed graphics that may be rendered by the PDFL. This is because a 32-bit, fixed point integer is used to represent a rational number, where the most-significant 16 bits are used for the signed, integer part and the least significant 16 bits are used for the unsigned, fractional part. Such a representation restricts legal values to the set X, where:
X = { x : x ∈ ℝ, x = m + f × 2^-16, m ∈ ℤ, f ∈ ℤ, |m| < 2^15, |f| < 2^16 }
For the smallest x ∈ X:
x = m + f × 2^-16 = (-2^15 + 1) + (-2^16 + 1) × 2^-16 = (-32768 + 1) + (-65536 + 1)/65536 = -32767 - 65535/65536 ≈ -32767.99998
For the largest x ∈ X:
x = m + f × 2^-16 = (2^15 - 1) + (2^16 - 1) × 2^-16 = (32768 - 1) + (65536 - 1)/65536 = 32767 + 65535/65536 ≈ 32767.99998
A number r ∈ ℝ is approximated with X if ∃x ∈ X such that |r − x| < 2^-16. The set of real numbers that can be approximated with X is denoted as the set Y:
Y = { r : r ∈ ℝ, |r| < 32768 }
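The representable domain can be illustrated with a small sketch (the helper names are hypothetical, not from the PDFL API):

```python
# Convert a real number to/from the 16.16 fixed-point form described above.
# Values with |r| >= 32768 are outside the domain Y and cannot be represented.

def to_fixed_16_16(r):
    if not abs(r) < 32768:
        raise OverflowError(f"{r} is outside the 16.16 fixed-point domain Y")
    return round(r * 65536)   # raw integer holding the 16.16 bits

def from_fixed_16_16(v):
    return v / 65536

# 32767.99998 round-trips to within the 2**-16 approximation tolerance...
x = from_fixed_16_16(to_fixed_16_16(32767.99998))
assert abs(x - 32767.99998) < 2 ** -16

# ...but 34538.4 (an entry of the render matrix derived below) does not fit:
try:
    to_fixed_16_16(34538.4)
except OverflowError as exc:
    print(exc)
```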
As a result of the use of 32 bit fixed point integer values by the PDFL, the Render Area and Render Matrix data structures are subject to the domain restriction of Y when the PDFL functions are used.
RA = {x0, y0, x1, y1}, where {x0, y0, x1, y1} ∈ Y
MRM = [ a b 0 ; c d 0 ; e f 1 ], where {a, b, c, d, e, f} ∈ Y
Unfortunately, the 32-bit fixed point constraint limits the usefulness of the available PDFL functions, particularly in the case where large images are to be processed. The following example demonstrates a specific render request that cannot be directly fulfilled using the PDFL due to the input constraints. Consider a raster display device with a two-dimensional rectangular display area of 1280×1024 pixels whose origin (0,0) is the top-left coordinate and whose unit size is 96 DPI. In this example the PDFL is directed to render the top-right 2″×2″ area of a 127″×90″ PDF page at six times (×6) device magnification. Assume for this example that the PDF page crop box rectangle Rcrop is located at the user space origin, sized identical to the dimensions of the PDF page, the intrinsic page rotation δ is zero degrees clockwise, and the resolution of user space is 72 DPI.
Rcrop = {x0, y0, x1, y1} = {0, 0, 127 × 72 DPI, 90 × 72 DPI} = {0, 0, 9144, 6480}
δ = 0°
A Render Area RA corresponding to a 2″×2″ area at the top-right corner of a 127″×90″ PDF page will be calculated. See, for example, the rectangle 1900 shown in FIG. 19, which shows the render area corresponding to a 2″×2″ area at the top-right corner of a 127″×90″ PDF page in user space.
Rcrop = {0, 0, 9144, 6480}
Δx = Δy = 2″ × 72 DPI = 144
RA = {9144 − Δx, 6480 − Δy, 9144, 6480} = {9000, 6336, 9144, 6480}
To transform user space to device space for rendering, a Render Matrix MRM is used that (1) flips the user space coordinate system across the x-axis, such that the top-left corner of the page becomes the coordinate system origin (see FIG. 20, illustrating rectangle 2000), (2) translates the bottom-left corner of the flipped Render Area RA to the coordinate system origin, and (3) scales the user space unit size from 72 DPI to 384 DPI. For this example, the Render Matrix can be expressed as the product of the Default Page Matrix, a Flip Matrix, a Translate Matrix and a Scale Matrix:
MRM = MDPM MF MT MS
A Default Page Matrix MDPM transforms the user space coordinate system to the rotated, cropped page space coordinate system. For a page with the crop box rectangle Rcrop located at the user space origin and an intrinsic page rotation δ equal to zero degrees clockwise, the Default Page Matrix MDPM is the 3-by-3 identity matrix.
MDPM = [ 1 0 0 ; 0 1 0 ; 0 0 1 ]
The Flip Matrix MF transforms the bottom-to-top page coordinate system to the top-to-bottom device coordinate system, as identified from the PDF page crop box rectangle Rcrop. The rectangle Rcrop can be expressed as a 4-by-3 matrix of corner coordinates.
Rcrop = {0, 0, 9144, 6480} ≡ [ x0 y0 1 ; x0 y1 1 ; x1 y0 1 ; x1 y1 1 ] ≡ [ 0 0 1 ; 0 6480 1 ; 9144 0 1 ; 9144 6480 1 ]
The PDFL function g(R, M) may be used to transform rectangle Rcrop with the Default Page Matrix MDPM to obtain the crop box rectangle Rcrop,DPM expressed in page space units.
Rcrop,DPM = g(Rcrop, MDPM) ≡ [ 0 0 1 ; 0 6480 1 ; 9144 0 1 ; 9144 6480 1 ] [ 1 0 0 ; 0 1 0 ; 0 0 1 ] ≡ [ 0 0 1 ; 0 6480 1 ; 9144 0 1 ; 9144 6480 1 ]
Rcrop,DPM = {0, 0, 9144, 6480}
The Flip Matrix reflects and translates the page coordinate system across the x-axis using the top offset of the crop box rectangle, expressed in page space units.
MF = [ 1 0 0 ; 0 -1 0 ; 0 h 1 ], h = 6480.0
A Translate Matrix MT is used to transform the flipped page coordinate system to the locality coordinate system, such that the bottom-left corner of the Render Area RA is the new origin. Since the Render Area RA was originally expressed for a non-flipped PDF page area, it must be transformed using the Flip Matrix MF to logically cover the same 2″×2″ corner of the PDF page. The rectangle RA can be expressed as a 4-by-3 matrix of corner coordinates.
RA = {9000, 6336, 9144, 6480} ≡ [ x0 y0 1 ; x0 y1 1 ; x1 y0 1 ; x1 y1 1 ] ≡ [ 9000 6336 1 ; 9000 6480 1 ; 9144 6336 1 ; 9144 6480 1 ]
The PDFL function g(R, M) is used to transform rectangle RA with the Flip Matrix MF to obtain the flipped Render Area RA,F. See, for example, rectangle 20000 of FIG. 20.
R A , F = g ( R A , M F ) R A , F ≡ [ 9000 6336 1 9000 6480 1 9144 6336 1 9144 6480 1 ] M F R A , F ≡ [ 9000 6336 1 9000 6480 1 9144 6336 1 9144 6480 1 ] [ 1 0 0 0 - 1 0 0 6480 1 ] R A , F ≡ [ 9000 144 1 9000 0 1 9144 144 1 9144 0 1 ] R A , F = { 9000 , 0 , 9144 , 144 }
Relative to the flipped Render Area RA,F, the Translate Matrix is calculated as:
MT = [ 1 0 0 ; 0 1 0 ; tx ty 1 ], tx = -9000, ty = 0
The Scale Matrix MS transforms the locality coordinate system unit size to the device coordinate system unit size such that the user space resolution, typically 72 DPI, is magnified to the device resolution of 384 DPI.
MS = [ sx 0 0 ; 0 sy 0 ; 0 0 1 ], sx = sy = 384 DPI / 72 DPI ≈ 5.33
The effective Render Matrix MRM that transforms the user space coordinate system to the specified device coordinate system is calculated as:
MRM = MDPM MF MT MS = [ 1 0 0 ; 0 1 0 ; 0 0 1 ] [ 1 0 0 ; 0 -1 0 ; 0 6480 1 ] [ 1 0 0 ; 0 1 0 ; -9000 0 1 ] [ 5.33 0 0 ; 0 5.33 0 ; 0 0 1 ] = [ 5.33 0 0 ; 0 -5.33 0 ; -47970 34538.4 1 ]
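This composition can be checked numerically. The sketch below is not the PDFL API; it follows the text in rounding 384/72 to 5.33, which is what makes the translation entries come out as -47970 and 34538.4:

```python
# Reproduce M_RM = M_DPM * M_F * M_T * M_S and the transformed render area.

def matmul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

s = 5.33                                        # the text's rounded 384/72
M_DPM = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]       # identity page matrix
M_F   = [[1, 0, 0], [0, -1, 0], [0, 6480, 1]]   # flip, h = 6480
M_T   = [[1, 0, 0], [0, 1, 0], [-9000, 0, 1]]   # translate, tx = -9000
M_S   = [[s, 0, 0], [0, s, 0], [0, 0, 1]]       # scale to device units

M_RM = matmul3(matmul3(matmul3(M_DPM, M_F), M_T), M_S)
assert abs(M_RM[2][0] - (-47970.0)) < 1e-6
assert abs(M_RM[2][1] - 34538.4) < 1e-6

# Transform the corners of RA = {9000, 6336, 9144, 6480} to device space:
pts = [(M_RM[0][0] * x + M_RM[1][0] * y + M_RM[2][0],
        M_RM[0][1] * x + M_RM[1][1] * y + M_RM[2][1])
       for x in (9000, 9144) for y in (6336, 6480)]
xs = [p[0] for p in pts]
ys = [p[1] for p in pts]
# Bounding box matches RA,RM = {0, 0, 767.52, 767.52}:
assert abs(min(xs)) < 1e-6 and abs(min(ys)) < 1e-6
assert abs(max(xs) - 767.52) < 1e-6 and abs(max(ys) - 767.52) < 1e-6
```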
The PDFL function g(R, M) is then used to transform rectangle RA with the Render Matrix MRM so that the requested area logically covers the same 2″×2″ locality on the rendered PDF page. See for example FIG. 22, which shows the render area corresponding to a 2″×2″ area at the top-right corner of a 127″×90″ PDF page in flipped, translated, scaled page space, or device space.
RA,RM = g(RA, MRM) ≡ [ 9000 6336 1 ; 9000 6480 1 ; 9144 6336 1 ; 9144 6480 1 ] [ 5.33 0 0 ; 0 -5.33 0 ; -47970 34538.4 1 ] ≡ [ 0 767.52 1 ; 0 0 1 ; 767.52 767.52 1 ; 767.52 0 1 ]
RA,RM = {0, 0, 767.52, 767.52}
To directly render PDF page graphics inside the top-right 2″×2″ locality of a 127″×90″ PDF page at six times (×6) device resolution, the Render Matrix MRM and Render Area RA,RM must be successfully expressed using the syntax required for the interface to the PDFL functions.
MRM = [ 5.33 0 0 ; 0 -5.33 0 ; -47970 34538.4 1 ], RA,RM = {0, 0, 767.52, 767.52}
However, the Render Matrix MRM cannot be directly represented using the PDFL interface. The numbers -47970 and 34538.4 cannot be reasonably approximated with X since these values are not contained in the set Y = {r : r ∈ ℝ, |r| < 32768}; that is, -47970 ∉ Y and 34538.4 ∉ Y.
From the above, it should be appreciated that existing image processing library functions which use 32 bit fixed point values can result in undesirable limitations on the size and/or resolutions of images which are to be processed. While rewriting library functions, e.g., PDFL functions, to use values other than 32 bit fixed point integer values would be one approach to overcoming the limitations discussed above, such an approach would be costly and result in much or all of the existing library functions being discarded.
In view of the above discussions, it should be appreciated that there is a need for methods and apparatus which can be used to render or otherwise process relatively large images and/or high resolution images, e.g., images having area or other ranges which require the use of values larger than those which can be expressed using 32-bit fixed point integer values. It would be desirable if at least some of the methods and/or apparatus could use or otherwise take advantage of one or more existing library functions and/or hardware which use 32-bit fixed point integer values to specify various input and/or output values.
TECHNICAL FIELD
DESCRIPTION OF THE RELATED ART
BRIEF SUMMARY OF THE INVENTION
DETAILED DESCRIPTION OF THE EMBODIMENTS
EXAMPLE 1
Nanoparticle Design & Synthesis
EXAMPLE 2
Nanoparticle & Peptide Characterization
EXAMPLE 3
Electronic Microscopy Analysis
EXAMPLE 4
Filter Design
EXAMPLE 5
The present invention relates to a water filtration apparatus and a method of using the water filtration apparatus, wherein the water filtration apparatus includes a nanoparticle layer which comprises polypeptide-functionalized nanoparticles that are capable of absorbing heavy metals such as Pb2+, As5+, Cd2+, Hg2+, Cr6+, Cu2+, and Zn2+, as well as organic materials.
The “background” description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present invention.
Over one billion people worldwide lack access to potable water. Water-borne diseases are one of the leading causes of disease and death in the world. Water systems in developing economies and emerging countries have shortcomings including operational complexity, operation and maintenance costs, expense, lack of portability, and the requirement of chemicals and energy to operate. Industries that locate operations in developing and emerging countries are seeking sustainable cost effective water supply systems to provide water to their facilities and the workers' communities. In developed economies, consumers are seeking a more environmentally sustainable life style which includes using alternative energy and fewer chemicals in products, including water. In dealing with water scarcity, there is an increasing demand for rain water and industrial wastewater filtration and recycling using sustainable systems that do not rely on additional chemical treatment or extensive maintenance.
On the other hand, water can dissolve many different chemical compounds such as nitrates, pesticides, heavy metals, organic materials, and radioactive materials. Among these, heavy metals are considered to be the most dangerous pollutants for water. Heavy metals can stay in a water cycle for a long period of time. The presence of the heavy metals can interfere with biological processes and can cause dangerous diseases. Removal of heavy metals from water demands complex filtration devices and processes. Various approaches have been investigated to effectively remove heavy metals from a water source. However, these approaches generally rely on traditional power sources such as generators or an available power grid, and supply only a portion of the needs for water purification. Other systems that remove heavy metals from a water source and provide drinking water generally use traditional energy sources that produce greenhouse gases and other environmental pollutants. In addition, treatment units that rely on hydrocarbon based power generation increase the risk of contamination of the water source. Other solar powered units use combinations of filtration approaches that require the use of disposable filters and UV oxidation to provide disinfection and heavy metal removal. However, this type of approach is not well suited to developing economies as it requires significant maintenance work. Thus, there is a world-wide need for a sustainable and a portable water filtration system with low cost and low maintenance that can effectively remove pollutants (primarily heavy metals) from water, and without generating other environmental pollutants.
In view of the foregoing, one objective of the present invention is to provide a water filtration apparatus that includes a nanoparticle layer, wherein each particle is a polypeptide-functionalized nanoparticle capable of absorbing heavy metals selected from the group consisting of Pb2+, As5+, Cd2+, Hg2+, Cr6+, Cu2+, and Zn2+, as well as organic materials.
According to a first aspect the present disclosure relates to a water filtration apparatus, including i) a hollow filter cartridge having a water inlet and a water outlet, ii) a zeolite layer located inside said cartridge in between the water inlet and the water outlet, which is configured to reduce a concentration of heavy metals in water, iii) a nanoparticle layer located in between the zeolite layer and the water outlet, which is configured to remove heavy metals and organic compounds in water, iv) an activated carbon layer located in between the zeolite layer and the nanoparticle layer, which is configured to reduce a concentration of organic compounds in water, wherein the nanoparticle layer comprises polypeptide-functionalized nanoparticles.
In one embodiment, the polypeptide-functionalized nanoparticles have a structure of formula (I):
wherein NP is a nanoparticle, L is a linker, and PP is a polypeptide.
In one embodiment, the linker comprises a heterocycle. In another embodiment, the linker comprises a carbocycle. In one embodiment, the linker comprises a triazole. In one embodiment, the linker is bound to the polypeptide via an amide bond.
In one embodiment, the nanoparticle is a silica nanoparticle. In another embodiment, the linker is bound to the silica nanoparticle via a Si—O—Si bond.
In one embodiment, the polypeptide is a block copolymer comprising at least two polymers selected from the group consisting of an alkyl-functionalized glutamine polymer, a phenylalanine polymer, and a carboxylic acid-functionalized glutamine polymer.
In one embodiment, the polypeptide is a diblock copolymer comprising the alkyl-functionalized glutamine polymer and the phenylalanine polymer or the carboxylic acid-functionalized glutamine polymer.
In one embodiment, the alkyl-functionalized glutamine polymer is an octadecyl-functionalized glutamine polymer.
In one embodiment, the polypeptide is a diblock copolymer comprising the phenylalanine polymer and the carboxylic acid-functionalized glutamine polymer.
In one embodiment, the polypeptide is a triblock copolymer comprising the alkyl-functionalized glutamine polymer, the phenylalanine polymer, and the carboxylic acid-functionalized glutamine polymer.
In one embodiment, the polypeptide-functionalized nanoparticles are spherical having a hydrodynamic radius in the range of 5-20 nm.
In one embodiment, the water filtration apparatus further includes a cotton filter pad located in between the water inlet and the zeolite layer, which is configured to remove suspended solids and sediments.
According to a second aspect the present disclosure relates to a method of removing Pb²⁺, As⁵⁺, Cd²⁺, Hg²⁺, Cr⁶⁺, Cu²⁺, and/or Zn²⁺ from a water source with the water filtration apparatus. The method involves passing the water source through the zeolite layer, the activated carbon layer, and the nanoparticle layer of the water filtration apparatus.
According to a third aspect the present disclosure relates to a method of producing a polypeptide-functionalized nanoparticle having a structure of formula (I):
wherein NP is a nanoparticle, L is a linker comprising a triazole, and PP is a polypeptide including at least two polymers selected from the group consisting of an alkyl-functionalized glutamine polymer, a phenylalanine polymer, and a carboxylic acid-functionalized glutamine polymer. The method involves i) treating the polypeptide with an azide-containing reagent to form an azido polypeptide compound, ii) functionalizing a surface of the nanoparticle with an alkynyl reagent to form an alkynyl nanoparticle, iii) coupling the azido polypeptide compound to the alkynyl nanoparticle via an azide-alkyne cycloaddition to form the polypeptide-functionalized nanoparticle.
In one embodiment, the alkyl-functionalized glutamine polymer is present in the polypeptide and is an octadecyl-functionalized glutamine polymer.
In one embodiment, the nanoparticle is a silica nanoparticle. In another embodiment, the linker is bound to the silica nanoparticle via a Si—O—Si bond.
The foregoing paragraphs have been provided by way of general introduction, and are not intended to limit the scope of the following claims. The described embodiments, together with further advantages, will be best understood by reference to the following detailed description taken in conjunction with the accompanying drawings.
Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views.
According to a first aspect the present disclosure relates to a water filtration apparatus 400 including a hollow filter cartridge 402 that includes a water inlet 412 and a water outlet 414 at opposite ends of said cartridge 402.
The hollow filter cartridge 402 is a container with an internal cavity that is configured to hold a liquid preferably at elevated pressures, for example, in a preferred embodiment, the cartridge 402 is configured to hold a liquid at a pressure in the range of 1-50 atm, preferably 1-20 atm, more preferably 10-20 atm. The cartridge 402 may be made of alumina, quartz, stainless steel, nickel steel, chromium steel, aluminum, aluminum alloy, copper and copper alloys, titanium, and the like, although the materials used to construct the cartridge are not meant to be limiting and various other materials may also be used. In one embodiment, a portion of an internal surface of the cartridge 402 is coated with a polymeric lining to minimize internal surface oxidation of the cartridge 402. The polymeric lining may be epoxy or vinyl ester, or preferably a BPA-free polymer such as polyethylene, polypropylene, or polytetrafluoroethylene. In one embodiment, the cartridge 402 is cylindrical having an internal volume in the range of 0.1-10,000 L, preferably 5-5,000 L, or preferably 100-1,000 L, or preferably 500-1,000 L. In one embodiment, the cartridge 402 is rectangular having an internal volume in the range of 0.1-10,000 L, preferably 5-5,000 L, or preferably 100-1,000 L, or preferably 500-1,000 L. The cartridge 402 may also have other geometries including, but not limited to cubic, cylindrical, spherical, oblong, conical, and pyramidal. In a preferred embodiment, the hollow filter cartridge 402 is cylindrical and is vertically oriented (as shown in FIG. 4A and FIG. 4F). The hollow filter cartridge may also be horizontally oriented (as shown in FIG. 4G). In another preferred embodiment, the hollow filter cartridge is portable having an internal volume in the range of 0.1-10.0 L, preferably 0.5-8.0 L, more preferably 0.5-5.0 L.
In one embodiment, the cartridges having an internal volume of 500-1000 L or more are used to purify water for large scale purification demands (e.g. for a power plant, a chemical processing plant, a refining plant, or residential water consumption). On the other hand, in one embodiment, the cartridges having an internal volume of 0.5-5 L are used to purify water for small scale purification demands (e.g. a portable purifier that may fit into a backpack for purifying water in the field, or during traveling or hiking, etc.).
The water inlet 412 and the water outlet 414 are utilized as passages for loading and unloading the cartridge 402 with water. In one embodiment, the water inlet 412 and the water outlet 414 are substantially similar, wherein each is a cylindrical port having an internal diameter in the range of 1-50 mm, preferably 5-20 mm, more preferably 5-10 mm, even more preferably about 5 mm, which is configured to transfer water having a flow rate in the range of 0.1-1,000 L/min, preferably 10-500 L/min, or preferably 10-100 L/min. The water inlet 412 and the water outlet 414 may be secured with threaded fittings, or other means, to the cartridge 402.
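For illustration only (not part of the claimed subject matter), the port dimensions and flow rates recited above imply a mean linear velocity through each port; a minimal sketch, in which the function name and the chosen example values are assumptions:

```python
import math

def port_velocity(diameter_mm: float, flow_l_per_min: float) -> float:
    """Mean linear velocity (m/s) of water through a cylindrical port."""
    radius_m = diameter_mm / 1000.0 / 2.0
    area_m2 = math.pi * radius_m ** 2            # port cross-section in m^2
    flow_m3_per_s = flow_l_per_min / 1000.0 / 60.0
    return flow_m3_per_s / area_m2

# The preferred lower bounds above: a 5 mm port carrying 10 L/min
v = port_velocity(5.0, 10.0)                     # about 8.5 m/s
```

Such an estimate may inform the choice of port diameter for a target flow rate, since velocity scales with the inverse square of the diameter.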
In a preferred embodiment, the cartridge includes a water sprinkler located therein and proximal to the water inlet, wherein the water sprinkler divides a water stream into a plurality of water streams and distributes the plurality of water streams throughout a cross-section of the cartridge. The water sprinkler may be made of glass or metal, and may have any shape, preferably a disc shape, cylindrical, or spherical. For example, in one embodiment, the water sprinkler has a perforated disc-shaped structure. The perforations in the water sprinkler may vary in size, ranging from 0.5-20 mm, preferably 1-5 mm, more preferably 2-4 mm. The in-situ position and angular direction of the water sprinkler may be adjusted by a mechanical control system attached thereto. Additionally, the water sprinkler may also rotate around its shaft.
The water filtration apparatus 400 further includes a zeolite layer 406 located inside the hollow filter cartridge 402 in between the water inlet 412 and the water outlet 414.
Zeolite particles are alumino-silicate minerals that occur in nature. Elementary building units of zeolite particles are SiO₄ and AlO₄ tetrahedra. Adjacent tetrahedra are linked at their corners via a common oxygen atom, which results in an inorganic macromolecule with a three-dimensional framework. The three-dimensional framework of a zeolite also comprises channels, channel intersections, and/or cages having dimensions in the range of 0.1-10 nm, preferably 0.2-5 nm, more preferably 0.2-2 nm. Water molecules may be present inside these channels, channel intersections, and/or cages.
The zeolite layer 406 refers to a crystalline structure of the zeolite particles that are deposited on a support material. In one embodiment, a silicon-to-aluminum molar ratio of zeolite particles in the zeolite layer 406 is at least 10, or preferably at least 20, or preferably at least 30, or preferably at least 40, or preferably at least 45, or preferably at least 50, but no more than 100. A higher silicon-to-aluminum molar ratio of zeolite particles provides a larger water flux and a reduced cation rejection rate of the zeolite layer 406. Conversely, a lower silicon-to-aluminum molar ratio of zeolite particles results in a lower water flux and an increased cation rejection rate of the zeolite layer 406. In another embodiment, the zeolite particles comprise micro-pores (i.e. pores having an average pore diameter of less than 2 nm) having a specific pore volume in the range of 0.1-0.3 cm³/g, preferably 0.1-0.2 cm³/g, more preferably 0.15-0.2 cm³/g, and meso-pores (i.e. pores having average pore diameters in the range of 2-50 nm) having a specific pore volume in the range of 0.01-0.15 cm³/g, preferably 0.05-0.15 cm³/g, more preferably 0.05-0.1 cm³/g. In one embodiment, a specific pore volume of macro-pores (i.e. pores having an average pore diameter of above 50 nm) in the zeolite particles is less than 0.2 cm³/g, preferably less than 0.1 cm³/g, more preferably less than 0.01 cm³/g. In one embodiment, a specific surface area of the micro-pores in the zeolite particles is in the range of 100-500 m²/g, preferably 300-500 m²/g, more preferably about 400 m²/g, whereas a specific surface area of the meso-pores in the zeolite particles is in the range of 50-150 m²/g, preferably 50-100 m²/g, more preferably about 80 m²/g. A specific surface area of the macro-pores in the zeolite particles may be in the range of 500-1,000 m²/g, preferably 700-1,000 m²/g, more preferably about 850 m²/g.
In another embodiment, an average pore diameter of the micro-pores, the meso-pores, and the macro-pores in the zeolite particles is in the range of 1-10 nm, preferably 2-6 nm, more preferably about 5 nm. In one embodiment, a total acidity of the zeolite particles in the zeolite layer 406 is in the range of 2-10 mmol/g, preferably 5-10 mmol/g, more preferably about 7.5 mmol/g. The zeolite layer may be in the form of pellets having a diameter in the range of 0.5-5 mm, preferably 0.5-1.5 mm, more preferably about 1 mm. The zeolite layer may also be extruded to have a geometry selected from the group consisting of cylindrical, rectilinear, star-shaped, conical, pyramidal, rectangular, cubical, and ring-shaped.
The zeolite particles in the zeolite layer 406 may be one or more selected from the group consisting of a 4-membered ring zeolite, a 6-membered ring zeolite, a 10-membered ring zeolite, and a 12-membered ring zeolite. The zeolite particles in the zeolite layer may have a zeolite with a natrolite framework (e.g. gonnardite, natrolite, mesolite, paranatrolite, scolecite, and tetranatrolite), edingtonite framework (e.g. edingtonite and kalborsite), thomsonite framework, analcime framework (e.g. analcime, leucite, pollucite, and wairakite), phillipsite framework (e.g. harmotome), gismondine framework (e.g. amicite, gismondine, garronite, and gobbinsite), chabazite framework (e.g. chabazite-series, herschelite, willhendersonite, and SSZ-13), faujasite framework (e.g. faujasite-series, Linde type X, and Linde type Y), mordenite framework (e.g. maricopaite and mordenite), heulandite framework (e.g. clinoptilolite and heulandite-series), stilbite framework (e.g. barrerite, stellerite, and stilbite-series), brewsterite framework, or cowlesite/ZSM-5 framework.
The zeolite layer 406 may contain pillared zeolites. A pillared zeolite is a type of zeolite, wherein pillars (e.g. silica pillars) are located between two adjacent layers in the zeolite.
The zeolite layer 406 may further include crystalline zeolite particles having a high cation exchange capacity and an ability to physically capture micron-range particles from water. In one embodiment, the zeolite layer is able to remove particles having a size in the range of 0.5-20 μm, preferably 1-15 μm, more preferably 1-10 μm. The zeolite layer may also remove fluoride and provide limited reactive dechlorination of halogenated organics.
The zeolite particles can be manufactured using any technique that involves depositing the zeolite particles (having desired chemical compositions, pore sizes, and/or particle sizes) on a support material followed by performing a crystal growth reaction to produce the zeolite layer. For example, in one embodiment, a plurality of zeolite particles (or zeolite crystal seeds) are adhered to a support material (e.g. glass beads, etc.) and then treated in a halide solution (e.g. an ammonium halide solution) at a temperature in the range of 50-120° C., preferably 50-100° C., more preferably 80° C. for at least 18 hours, preferably at least 20 hours, more preferably at least 24 hours to help the zeolite crystals grow. Grown zeolite crystals may further be separated from the halide solution by centrifugation at a rotational speed of at least 2,000 rpm, preferably at least 3,000 rpm, more preferably at least 5,000 rpm. The grown zeolite crystals may further be calcined at a temperature in the range of 450-650° C., preferably 500-600° C., and dried. Additionally, the zeolite particles may also be manufactured via sol-gel processing techniques or hydrothermal synthesis methods.
In one embodiment, the zeolite layer 406 reduces a concentration of heavy metals in water. In a preferred embodiment, the heavy metals (which may be in cation form) are selected from the group consisting of Pb²⁺, As⁵⁺, Cd²⁺, Hg²⁺, Cr⁶⁺, Cu²⁺, and Zn²⁺. Preferably, the concentration of heavy metals in water, after being treated with the zeolite layer 406, reduces down to less than 5000 ppm, preferably less than 2000 ppm, more preferably less than 1000 ppm. Furthermore, the zeolite layer may also reduce a concentration of cations of metals selected from the group consisting of Mn, Co, Ni, Se, Ag, Sb, and Tl.
In one embodiment, a selectivity of the zeolite layer 406 with respect to the heavy metals (i.e. Pb²⁺, As⁵⁺, Cd²⁺, Hg²⁺, Cr⁶⁺, Cu²⁺, and Zn²⁺) over other dissolved metals, which may be present in water, is at least 85%, preferably at least 90%, more preferably at least 95%. Selectivity with respect to a heavy metal (e.g. Pb²⁺), as used herein, is a measure of the capability of a given zeolite layer to filter the heavy metal relative to other dissolved metals in water. For example, if selectivity of a zeolite layer with respect to Pb²⁺ is at least 90%, then at least 90 wt % of Pb²⁺ cations or Pb-containing compounds is removed from water relative to other dissolved metals, with the zeolite layer.
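The selectivity definition above can be sketched numerically; this is an illustrative aid only (the function names and example concentrations are assumptions, not part of the disclosure):

```python
def removal_fraction(c_in_ppm: float, c_out_ppm: float) -> float:
    """Weight percent of a dissolved species removed by a filtration layer."""
    return 100.0 * (c_in_ppm - c_out_ppm) / c_in_ppm

def is_selective_for(target: str, c_in: dict, c_out: dict,
                     threshold: float = 90.0) -> bool:
    """True if the target metal's removal meets the threshold and is at least
    as high as the removal of every other dissolved metal, in the sense of
    'selectivity' used in this disclosure."""
    target_removal = removal_fraction(c_in[target], c_out[target])
    others = (removal_fraction(c_in[m], c_out[m]) for m in c_in if m != target)
    return target_removal >= threshold and all(target_removal >= r for r in others)

# A layer that removes 95% of Pb but only 40% of Na is selective
# for Pb at the 90% level:
result = is_selective_for("Pb", {"Pb": 100.0, "Na": 100.0},
                          {"Pb": 5.0, "Na": 60.0})   # True
```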
The water filtration apparatus 400 further includes a nanoparticle layer 410 located in between the zeolite layer 406 and the water outlet 414.
The nanoparticle layer 410 includes polypeptide-functionalized nanoparticles 100. In a preferred embodiment, the polypeptide-functionalized nanoparticles 100 have a structure of formula (I):
wherein NP is a nanoparticle 102, L is a linker 104, and PP is a polypeptide 106. In one embodiment, the polypeptide-functionalized nanoparticles 100 are spherical having a hydrodynamic radius in the range of 5-20 nm, preferably 5-15 nm, more preferably 5-10 nm.
The linker 104 may comprise a first component 114, whereby the linker 104 is bound to the nanoparticle 102, a third component 118, whereby the linker 104 is bound to the polypeptide 106, and a second component 116. The first and the third components are conjugated forming the second component.
In one embodiment, the first component 114 includes an amide. In a preferred embodiment, the nanoparticle 102 is a silica nanoparticle, and the first component 114 further includes Si, wherein the first component of the linker 104 is bound to the silica nanoparticle via a Si—O—Si bond, as shown in structure (II):
The nanoparticle 102 may be a ceramic nanoparticle, a metallic nanoparticle, a polyhedral oligomeric silsesquioxane, a nano-diamond, a carbon nanotube, a graphene sheet, or a fullerene. In one embodiment, the ceramic nanoparticle is one selected from the group consisting of silicon dioxide, titanium dioxide, zinc oxide, aluminum oxide, cadmium sulfide, zirconium oxide, calcium phosphate, calcium oxide, and a combination thereof.
In another embodiment, the third component 118 is linked to a terminal nitrogen of the polypeptide 106 through an amide linkage, as shown in structure (II).
In one embodiment, a carbocycle (i.e. the second component of the linker) may be formed as a result of a conjugation of the first and the third components. The carbocycle may be a cycloalkenyl compound as a result of a Diels-Alder reaction. The term “cycloalkenyl” is used herein to mean cyclic radicals, preferably of 6 to 8 carbons, which contain at least one double bond. One example of a cycloalkenyl compound is cyclohexenyl, formed as a result of a Diels-Alder reaction. Other examples may include, but are not limited to, cyclopentenyl, cycloheptenyl, cyclooctenyl, and the like. Cycloalkenyl compounds are not aromatic. The carbocycle may also be a cycloalkyl including, but not limited to, cyclohexyl, cycloheptyl, and cyclooctyl.
In another embodiment, a heterocycle may be formed as a result of the conjugation of the first and the third components. The term “heterocycle”, as used herein, refers to a cyclic compound that has atoms of at least two different elements (i.e. heteroatom) as members of its ring. The heterocycle may include a 5-membered ring, a 6-membered ring, or a 7-membered ring. Heteroatoms of the heterocycle may preferably be oxygen, sulfur, and/or nitrogen, even though other elements such as boron, phosphorus, arsenic, antimony, bismuth, silicon, and/or tin may also be present in the ring structure of the heterocycle. The heterocycle may also be a bicyclic or a polycyclic compound.
In one embodiment, the heterocycle is a heteroaryl compound as a result of an alkyne and azide cycloaddition. The term “heteroaryl” as used herein refers to an aromatic cyclic compound that has atoms of at least two different elements (i.e. heteroatom) as members of its ring. Exemplary heteroaryl compounds include, but are not limited to imidazoles, pyrazoles, tetrazoles, pentazoles, oxatetrazoles, and thiatetrazoles. In a preferred embodiment, the heteroaryl compound is a triazole.
The heterocycle may be a heterocycloalkyl compound, which is an aliphatic cyclic compound that has atoms of at least two different elements (i.e. heteroatom) as members of its ring. The heteroatoms may occupy the positions at which the heterocycloalkyl compound is attached to the remainder of the linker (i.e. the first and the third components). Examples of heterocycloalkyl groups include, but are not limited to tetrahydropyridyl, piperidinyl, morpholinyl, tetrahydrofuran, tetrahydrothienyl, piperazinyl, and the like.
The term “polypeptide” refers to a polymer of amino acid residues, wherein the polymer may optionally be conjugated to a moiety that does not consist of amino acids. The term may apply to amino acid polymers in which one or more amino acid residues are an artificial chemical mimetic of a corresponding naturally occurring amino acid, as well as to naturally occurring amino acid polymers and non-naturally occurring amino acid polymers.
In one embodiment, the polypeptide 106 is a block copolymer having a repeating unit including a D-block and an E-block in a repeating sequence of (D)ₘ-(E)ₙ, wherein each of D and E is individually selected from the group consisting of an alkyl-functionalized glutamine polymer 108, a phenylalanine polymer 110, and a carboxylic acid-functionalized glutamine polymer 112, and wherein m and n are repeating numbers in the range of 2-100,000, preferably 1,000-5,000. In another embodiment, the block copolymer has a repeating unit in a repeating sequence of [(D)ₘ-(E)ₙ]ᵣ, wherein each of D and E is individually selected from the group consisting of an alkyl-functionalized glutamine polymer 108, a phenylalanine polymer 110, and a carboxylic acid-functionalized glutamine polymer 112, wherein m and n are primary repeating numbers in the range of 2-100,000, preferably 1,000-5,000, and wherein r is a secondary repeating number in the range of 1-1,000, preferably 10-500.
In another embodiment, the polypeptide 106 is a block copolymer having a repeating unit including a D-block, an E-block, and an F-block in a repeating sequence of [(D)ₘ-(E)ₙ-(F)ₚ]ᵣ, wherein each of D, E, and F is individually selected from the group consisting of an alkyl-functionalized glutamine polymer 108, a phenylalanine polymer 110, and a carboxylic acid-functionalized glutamine polymer 112, wherein m, n, and p are primary repeating numbers in the range of 2-100,000, preferably 1,000-5,000, and wherein r is a secondary repeating number in the range of 1-1,000, preferably 10-500.
The term “alkyl-functionalized glutamine polymer” as used herein refers to a polyglutamine (i.e. a sequence of several glutamines bonded together) backbone wherein the glutamine amide side chains are alkyl functionalized. The term “alkyl”, as used herein, refers to a hydrocarbon fragment, preferably having 1 to 30, more preferably 5-25 carbons. Non-limiting examples of such hydrocarbon fragments include methyl, ethyl, propyl, isopropyl, butyl, isobutyl, t-butyl, pentyl, isopentyl, neopentyl, hexyl, isohexyl, methylpentyl, dimethylbutyl, vinyl, allyl, propenyl, butenyl, pentenyl, or hexenyl. The term “alkyl” may also refer to cyclic hydrocarbons. Exemplary cyclic hydrocarbons (i.e. cycloalkyls) include, but are not limited to, cyclopropyl, cyclobutyl, cyclopentyl, cyclohexyl, norbornyl, and adamantyl. Branched cycloalkyls, such as exemplary 1-methylcyclopropyl and 2-methylcyclopropyl groups, are also included in the definition of cycloalkyl as used in the present disclosure. In a preferred embodiment, the alkyl-functionalized glutamine polymer 108 is an octadecyl-functionalized glutamine polymer (i.e. the alkyl is a straight hydrocarbon fragment having 18 carbon atoms).
In addition, the term “carboxylic acid-functionalized glutamine polymer” as used herein refers to a polymer brush compound with a polyglutamine backbone and carboxy-terminus amino acids being the brushes for the backbone, wherein each carboxy-terminus amino acid includes at least one carboxylic acid, an amine, and a heteroaryl (as described previously). In one embodiment, the carboxy-terminus amino acid has a structure of formula (II):
wherein PG is the polyglutamine.
In one embodiment, a selectivity of the nanoparticle layer 410 with respect to the heavy metals (i.e. Pb²⁺, As⁵⁺, Cd²⁺, Hg²⁺, Cr⁶⁺, Cu²⁺, and Zn²⁺) over other dissolved metals, which may be present in water, is at least 95%, preferably at least 97%, more preferably at least 98%, even more preferably at least 99%. Selectivity of a filtration layer has been defined previously. In addition, in another embodiment, a selectivity of the nanoparticle layer 410 with respect to the metals (in free form, in the cation form, or in an ionic salt form) selected from the group consisting of Ba²⁺, Sr²⁺, Ca²⁺, Mg²⁺, Mn²⁺, Fe²⁺, Co²⁺, Ni²⁺, Pd²⁺, Pt²⁺, Se, Ag, Sb, and Tl is at least 80%, preferably at least 90%, more preferably at least 95%.
In a preferred embodiment, an absorbance capacity per 1 gram of the nanoparticle layer 410 is at least 1000 mg Pb²⁺, preferably at least 1200 mg Pb²⁺, more preferably at least 1500 mg Pb²⁺, but no more than 2500 mg Pb²⁺; at least 150 mg Cr⁶⁺, preferably at least 200 mg Cr⁶⁺, more preferably at least 250 mg Cr⁶⁺, but no more than 500 mg Cr⁶⁺; at least 700 mg Zn²⁺, preferably at least 800 mg Zn²⁺, more preferably at least 900 mg Zn²⁺, but no more than 1500 mg Zn²⁺; at least 200 mg As⁵⁺, preferably at least 300 mg As⁵⁺, more preferably at least 350 mg As⁵⁺, but no more than 800 mg As⁵⁺; at least 600 mg Cd²⁺, preferably at least 700 mg Cd²⁺, more preferably at least 800 mg Cd²⁺, but no more than 1000 mg Cd²⁺; at least 350 mg Cu²⁺, preferably at least 450 mg Cu²⁺, more preferably at least 550 mg Cu²⁺, but no more than 1000 mg Cu²⁺; at least 1000 mg Hg²⁺, preferably at least 1300 mg Hg²⁺, more preferably at least 1600 mg Hg²⁺, but no more than 2500 mg Hg²⁺.
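The capacities above allow a rough sizing estimate for a given water batch. The following sketch is illustrative only; the helper name is hypothetical, the capacities are the minimum values recited above, and 1 ppm is assumed equal to 1 mg/L for dilute aqueous solutions:

```python
# Minimum absorbance capacities (mg metal per g of nanoparticle layer).
CAPACITY_MG_PER_G = {
    "Pb": 1000, "Cr": 150, "Zn": 700, "As": 200,
    "Cd": 600, "Cu": 350, "Hg": 1000,
}

def layer_mass_needed(volume_l: float, conc_ppm: dict) -> float:
    """Grams of nanoparticle layer needed to absorb the full heavy-metal
    load of a water batch; 1 ppm is taken as 1 mg/L (dilute solution)."""
    grams = 0.0
    for metal, ppm in conc_ppm.items():
        load_mg = ppm * volume_l                  # total mg of this metal
        grams += load_mg / CAPACITY_MG_PER_G[metal]
    return grams

# 100 L at 50 ppm Pb and 30 ppm Zn:
# Pb: 5000 mg / 1000 mg/g = 5 g; Zn: 3000 mg / 700 mg/g ~ 4.3 g
mass = layer_mass_needed(100.0, {"Pb": 50.0, "Zn": 30.0})
```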
The water filtration apparatus 400 further includes an activated carbon layer 408 located in between the zeolite layer 406 and the nanoparticle layer 410. The activated carbon layer 408 may be divided into a granular activated carbon layer and a powdered activated carbon layer. The granular activated carbon may have attributes, such as a pore volume of 0.5-1.0 cm³/g, preferably 0.6-1.0 cm³/g, more preferably 0.7-1.0 cm³/g, even more preferably 0.8-1.0 cm³/g; a specific surface area of 700 to 1500 m²/g, preferably 1000 to 1500 m²/g, more preferably 1200 to 1500 m²/g; and an average pore diameter of 12-30 Å, preferably 12-20 Å, more preferably 12-15 Å. In one embodiment, the granular activated carbon layer allows for a continuous countercurrent operation, thus lowering operation costs. In another embodiment, the granular activated carbon layer is recyclable. In another embodiment, the granular activated carbon layer does not undergo particle coagulation; therefore a chance of clogging is low. The granular activated carbon layer may also remove phenolic compounds, mercury-containing compounds, and/or organic solvents included in water. Further, the granular activated carbon layer may improve the taste, smell, and turbidity of water by removing or reducing chlorine and/or parasite content. The powdered activated carbon layer may have attributes, such as a pore volume of 0.1-0.5 cm³/g, preferably 0.3-0.5 cm³/g, more preferably 0.4-0.5 cm³/g; a specific surface area of 700 to 1500 m²/g, preferably 1000 to 1500 m²/g, more preferably 1200 to 1500 m²/g; and an average pore diameter of 12-30 Å, preferably 15-30 Å, more preferably 20-30 Å. In one embodiment, the powdered activated carbon layer has a high adsorption speed.
In addition to the granular and powdered activated carbon, the activated carbon layer 408 may include another layer of charcoal particles for absorbing organic compounds, such as common volatile and semivolatile organic compounds, tannins, fluoride, arsenic, and metals that may be present in water. These compounds can affect the taste, odor, color, and suitability of water for drinking.
The activated carbon layer 408 may further include activated carbon having a variety of capabilities to be appropriate for different purposes of water purification. For example, the activated carbon layer may have an iodine number of 900 to 2000 mg/g, preferably 1500 to 2000 mg/g; a pore volume of 0.3 to 0.8 cm³/g, preferably 0.4 to 0.6 cm³/g; a specific surface area (BET) of 1000 to 2000 m²/g, preferably 1500 to 2000 m²/g; a micro-pore size of 12 to 20 Å, preferably 14 to 20 Å; and a meso-pore size of 30 to 40 Å, preferably 30 to 37 Å. The term “iodine number” of an activated carbon is a representative index of the specific surface area of the activated carbon.
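As an illustrative aid (not part of the disclosure), selecting an activated carbon against the preferred ranges above amounts to a simple range check; the dictionary keys below are hypothetical property names chosen for the example:

```python
# Preferred ranges recited above for a candidate activated carbon.
PREFERRED = {
    "iodine_number_mg_g": (1500, 2000),
    "pore_volume_cm3_g": (0.4, 0.6),
    "bet_area_m2_g": (1500, 2000),
    "micropore_size_A": (14, 20),
    "mesopore_size_A": (30, 37),
}

def meets_preferred(sample: dict) -> bool:
    """True if every measured property falls within its preferred range."""
    return all(lo <= sample[k] <= hi for k, (lo, hi) in PREFERRED.items())
```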
In addition, the activated carbon layer 408 may include particles having one or more carboxy groups. Carboxy-functionalized particles may provide a metal-adsorbing functionality to the activated carbon layer 408. “Particles with carboxy groups” as used herein may refer to carboxylic-acid functionalized particles that are dispersed within, without being covalently bonded to, the activated carbon layer 408. Examples include carboxyl-functionalized graphene oxide, or carboxyl-functionalized carbon nanotubes. Accordingly, the activated carbon layer may include less than 1 vol %, preferably less than 0.5 vol % of carboxy-functionalized particles, with volume percent being relative to the total volume of the activated carbon layer.
In one embodiment, a selectivity of the activated carbon layer 408 with respect to the heavy metals (i.e. Pb²⁺, As⁵⁺, Cd²⁺, Hg²⁺, Cr⁶⁺, Cu²⁺, and Zn²⁺) over other dissolved metals, which may be present in water, is at least 85%, preferably at least 90%, more preferably at least 95%.
In addition to the zeolite layer 406, the activated carbon layer 408, and the nanoparticle layer 410, the water filtration apparatus 400 further includes a cotton filter pad 404 located in between the water inlet 412 and the zeolite layer 406.
The cotton filter pad 404 may be at least one layer of a fabric, at least one layer of cotton balls, or a combination thereof. Alternatively, the water filtration apparatus 400 may include a layer of sand, gravel, coarse silica, and/or ceramic particles having reactive coatings (e.g. calcium hypochlorite) to remove suspended solids and sediments. In one embodiment, sand, gravel, coarse silica, and/or ceramic particles are dispersed in the cotton filter pad 404. A primary objective of the cotton filter pad 404 is to remove large particles, suspended solids, and sediments in water.
The water filtration apparatus 400, which includes at least the zeolite layer, the nanoparticle layer, and the activated carbon layer, is capable of absorbing heavy metals (in free form, in the cation form, or in an ionic salt form) selected from the group consisting of Pb²⁺, As⁵⁺, Cd²⁺, Hg²⁺, Cr⁶⁺, Cu²⁺, Zn²⁺, Ba²⁺, Sr²⁺, Ca²⁺, Mg²⁺, Mn²⁺, Fe²⁺, Co²⁺, Ni²⁺, Pd²⁺, Pt²⁺, Se, Ag, Sb, and Tl. Accordingly, a concentration of each of the heavy metals in water that exits the water outlet of the apparatus, per 1 cycle of filtration, is less than 500 ppm, preferably less than 200 ppm, more preferably less than 100 ppm, and even more preferably less than 50 ppm.
Although the preferred configuration of layers is to have the layers in the following order: the zeolite layer, the activated carbon layer, and the nanoparticle layer, in some embodiments, other configurations are possible and may depend on the type of water being filtered. For example, in one embodiment, the nanoparticle layer is disposed between the zeolite layer and the activated carbon layer. In another embodiment, two nanoparticle layers are used, wherein a first nanoparticle layer is disposed between the zeolite layer and the activated carbon layer, whereas a second nanoparticle layer is disposed between the activated carbon layer and the water outlet. The water preferably flows first through the zeolite layer but may alternatively flow first through the activated carbon layer. It is preferable that the water not contact the nanoparticle layer until after first contacting the zeolite layer and the activated carbon layer, in any order. Water of sufficient purity with regard to dissolved organics may flow first through the nanoparticle layer then through the zeolite layer and the activated carbon layer in any order.
Various other embodiments of the water filtration apparatus 400 relate to layers for filtering fluoride-containing compounds and arsenic-containing compounds in water. For example, in one embodiment, a fluoride filter layer is employed to reduce a concentration of fluoride in water, which includes coarse gravels, zeolite particles, synthetic char, and activated alumina. The synthetic char refers to a mixture of calcium phosphonate, calcium carbonate, and activated carbon. In another embodiment, the synthetic char is a mixture of about 80 wt % calcium phosphonate, about 10 wt % calcium carbonate, and about 10 wt % activated carbon, with weight percent being relative to the total weight of the mixture.
In one embodiment, the water filtration apparatus 400 further includes an arsenic absorbent layer that includes arsenic absorbent particles. The arsenic absorbent particles may be particles selected from the group consisting of iron oxide particles, diatomaceous earth particles, activated alumina particles, iron-enhanced activated alumina particles, aluminum oxide particles, manganese oxide particles, aluminum hydroxide particles, manganese hydroxide particles, iron hydroxide particles, zirconium hydroxide particles, zirconium oxide particles, and titanium dioxide particles. The arsenic absorbent layer may further include gravels, activated carbon particles, zeolite particles, silica fume, and/or the synthetic char (as described above).
In another embodiment, the water filtration apparatus further includes a layer having crushed andesite, basalt, zeolite, carbon, and shells to improve the taste of water. For example, in an embodiment, such a layer may be a mixture of crushed andesite and basalt, calcium carbonate, zeolite particles, and sands.
In one embodiment, each of the zeolite layer, the nanoparticle layer, and/or the activated carbon layer of the water filtration apparatus may be disposed on a removable tray or a removable grid, and therefore each layer can be removed from the water filtration apparatus.
According to a second aspect the present disclosure relates to a method of removing at least one heavy metal selected from the group consisting of Pb2+, As5+, Cd2+, Hg2+, Cr6+, Cu2+, and Zn2+ from a water source using the water filtration apparatus. The term “removing” as used herein refers to a condition wherein a concentration of each of the heavy metals is reduced to a threshold value. The threshold value refers to a safe concentration of a heavy metal in water for a specific purpose. For example, a safe arsenic concentration in drinking water is 10 ppb. Accordingly, removing arsenic from a water source means that a concentration of arsenic in the water source is reduced to 10 ppb or less.
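The "removing" criterion described here is simply a threshold comparison. A minimal sketch follows; the 10 ppb arsenic limit comes from the text, while the table structure and function name are illustrative.

```python
# Threshold-based "removal" check. The arsenic drinking-water limit
# (10 ppb) is stated in the text; other purpose-specific limits could
# be added to the table as needed.
SAFE_LIMIT_PPB = {"As": 10.0}

def is_removed(metal, concentration_ppb):
    return concentration_ppb <= SAFE_LIMIT_PPB[metal]

print(is_removed("As", 8.0))   # True: meets the drinking-water threshold
print(is_removed("As", 25.0))  # False: needs further filtration
```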
The method involves passing the water source through the zeolite layer, the activated carbon layer, and the nanoparticle layer of the water filtration apparatus. Passing the water source through the zeolite layer, the activated carbon layer, and the nanoparticle layer in one pass may not, in some instances, reduce the concentration of the heavy metals to the threshold value; therefore, in a preferred embodiment, the method further involves collecting filtered water in a reservoir and recycling at least a portion of the filtered water to the water filtration apparatus to obtain more highly purified water. Depending on the threshold value, recycling may be performed only once or multiple times. For example, in one embodiment, the water source may be recycled at least three times, preferably at least four times, more preferably at least five times, to reduce an arsenic content of the water to 10 ppb, preferably to 8 ppb, to be suitable for drinking purposes.
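The recycle loop can be sketched numerically. The per-pass removal fraction below is an assumed illustrative value (the disclosure does not state one); with 60% removal per pass, a 200 ppb arsenic feed drops below the 10 ppb limit after four passes, which happens to be consistent with the "at least four times" guidance above.

```python
# Repeated-pass filtration sketch. removal_per_pass is an ASSUMED
# fraction removed on each pass; it is not given in the text.
def passes_to_threshold(c0_ppb, removal_per_pass, threshold_ppb=10.0, max_passes=100):
    """Count passes through the filter until the concentration
    reaches the threshold; returns (passes, final concentration)."""
    concentration, passes = c0_ppb, 0
    while concentration > threshold_ppb and passes < max_passes:
        concentration *= 1.0 - removal_per_pass
        passes += 1
    return passes, concentration

passes, final_ppb = passes_to_threshold(200.0, 0.60)
print(passes, round(final_ppb, 2))  # 4 passes, ~5.12 ppb remaining
```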
The “water source” may refer to water which has at least one heavy metal (in free form, in cation form, or in an ionic salt form) selected from the group consisting of Pb, As, Cd, Hg, Cr, Cu, Zn, Ba, Sr, Ca, Mg, Mn, Fe, Co, Ni, Pd, Pt, Se, Ag, Sb, and Tl. A concentration of the at least one heavy metal is in the range of 10 ppb to 5,000 ppm, preferably 100 ppb to 1,000 ppm. The water sources may include, but are not limited to, water present in oceans/seas, bays, lakes, rivers, and creeks, as well as underground water resources. The water source may also be a wastewater stream.
Passing the water source through layers of the water filtration apparatus may refer to a process whereby water from the water source is brought into contact with layers of the apparatus, and preferably a pressure is applied to the water so as to force the water through the layers (i.e. the zeolite layer, the activated carbon layer, and the nanoparticle layer) to carry out a reverse osmosis. In the embodiment where the apparatus is vertically oriented, passing may not require an external pressure, and thus the required pressure that forces the water through the layers (i.e. the zeolite layer, the activated carbon layer, and the nanoparticle layer) is provided by gravity. However, in another embodiment, a pressure is applied to the water. The pressure may be a positive pressure on an inlet side of the apparatus (i.e. a side proximal to the water inlet) or a negative pressure (i.e. a vacuum) on a permeate side of the apparatus (i.e. a side proximal to the water outlet). In one embodiment, the positive pressure applied to the water is in a range of 200 kPa to 20 MPa, preferably 1.0 MPa to 15 MPa, more preferably 2.0 MPa to 10 MPa. The positive pressure may be provided by a positive displacement pump, and the negative pressure may be produced by a vacuum pump to increase a water flux. By applying the pressure (i.e. the positive pressure and/or the negative pressure), water permeates through the zeolite layer, the activated carbon layer, and the nanoparticle layer of the water filtration apparatus, and purified water (i.e. permeate) is collected via the water outlet. In one embodiment, a valve coupled to the water outlet can be used to control a flow rate of the purified water.
Water flux typically has a linear relationship to the differential pressure across the layers of the apparatus. In one embodiment, the water flux is in the range of 1-10 kg/m² per minute, preferably 2-5 kg/m² per minute, in the absence of the positive and/or the negative pressure. However, the water flux is in the range of 10-100 kg/m² per minute, preferably 20-50 kg/m² per minute, when a differential pressure in the range of 100-500 psi, preferably 200-500 psi, is applied across the layers (i.e. the zeolite layer, the activated carbon layer, and the nanoparticle layer).
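As a numerical illustration of the linear flux-pressure relationship, one can fit a line through two representative operating points taken as midpoints of the stated ranges (about 3.5 kg/m² per minute at zero applied pressure, about 35 kg/m² per minute at 350 psi). The anchor points and fitted slope are assumptions for illustration, not values from the text.

```python
# Linear flux model: flux(dP) = flux0 + k * dP. The anchor points are
# midpoints of the ranges stated above (assumed, for illustration only).
FLUX0 = 3.5                  # kg/m^2 per minute at dP = 0
K = (35.0 - FLUX0) / 350.0   # kg/m^2 per minute, per psi

def water_flux(dp_psi):
    return FLUX0 + K * dp_psi

print(round(water_flux(0.0), 2))    # 3.5
print(round(water_flux(200.0), 2))  # 21.5
print(round(water_flux(350.0), 2))  # 35.0
```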
According to a third aspect the present disclosure relates to a method of producing a polypeptide-functionalized nanoparticle having a structure of formula (I):
NP-L-PP  (I)
wherein NP is the nanoparticle (as described), L is the linker (as described) that includes a triazole, and PP is the polypeptide (as described) that includes at least two polymers selected from the group consisting of the alkyl-functionalized glutamine polymer, the phenylalanine polymer, and the carboxylic acid-functionalized glutamine polymer.
The polypeptide is first produced by solid phase peptide synthesis, which is known to those skilled in the art. For example, in one embodiment, the peptides are synthesized using an Fmoc-Gly-2-chlorotrityl resin. Accordingly, a specified amount, i.e. less than 5 g, preferably less than 1 g, of the Fmoc-Gly-2-chlorotrityl resin is swollen in a solution of DMF and dichloromethane (the solution preferably has a 1:1 molar ratio of DMF to dichloromethane) for at least 10 minutes, but no more than 20 minutes. Subsequently, at least five equivalents of N-[(9H-fluoren-9-ylmethoxy)carbonyl]-L-alanyl-L-alanine (Fmoc-AA-OH) and at least five equivalents of N,N,N′,N′-tetramethyl-O-(1H-benzotriazol-1-yl)uronium hexafluorophosphate (HBTU) are dissolved in the solution, and at least five equivalents, preferably at least six equivalents, of N,N-diisopropylethylamine are also added to the solution. The solution is stirred for at least 30 minutes, preferably at least 45 minutes, to form the peptides. In one embodiment, the peptides are cleaved from the Fmoc-Gly-2-chlorotrityl resin. Accordingly, the Fmoc-Gly-2-chlorotrityl resin may be treated with trifluoroacetic acid dissolved in CH2Cl2 (at an acid concentration of 3%, preferably 5%) for half an hour, preferably one hour, to cleave the peptides from the resin. After that, said resin may be removed via filtration and the peptides may be separated using an organic solvent (e.g. ethanol), followed by a separation and a purification process.
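The coupling stoichiometry above (at least five equivalents of Fmoc-AA-OH and HBTU, at least six of DIPEA) scales with the amount of resin-bound sites. The sketch below assumes a typical resin loading of 0.5 mmol/g, which is not stated in the disclosure.

```python
# Reagent amounts (in mmol) for the coupling step, relative to resin loading.
# The 0.5 mmol/g loading is an ASSUMED typical value, not from the text.
def coupling_amounts_mmol(resin_g, loading_mmol_per_g=0.5,
                          eq_fmoc_aa=5.0, eq_hbtu=5.0, eq_dipea=6.0):
    sites_mmol = resin_g * loading_mmol_per_g
    return {"Fmoc-AA-OH": sites_mmol * eq_fmoc_aa,
            "HBTU": sites_mmol * eq_hbtu,
            "DIPEA": sites_mmol * eq_dipea}

print(coupling_amounts_mmol(1.0))
# {'Fmoc-AA-OH': 2.5, 'HBTU': 2.5, 'DIPEA': 3.0}
```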
The method involves treating the polypeptide (e.g. the N-terminus) with an azide-containing reagent to form an azido polypeptide compound. In one embodiment, the azide-containing reagent is an acyl azide. In one embodiment, the azide-containing reagent is an azido-alkanoyl halide. In an alternative embodiment, the azide-containing reagent is an azido-alkanoyl chloride. The term “alkanoyl”, as used herein, refers to an alkyl group, preferably having 2 to 18 carbon atoms, that is bound with a double bond to an oxygen atom. Examples of alkanoyl include acetyl, propionyl, butyryl, isobutyryl, pivaloyl, valeryl, hexanoyl, octanoyl, lauroyl, stearoyl. In a preferred embodiment, the azide-containing reagent is 6-azidohexanoyl chloride.
The azido polypeptide compound is a polypeptide compound (as described) which is functionalized with an azide. Azide refers to a linear anion with the formula N3−, which is the conjugate base of hydrazoic acid (HN3).
In one embodiment, treating the polypeptide with the azide-containing reagent may be performed at a temperature in the range of 20-60° C., preferably 20-40° C., and under atmospheric pressure. Furthermore, the polypeptide may be treated in the azide-containing reagent under an inert atmosphere (e.g. in the presence of nitrogen, argon, and/or helium).
The method further involves functionalizing a surface of the nanoparticle (as described previously) with an alkynyl reagent (e.g. propargylamine) to form an alkynyl nanoparticle. The alkynyl reagent (or alkyne reagent) refers to an unsaturated hydrocarbon compound that includes at least one carbon-carbon triple bond in its structure. In one embodiment, the nanoparticle is a silica nanoparticle, and the alkynyl reagent, which includes Si, is bound to the silica nanoparticle via a Si—O—Si bond. For example, the alkynyl reagent may have a halosilane terminus or an alkoxysilane terminus which can be used to bond to the silica nanoparticle. The alkynyl reagent may further include an amide. Functionalizing the nanoparticles with the alkynyl reagent may be carried out as follows: a predetermined amount of silica nanoparticles is mixed in an anhydrous solvent (e.g. toluene and/or DMF) under an inert atmosphere, and the resulting mixture is sonicated for at least 30 min, preferably at least 1 hour. The mixture of the silica nanoparticles in the anhydrous solvent is then placed in an oil bath preset at a temperature in the range of 70-120° C., preferably 80-100° C. The alkynyl reagent (e.g. propargylamine) is added to the mixture and stirred for at least 3 hours, preferably at least 6 hours. Next, the mixture is maintained at a temperature in the range of 70-120° C., preferably 80-100° C., for at least 18 hours, preferably at least 20 hours, more preferably at least 24 hours, after which the alkynyl nanoparticles are formed. The alkynyl nanoparticles may be separated from the solution by centrifugation at a rotational speed of at least 1000 rpm, preferably 3000 rpm, for at least 30 min, preferably at least 1 hour.
The method further involves coupling the azido polypeptide compound to the alkynyl nanoparticle via an azide-alkyne cycloaddition to form the polypeptide-functionalized nanoparticle. The azide-alkyne cycloaddition refers to a cycloaddition reaction of an azide and a terminal or internal alkyne to give a triazole. The azide-alkyne cycloaddition may also be known as "click chemistry" to those skilled in the art. The azide-alkyne cycloaddition may be carried out at a temperature in the range of 0-120° C., preferably 20-100° C., more preferably about 90° C. The azide-alkyne cycloaddition may be carried out in a pH range of 4 to 12. The cycloaddition may be performed in the presence or absence of a catalyst, and the catalyst may preferably be a Ru-containing or a Cu-containing catalyst. For example, in one embodiment, the alkynyl nanoparticle is suspended in a suspension solution comprising an organic solvent (e.g. DMF), diisopropylethylamine, copper iodide, and sodium ascorbate, and then the azido polypeptide compound is slowly added to the suspension solution. The suspension solution may be stirred at room temperature for at least 8 hours, preferably at least 10 hours. Next, the nanoparticles are filtered and washed with an imidazole solution, DMF, water, methanol, piperidine, and/or dichloromethane at least 3 times, preferably at least 5 times. Finally, the nanoparticles may be dried under vacuum.
The examples below are intended to further illustrate protocols for producing the polypeptide-functionalized nanoparticles to be used in the nanoparticle layer of the water filtration apparatus, and are not intended to limit the scope of the claims.
The nanoparticle preparation was based on four main components. The first part (as shown in FIG. 1, No. 112) is responsible for chelating all types of heavy metals. The design is based on a poly-carboxylate, which has a high tendency to coordinate around the metal from four positions to form a tetradentate system. The second part is a poly-aromatic (as shown in FIG. 1, No. 110). This part of the molecule performs well at reacting with aromatic compounds, including aromatic pesticides, by forming π-π interactions. The third part is a poly-hydrocarbon (as shown in FIG. 1, No. 108) that provides a high tendency to form dipole-dipole interactions with aliphatic hydrocarbons and oily materials.
These nanoparticle precursors were prepared by solid phase peptide synthesis. The poly-carboxylate [Mohan, D., Pittman, C. U., Jr. 2007, Arsenic removal from water/wastewater using adsorbents—A review, Journal of Hazardous Materials, 142, 1-53] was linked to the glutamine by click chemistry; the sequence was then elongated by phenylalanine coupling, and then glutamine modified with an octadecyl group in the side chain was coupled. After this, the N-terminus was capped with 6-azidohexanoyl chloride. The small peptide was then ready to be coupled to surface-modified silica. The coupling of this polymer with the silica was performed using click chemistry to form a peptide-functionalized silica. The structure of the peptide-functionalized silica is shown in FIG. 1.
The peptide was characterized before the coupling with silica using HPLC. The HPLC chromatogram revealed a high purity of the peptide (as shown in FIG. 2A). In addition, a mass spectrum of the peptide depicted the estimated molecular weight of the purified peptide before being coupled to the silica nanoparticle (as shown in FIG. 2B). RP-UPLC analysis of the peptide using a restic C18 column (2.1 mm×100 mm, 2.1 μm) with a linear gradient of 10-90% B over 20 min produced a single peak with a retention time of 11.75 min (solvent A was water/0.1% TFA and solvent B was acetonitrile/0.1% TFA); C318H395N61O60+, calc. [M+H]+=6027.9732, observed [M+H]+=6027.9697.
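The quoted calculated mass can be sanity-checked from the elemental composition, taking it to be C318H395N61O60 (as reconstructed here from the quoted formula and its subscripts) and using standard monoisotopic atomic masses:

```python
# Monoisotopic mass check for C318 H395 N61 O60 against the quoted
# calculated [M+H]+ value of 6027.9732. Atomic masses are standard
# monoisotopic values.
MONO_MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}
COMPOSITION = {"C": 318, "H": 395, "N": 61, "O": 60}

mass = sum(MONO_MASS[el] * count for el, count in COMPOSITION.items())
print(round(mass, 4))  # 6027.9733, within ~0.0001 of the quoted 6027.9732
```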
The EM image analysis of the sample of modified silica (when diluted in ethanol) demonstrated the presence of spherical nanoparticles. The EM images indicated spherical nanoparticles of 15-20 nm in diameter. Black spots in the center of the spherical nanoparticles indicated the silica nanoparticles, while the external side presented the peptide.
The filter design is based on a layer of cotton to protect the other layers and to filter dissolved and suspended solids and sediments. The second layer is packed with zeolite and is configured to remove and/or decrease the concentration of heavy metals [Bailey, S. E., Olin, T. J., Bricka, R. M. & Adrian, D. D. 1999, A review of potentially low-cost sorbents for heavy metals, Water Research, 33, 2469-2479]. The third layer includes activated carbon to remove organic and coloring materials [Mohan, D., Pittman, C. U., Jr. 2007, Arsenic removal from water/wastewater using adsorbents—A review, Journal of Hazardous Materials, 142, 1-53; Mohan, D., Pittman, C. U., Jr. 2006, Activated carbons and low cost adsorbents for remediation of tri- and hexavalent chromium from water, Journal of Hazardous Materials, 137, 762-811]. Finally, the fourth layer (i.e. the nanoparticle layer) is responsible for chelating all types of heavy metals, organic materials, and oily materials [Wang, J., Zheng, S., Shao, Y., Liu, J., Xu, Z. & Zhu, D. 2010, Amino-functionalized Fe3O4@SiO2 core-shell magnetic nanomaterial as a novel adsorbent for aqueous heavy metals removal, Journal of Colloid and Interface Science, 349, 293-299; Zhou, D., Li, Y., Hall, E. A. H., Abell, C. & Klenerman, D. 2011, A chelating dendritic ligand capped quantum dot: preparation, surface passivation, bioconjugation and specific DNA detection, Nanoscale, 3, 201-211]. The assembly as described prevents any leaching through the aforementioned layers. The water fraction is then collected after passing the wastewater through the layers and analyzed by ICP.
Water samples were collected from different locations within 10 km of the industrial area. Water samples were quantified by ICP-MS. The average heavy metal concentration prior to any filtration or other treatment is shown in FIG. 5.
The industrial water was then passed through the column packed with three layers: zeolite (10 g), activated carbon (10 g), and silica-based nanoparticles (10 g). Water fractions were then collected and samples were quantified by ICP-MS. No heavy metals were observed by ICP-MS after passing more than 50 L of industrial water, demonstrating the high capacity of the designed column to adsorb heavy metals.
TABLE 1. Absorbance capacity of nanoparticles based on silica gel for heavy metals (mg/g)

| Pb2+ | Cr6+ | Zn2+ | As5+ | Cd2+ | Cu2+ | Hg2+ |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| 1450 | 235  | 905  | 360  | 875  | 530  | 1675 |
The nanoparticles based on silica gel show very promising absorbance for removing heavy metals and many other organic compounds, such as pesticides, oily materials, and waste from detergents.
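Scaled to the 10 g of silica-based nanoparticles packed in the test column, the Table 1 capacities give a rough upper bound on how much of each metal the nanoparticle layer could hold. This ignores competition between metals and kinetic effects; the helper below is illustrative only.

```python
# Upper-bound load (mg) of each metal for the 10 g nanoparticle layer,
# using the Table 1 capacities (mg of metal per g of adsorbent).
CAPACITY_MG_PER_G = {"Pb": 1450, "Cr": 235, "Zn": 905, "As": 360,
                     "Cd": 875, "Cu": 530, "Hg": 1675}

def layer_capacity_mg(metal, adsorbent_g=10.0):
    return CAPACITY_MG_PER_G[metal] * adsorbent_g

print(layer_capacity_mg("Pb"))  # 14500.0 mg, i.e. about 14.5 g of lead
print(layer_capacity_mg("Cr"))  # 2350.0 mg
```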
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
FIG. 1
represents an exemplary structure of a polypeptide-functionalized nanoparticle attached to a silica nanoparticle.
FIG. 2A
represents an HPLC chromatogram of the polypeptide-functionalized nanoparticle.
FIG. 2B
represents a mass spectrum of the polypeptide-functionalized nanoparticle.
FIG. 3A
is a TEM (transmission electron microscopy) micrograph of the polypeptide-functionalized nanoparticles.
FIG. 3B
is a magnified image of an individual polypeptide-functionalized nanoparticle. A core of the polypeptide-functionalized nanoparticle is shown as a black core.
FIG. 4A
illustrates a water filtration apparatus.
FIG. 4B
is a magnified image of a cotton filter pad of the water filtration apparatus.
FIG. 4C
is a magnified image of a zeolite layer of the water filtration apparatus.
FIG. 4D
is a magnified image of an activated carbon layer of the water filtration apparatus.
FIG. 4E
is a magnified image of a nanoparticle layer of the water filtration apparatus.
FIG. 4F
is a representation of a vertically oriented water filtration apparatus.
FIG. 4G
is a representation of a horizontally oriented water filtration apparatus.
FIG. 5
represents heavy metal contents of three water samples from three distinct areas.
Q:
traverse list in scala and group all the first elements and second elements
This might be a pretty simple question. I have a list named "List1" that contains a list of integer pairs as below.
List1 = List((1,2), (3,4), (9,8), (9,10))
Output should be:
r1 = (1,3,9,9) //List((1,2), (3,4), (9,8), (9,10))
r2 = (2,4,8,10) //List((1,2), (3,4), (9,8), (9,10))
Array r1 (Array[Int]) should contain all the first integers of each pair in the list.
Array r2 (Array[Int]) should contain all the second integers of each pair.
A:
Just use unzip:
scala> List((1,2), (3,4), (9,8), (9,10)).unzip
res0: (List[Int], List[Int]) = (List(1, 3, 9, 9),List(2, 4, 8, 10))
There's been a debate rippling in the world of astrophysics like a cloud of dust and gas swirling through a nebula: What exactly is the relationship between the supermassive black holes at the center of elliptical galaxies and the halo of dark matter that cocoons them?
New research from the Harvard-Smithsonian Center for Astrophysics might now put that debate to rest.
In this controversy, previous theories said that the size of an elliptical galaxy's supermassive black hole was linked directly to the total mass of the stars it contains. More recent research, though, has shown a strong relationship between the black hole and the dark matter that surrounds the galaxy.
To find where the truth lies, the research team studied 3,000 elliptical galaxies -- football-shaped collections of stars, planets, gas and dust that form when two galaxies merge. The researchers used the motions of stars to weigh each galaxy's central black hole. Then they used X-ray measurements of the hot gas surrounding the galaxies to weigh the dark matter halo -- the larger the halo, the more hot gas the galaxy can retain.
What they found is that a stronger relationship does indeed exist between the black holes and the halos than between the black holes and stars.
"This connection is likely to be related to how elliptical galaxies grow," the Harvard-Smithsonian Center said this week about the research. "An elliptical galaxy is formed when smaller galaxies merge, their stars and dark matter mingling and mixing together. Because the dark matter outweighs everything else, it molds the newly formed elliptical galaxy and guides the growth of the central black hole."
Akos Bogdan, lead author of the research that has been accepted for publication in the Astrophysical Journal, said that the dark matter forms a kind of blueprint that the galaxies can follow as they merge, and it determines the size of the new central black hole.
Although scientists still aren't sure about exactly what dark matter is, they can theorize about its existence and action by monitoring its gravitational effects on other objects in the universe.
"In our universe, dark matter outweighs normal matter -- the everyday stuff we see all around us -- by a factor of 6 to 1," the report said. "We know dark matter exists only from its gravitational effects. It holds together galaxies and galaxy clusters. Every galaxy is surrounded by a halo of dark matter that weighs as much as a trillion suns and extends for hundreds of thousands of light-years."
This research takes us a tiny step closer to understanding how this mysterious force that we know little about is a key player in shaping the very fabric of the universe.
Special Containment Procedures: SCP-2767, in its tins, is to be kept within a storage locker at Site 15. SCP-2767 is not to be painted onto the walls of any number of rooms simultaneously without permission of the SCP-2767 head of research. Any wall treated with SCP-2767 must be cleared completely within 48 hours of application.
Description: SCP-2767 is an unbranded type of 'Prussian Blue' paint contained across 8 unlabelled 1-litre tins. The tins which contain the paint display no anomalous properties, and as such the quantity of paint is finite; any paint produced by colour matching to SCP-2767 is non-anomalous.
When a subject is placed within a room painted with SCP-2767, they begin experiencing feelings of claustrophobia, specifically that of the walls being extremely close. This effect occurs regardless of the actual distance between the subject and the walls. During testing, subjects are still able to converse and describe their feelings; however, they are physically incapable of moving from what they perceive as being a tightly enclosed space. The dimensions of this space are in almost all cases (see Incident Report 2767-A) equal to the subject's arm span, taken from the point where the effect initially manifests.
The anomalous effects of SCP-2767 manifest between 20 minutes and 6 hours after the subject has entered the room. At this point, subjects perceive the walls as shrinking to the aforementioned dimensions instantly. In 90% of cases, subjects with moderate-to-severe diagnosed claustrophobia experience effects within the first hour. 80% of subjects with moderate-to-severe agoraphobia, however, take from three to six hours to be affected. If psychological profiling indicates neither claustro- nor agoraphobia, the effect will initiate at any point between the aforementioned bounds. The effect is not present when more than one person is in the enclosed room.
We know that the diagonals of a parallelogram bisect each other.
In this case, suppose M is the point of intersection of the diagonals AC and BD, then M is the mid point AC as well as BD.
So using the mid point formula, we get: M has coordinates ((q+8)/2, (p+3)/2) as the mid point of BD,
and also
M has coordinates ((9+6)/2, (4+1)/2) = (15/2, 5/2) as the mid point of AC.
So equating both coordinates, we get,
(q+8)/2 = 15/2
and
(p+3)/2 = 5/2
So, q + 8 = 15, giving q = 7, and p + 3 = 5, giving p = 2.
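The result can be verified numerically with the midpoint formula (a quick sketch; q = 7 and p = 2 are the solved values):

```python
# Check: the diagonals of a parallelogram bisect each other, so the
# midpoint of AC must equal the midpoint of BD.
def midpoint(p1, p2):
    return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)

A, C = (9, 4), (6, 1)
q, p = 7, 2                 # solved values
B, D = (q, p), (8, 3)

print(midpoint(A, C))  # (7.5, 2.5)
print(midpoint(B, D))  # (7.5, 2.5), so the midpoints coincide
```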
The test involves running continuously between two points that are 20 m apart from side to side.
These runs are synchronized with a beep which plays at set intervals. As the test proceeds, the interval between successive beeps decreases, forcing the athlete to increase their speed over the course of the test, until it is impossible to keep in sync with the beeps (or, on rare occasions, until the athlete completes the test).
If the person being tested does not make the next interval then the most recent level they completed is their final score. The recording is structured into 21 'levels', each of which lasts around 62 seconds. Usually, the interval of beeps is calculated as requiring a speed at the start of 8.5 km/h, increasing by 0.5 km/h with each level thereafter.
The highest level attained before failing to keep up is recorded as the score for that test.
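Using the timing rule described above (20 m shuttles, a starting speed of 8.5 km/h, and +0.5 km/h per level), the beep interval at each level follows directly; level numbering starting at 1 is assumed here:

```python
# Beep interval per level: time to cover 20 m at that level's speed.
def beep_interval_s(level, distance_m=20.0):
    speed_kmh = 8.5 + 0.5 * (level - 1)   # from the description above
    speed_ms = speed_kmh / 3.6            # km/h -> m/s
    return distance_m / speed_ms

print(round(beep_interval_s(1), 2))   # 8.47 s at 8.5 km/h
print(round(beep_interval_s(21), 2))  # 3.89 s at 18.5 km/h
```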
Features
1) Results can be emailed from the App.
2) Results are in csv format and can be post processed using Microsoft Excel
3) Built-in graphing feature.
The main objective of this website is to provide quality study material to all students (from 1st to 12th class of any board) irrespective of their background as our motto is “Education for Everyone”.Creative Essay writing is now recognized as one of the most beneficial activity for kids for their overall personality development.Good friends are very rare in these fast changing days. She is a wonderful person to go to if you need advice or some kind of guidance in life. I hardly ever spoke up or let myself be recognized.
It allowed me to open up and get to know a lot of people I had once overlooked as potential friends. We went to all the high school football games, ran track, and went to the movies together. Friendship will end if the friends are not kind and tolerant to each other.
We had a dance class with each other and the teacher wanted to split us up in case of any fighting that may have occurred. Sunita and I even worked for a while together at Applebee's. The aim of friendship must be to serve more than to be served. This process of give and take should be a selfless process. A false friend always tries to take advantage of friendship. I have many friends, but I like Rajeev most.
It’s like she knew all the answers for all my questions.
Whether it had to do with boys, school work, sports, or even just things running through my mind, she always solved my problems.
She showed me how to be more outgoing and to voice my opinion when it was necessary.
When I become comfortable around a group of people, I usually tend to talk a lot more. I like to see people have a good time, and when others are laughing, I usually am too. Secondly, a friend should not always find fault with his friend. I know that I can call her anytime and she will be there for me. A friend is a person whom one likes, respects and meets often; friendship is the feeling that joins the hearts of two friends. To this day, she is my best friend in the entire world. I knew all my secrets would be safe with her and that nobody would find out unless I told them. She would assure me that she would not tell anyone if I didn't want her to. Right from our early childhood, we played together and enjoyed each other's company. He is most obedient to his parents and does not like to make them angry in any case. I always had a hard time learning how to trust people. The few in whom we discover an affinity, we make friends with, and they carve out an abiding place in our hearts. He is the only child of his parents and hence the apple of their eye. All the teachers are proud of his abilities, as there is not a single question which he cannot answer or a single sum which he is not able to solve. He won many prizes in debates, competitions and quiz programmes. She has taught me how to trust people, how to solve my problems, and how to open up as an individual. She made me go up to random people and just strike up a conversation with them. Sometimes that was hard and a little embarrassing, but in the end it ended up helping me. I could always go to her and let her know if I had done something wrong. The one person I found that I could trust with everything was my sister Sunita.
3D Cube 2048 is a browser-based puzzle game in the genre of 2048. The goal of the game is to complete all the levels by adding the same cubes from the proposed ones and transforming them.
How to play
How to Play: 1. Tap on the screen and move the cube. 2. Stop tapping the screen to release the cube. 3. Connect the cubes together to create the one that will lead you to victory. The game also has hints for convenience. Have a nice game!
Why you Need to Include Images on Social Media
A picture is worth a thousand words. A quote worth a thousand articles on this very topic. There is no better way to describe the importance of visual imagery when trying to convey anything from an event to a simple meal. Visual imagery is the very heart and soul of entertainment and it is easy to see why.
People like to be able to see clearly what the author of a work had in mind when they created it. While an eloquent description of a meal can arouse the senses in many ways, it will always fail to arouse the sense of sight. | https://www.seoshark.com.au/blog/page/44/ |
Long before the crack of dawn two ambitious young athletes rolled out of bed, ready for their big challenge!
Here they are with their Good Will “jackets” – looking pretty chipper after 3 days of rest and good food. The shirts served their purpose and were unceremoniously discarded before the second mile. It does help you keep up with each other in the crowd!
The race was hard; there was also a stiff wind blowing, and fighting it took a lot of energy. It made us glad that we weren’t pushing Lance in the stroller; it would have been like heaving a sailboat upwind! The weather turned out warm for Dec (75º), which we enjoyed, but the heat really took its toll on some of the runners. We made sure we stayed properly hydrated by making use of the aid stations along the route, and now we both have an aversion to yellow gatorade.
Our run went pretty well overall. I had a rough time around mile 19 but Brian cheered me on…that and the help of some marathon goo (yuck). We walked and jogged for a few more miles while Brian tried to distract me by any means he could think of – sweet guy. When we hit the last mile I knew I had to run to the end and so we did. I quoted scripture in my mind to keep the focus off of my legs and we crossed the finish line right at our time goal! I was happy, and beat, and wanted to cry and was proud of Brian all at the same time.
Not quite so chipper but still smiling after 5:04 hours of running! The new marathon alumni pose at the finish line.
Ice packs all around! | http://www.cahills.us/2008/12/15/a-walk-in-the-park/ |
15. Stay Outside of Your Comfort Zone
The concept of a comfort zone is a useful one. Your comfort zone encompasses everything you already know, can do readily, people and places you know, and such. When we are within our comfort zone we are, as it goes, comfortable. It is a nice place to be, except when you are trying to develop.
Outside of your comfort zone is everything you do not feel comfortable doing, techniques you do not happily use, aesthetics you do not like, as well as everything you do not already know, understand and have integrated into yourself.
If you are interested in growth and development, whether artistically, photographically, in business or in your personal life, you need to step outside of your comfort zone. By stepping outside our comfort zone we expose ourselves to new things, new ideas and new experiences and eventually we make them our own. This expands our comfort zone, which is what growth is about. You have pushed your own boundaries, expanding them in the process.
Now there is an interesting thing about comfort zones. Step too far outside of it and you risk the 'Oh my God, I can't deal with this' response. How far varies from person to person and time and situation and so you must know yourself. The OMG reaction sets you back, people retreat back into their comfort zone and sometimes don't step out again for some time.
So what you want to do is get yourself out of your comfort zone as far as you can without provoking the OMG response. After you've been out there for a while you will find your zone has expanded and you can push further.
Photographically and artistically, our comfort zone includes the tools we regularly use, the techniques we use, the subject matter we like to shoot and the places we shoot. Pushing our envelope can be trying a new lens or filter, a new processing technique, using different software, shooting in a new way, tackling new subject matter (always been scared of shooting people, for example), trying some new ideas on composition or image design, etc.
Note that just because you bring an artistic style or technique within your comfort zone this doesn't mean that you will necessarily like it or want to use it in your own work. What it means is that the blanket fear and hate of it will tone down and you'll see it as a valid approach, just perhaps not for you, at least at the present. Always leave open the possibility of change. | http://www.technomagickal.com/photo-wisdom/p22.html
Kar: So, I figured it out, why hot dogs come in packages of ten and hot dog buns come in packages of eight. See, the thing is, life doesn't always work out according to plan so be happy with what you've got, because you can always get a hot dog.
When the monk is first teaching Kar to fight, the monk has Kar in a hold where Kar's arm is on the monk's shoulder, in the back and forth shots, Kar is holding the monk's shoulder and has his hand stretched out over the monk's shoulder. | https://www.moviemistakes.com/film3162/deliberate |
Money manager is a tool designed to help you manage your daily expenses. It uses electromagnetic induction to sense the total amount of money in your wallet, then displays the value on the LCD panel. All you have to do is clip it to your wallet and set an initial value according to your daily consumption level. Besides, the green indicator bar intuitively tells you the percentage of the remaining money. | https://ifworlddesignguide.com/entry/59919-money-manager
It is impossible to tell where the theatre starts and ends in Matthew Lenton’s brilliant, dense and lethally of-the-moment The Destroyed Room, at the Traverse to Saturday before touring to London “and beyond”.
It is a piece in which Lenton takes his fascination with voyeurism and observation up into even greater flights of detail, a script that emerges out of natural debate, a play that borrows from a photograph that borrows from a painting that, itself, borrows from a piece of theatre.
It is true that three actors sit around, speaking in their own voices in a debate that is being recorded and screened on stage, live. Elicia Daly, Barnaby Power and Pauline Goldsmith touch on life and dependants, fear and bravery, the desire for action and the inability to act.
At some point they stop being the actors and it becomes a performance – it is unclear when. Their moment starts with a question, a secret, new question every night, with which one of their number opens the debate. For a few minutes there is true spontaneity, audience interaction even, until the script slips into action, unnoticed.
It all takes its inspiration from Jeff Wall’s 1978 photograph of a destroyed room, itself a reference to Eugene Delacroix’s painting of 1827, Death of Sardanapalus. Inspired by Byron’s play of the same name, the painting shows the death of the Assyrian king Sardanapalus, who ordered the killing of his servants, the destruction of his palace and his own death after hearing of the defeat of his armies in battle.
This seems to be a crucial factor in the understanding of the whole piece, so much so that one of the camera operators delivers a little speech about Jeff Wall’s photo, with images of the two pictures, before welcoming the actors to the stage.
It makes sense of the subjects to which the debate turns: Isis and the refugee crisis in Europe, as the debate becomes all the more heated and begins to break down. And when it finally does so, long-term collaborator Kai Fisher’s beautifully dark set and lighting is used to throw a forensic flashlight on the room and ask: What happened here?
There is a second major frame of reference going on: to do with the way we see the atrocities of Isis – the beheadings and filmed deaths – and the tragedies of the fleeing refugees – the children’s bodies washed up on the beaches of the Mediterranean.
And in this, Lenton is following a line that was seen in his 2009 play Interiors, which was viewed through a glass with a voice describing the action, and then in the 2012 EIF production of Wonderland, in which he examined the nature of the voyeur in pornography.
The subject here is what you might call death porn. And the debate, realistic as it is, never really comes to a conclusion but wanders back and forth, contradicting itself and in the process revealing much more about our relationship with and fascination with filmed death than any formal decision-making debate ever could.
There is a trio of very fine performances from Daly, Power and Goldsmith and solid work from all the creative team – Mark Melville on sound, Jessica Brettle on costume, Fisher, Lenton, the on-stage camera technicians and the company who created the script through improvisation.
But this isn’t just about the look and technical elements of the show, even if they do delight, it is about getting under the skin of an audience, of niggling around and asking where they really stand.
Running time 1 hour 15 minutes (no interval)
Traverse Theatre, 10 Cambridge Street, EH1 2ED
Wednesday 9 – Saturday 12 March 2016
Evenings: 7.30pm.
Tickets and details: http://www.traverse.co.uk/
Battersea Arts Centre, Lavender Hill, London SW11 5TN
Wednesday 27 April – Saturday 14 May 2016
Evenings daily (not Sun & Mon 1 May): 7.30pm. | http://www.alledinburghtheatre.com/the-destroyed-room-vanishing-point-tour-2016-review/ |
Is there a Place for Artificial Intelligence in Education?
July 26, 2019
I don’t think it’s hyperbole to state the future of the planet rests on the shoulders of an educated population. The late Kofi Annan, who served as Secretary General of the United Nations, once stated, “Knowledge is power. Information is liberating. Education is the premise of progress, in every society, in every family.” As the world becomes more technologically advanced, education needs to keep pace. One would think the educational community would welcome new technologies to help strengthen academic endeavors. Nevertheless, there are discussions about what role technology should play in the educational arena. Sebastien Turbot (@sturbot), Executive Director at New Cities Foundation, writes, “With children increasingly using tablets and coding becoming part of national curricula around the world, technology is becoming an integral part of classrooms, just like chalk and blackboards.” At the same time, Nellie Bowles (@NellieBowles) writes, “A wariness that has been slowly brewing is turning into a regionwide consensus: The benefits of screens as a learning tool are overblown, and the risks for addiction and stunting development seem high.” The value of computers in the hands of students is not the only question being discussed within educational circles. People are also pondering what role technologies like artificial intelligence (AI) should play.
AI in the classroom
Turbot reports, “[A study published] by Pearson deciphers how artificial intelligence will positively transform education in the coming years. Per the report’s authors, ‘The future offers the potential of even greater tools and supports. Imagine lifelong learning companions powered by AI that can accompany and support individual learners throughout their studies — in and beyond school — or new forms of assessment that measure learning while it is taking place, shaping the learning experience in real time.’” Most articles I’ve read believe AI will play an important and positive role in education in the years ahead. Matthew Lynch, an educational consultant and owner of Lynch Consulting Group, LLC, writes, “Artificial intelligence has already transformed the face of learning in a major way. It’s continuously opening new doors that lead to increased productivity, more successful students, and a lower cost for the public-school system.” He admits the application of AI in education remains in its nascent stage, but he suggests three ways AI can help advance education. They are:
Helping teachers take a break from menial tasks. “The fact is,” Lynch writes, “that teachers spend a large percentage of their off-hours attempting to keep up with the piles of student papers. Grades have to be submitted in order to determine how students are keeping up with the coursework. Generating new lessons and materials often falls by the wayside during the struggle to stay on top of this never-ending chore. Artificial intelligence can take over some of these grading tasks and simplify the process. Even essays may soon be graded using artificial intelligence. This can save valuable time and give educators the space they need to create better lessons for their classroom.” Having AI take over grading tasks of subjective material is controversial because humans remain a better judge of creativity than machines. Nevertheless, Lynch’s point that teachers need a break is an important one. Teagan Carlson, a former high school language arts teacher, reports, “A recent survey conducted by the Nevada State Education Association revealed that half of the teachers polled are considering leaving the profession in part because they’re overworked. Three-quarters of the teachers polled responded that they don’t have enough time to prepare for their jobs during the workday and 21 percent of them spend more than 15 hours outside of their workday to prep.” Finding ways for AI to relieve teachers of some activities could be helpful.
Helping set more realistic student goals. “One of the major assets of artificial intelligence,” Lynch writes, “is the ability to personalize a student’s academic needs. Every child will learn at a different pace and in a different way. Artificial intelligence can easily encompass all of these learning styles and provide a customized plan for each student. Along with this plan, the programs can help to craft more specific and realistic goals for students.” This approach is often referred to as blended learning. Elizabeth Mann Levesque (@elizkmann), a former Nonresident Fellow at the Brookings Institution, explains, “Blended learning, defined as ‘the strategic integration of in-person learning with technology to enable real-time data use, personalized instruction, and mastery-based progression,’ uses emerging technology to help teachers personalize education for individual students. This approach is generally known as personalized learning. Studies have found that personalized learning is a promising approach, although implementation challenges remain.”
Identifying curriculum gaps. According to Lynch, “Most teachers have a difficult time uncovering which areas of their lessons aren’t comprehensive enough. Oftentimes, students walk away from the lessons with many asking the exact same questions again and again. It’s clear that some aspects of a given course may need to change to accommodate student learning. AI could help educators to spot these gaps more clearly.”
Jenny Anderson (@jandersonQZ) points out another way AI can be used to improve teaching: Evaluating teaching methods. She explains, “There is a long-standing, red-hot debate in educational circles about the most effective way to teach kids. Some favor more traditional teacher-directed methods, with the teacher presenting materials and responding to questions about it. Others advocate for inquiry-based learning — where students drive their own learning through discovery and exploration, working with peers and developing their own ideas — arguing it results in deeper, and more meaningful learning. … McKinsey applied machine learning to the world’s largest student database to try and come up with a more scientific answer. The bottom line: A mixture of the two methods is best, but between the two, teacher-directed came out stronger.”
Concluding thoughts
With AI permeating so many aspects of our lives, it’s probably inevitable that AI will penetrate the educational sector as well. Thomas Arnett, an author at the Christensen Institute, observes, “Rather than seeing technological progress as a threat, teachers and education leaders should take advantage of the many ways technology can enhance their work.” That won’t be as easy as Arnett makes it sound. Kristin Houser explains, “Convincing parents, teachers, and students to embrace AI in education will be the real challenge. Some may be biased against the technology for fear it will leave them unemployed, while others may have a hard time shaking thoughts of the doomsday scenarios posited by tech luminaries such as Elon Musk and Stephen Hawking.” The fact remains, AI can help personalize education in ways teachers cannot simply because teachers have a finite amount of time to deal individually with students. An AI system has no such time constraints. Turbot admits, “AI and ed-tech are not a panacea for systemic challenges. AI may not end up being the next giant leap in education and they will of course bring their own set of problems and disadvantages. But let’s not ignore their inherent strengths that could help address the glaring gaps in teaching and learning, that we have been struggling to fill for decades.”
Footnotes
Sebastien Turbot, “Artificial Intelligence In Education: Don’t Ignore It, Harness It!” Forbes, 22 August 2017.
Nellie Bowles, “A Dark Consensus About Screens and Kids Begins to Emerge in Silicon Valley,” The New York Times, 26 October 2018. For a further discussion, see: Stephen DeAngelis, “Mobile Technology in Education: Is the Devil in the Screen rather than the Details?” Enterra Insights, 11 January 2019.
Matthew Lynch, “How Artificial Intelligence Is Already Transforming Education,” Education Week, 14 March 2018.
Teagan Carlson, “Here’s Why Teachers Adopt New Tech — and Why They Don’t,” EdSurge, 29 May 2019.
Elizabeth Mann Levesque, “The role of AI in education and the changing US workforce,” The Brookings Institution, 18 October 2018.
Jenny Anderson, “McKinsey used machine learning to discover the best way to teach science,” Quartz, 5 October 2017.
Turbot, op. cit.
Kristin Houser, “The Solution to Our Education Crisis Might be AI,” Futurism, 11 December 2017. | https://www.enterrasolutions.com/blog/is-there-a-place-for-artificial-intelligence-in-education/ |
Assessing risk of hospital readmissions for improving medical practice.
We compare statistical approaches for predicting the likelihood that individual patients will require readmission to hospital within 30 days of their discharge and for setting quality-control standards in that regard. Logistic regression, neural networks and decision trees are found to have comparable discriminating power when applied to cases that were not used to calibrate the respective models. Significant factors for predicting likelihood of readmission are the patient's medical condition upon admission and discharge, length (days) of the hospital visit, care rendered during the hospital stay, size and role of the medical facility, the type of medical insurance, and the environment into which the patient is discharged. Separately constructed models for major medical specialties (Surgery/Gynecology, Cardiorespiratory, Cardiovascular, Neurology, and Medicine) can improve the ability to identify high-risk patients for possible intervention, while consolidated models (with indicator variables for the specialties) can serve well for assessing overall quality of care.
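The abstract does not include code, but the kind of comparison it describes can be illustrated with a toy sketch: a from-scratch logistic regression trained on synthetic "patients" and scored on a held-out set. The features, coefficients, and data below are all invented for illustration and bear no relation to the study's actual models or data.

```python
import math
import random

random.seed(42)

def synth_patient():
    # Invented features: length of stay (days) and a 0-1 severity score.
    # Readmission risk rises with both, via an assumed logistic model.
    los = random.uniform(1, 14)
    severity = random.random()
    true_logit = 0.3 * los + 2.0 * severity - 4.0
    readmitted = 1 if random.random() < 1 / (1 + math.exp(-true_logit)) else 0
    return (los, severity), readmitted

data = [synth_patient() for _ in range(600)]
train, test = data[:450], data[450:]   # hold out cases not used to calibrate

# Plain stochastic-gradient logistic regression.
w, b, lr = [0.0, 0.0], 0.0, 0.01
for _ in range(200):
    for (x1, x2), y in train:
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    return 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))

# Discriminating power assessed on cases the model never saw.
acc = sum((predict(*x) > 0.5) == bool(y) for x, y in test) / len(test)
print(f"held-out accuracy: {acc:.2f}")
```

The same held-out split would be used to compare this baseline against a neural network or decision tree, as the paper does.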
It’s no secret that I don’t like seeing food go to waste. Last week when I made the Rosemary Roasted Potatoes, I had only used half of a 5 lb. bag of red potatoes. Since the Rosemary Roasted Potatoes were so delicious and they disappeared so quickly, I decided to roast the rest but with some different seasonings. A quick look in my refrigerator (for more unused “leftover” ingredients) had me set on parmesan and garlic roasted potatoes. I must say, with their crispy parmesan crust, these potatoes disappeared even faster than the last batch!
Total Recipe cost: $2.99
Servings Per Recipe: 5
Cost per serving: $0.60
Prep time: 10 min. Cook time: 40 min. Total: 50 min.
| Amount   | Ingredient       | Cost  |
|----------|------------------|-------|
| 2.5 lbs. | red potatoes     | $1.74 |
| 1/4 cup  | parsley, chopped | $0.25 |
| 2 Tbsp   | olive oil        | $0.21 |
| 1/2 cup  | grated parmesan  | $0.64 |
| 1/2 tsp  | garlic powder    | $0.05 |
| to taste | salt and pepper  | nada  |
|          | TOTAL            | $2.99 |
STEP 1: Preheat your oven to 400 degrees. Wash the potatoes well and cut into 1 inch cubes. Try to make the size of the cubes as uniform as possible so they cook at the same rate.
STEP 2: Either in a large bowl or on your roasting pan, add the oil, parsley, garlic, parmesan, salt and pepper to the potatoes. Toss to coat well. Spread the potatoes out evenly in a single layer on the pan. If the potatoes are crowded or sit in layers they will tend to steam rather than roast, preventing the edges from crisping.
STEP 3: Put the pan in the oven to roast. Stir the potatoes after about 20 minutes to expose more edges to the heat. Be sure to spread the potatoes out in a single layer after stirring. The potatoes are done when they are golden brown and crisp on the edges and corners. Mine took about 45 minutes to get to this point.
NOTE: As I was eating these yummy parmesan potatoes, I thought about how great they would be with a fried egg and ketchup for breakfast! These would also make great oven fries, just cut the potatoes into sticks instead of cubes.
Related Posts
Reader Interactions
13 Comments
-
These were very good and easy to make. I also appreciate the exact instructions so I got perfect potatoes the first try.
-
Really yummy! Used them with potatoes from my organic box. Sadly, none left over for breakfast-maybe next time. Thank you for the recipe!
-
So, cooking spray would have been a great use of a few cents, but, alas, I did not use any. Delicious, nonetheless!
-
same anon that made your drop biscuit recipe! these were FANTASTIC! i added in a little chopped fresh dill and they came out amazingly crunchy and full of flavor.
-
Ashlee – Yes, you can use dried or try another dried herb like rosemary or oregano.
-
In an effort to avoid going to the grocery store, could I use dried parsley?
-
Did you use the parmesan in the can?
-
I made these a little while ago and they were delicious. So easy to make and so good – I just dumped them all into a huge bowl and kept snacking on them all day!
-
we took our leftovers and put them in a cheese and sausage omelet and it was amazing!
-
I had a BBQ this past weekend and tested out this recipe. It was amazing and everyone loved it. I LOVE potatoes so I’m definitely going to do these ones again!
-
As far as preparing the potatoes, using a zip-top bag is super easy because you can easily toss the potatoes and then pour onto the pan.
-
these were delicious! definitely making them again!
-
Sweet Potatoes are very good roasted with a bit of seasoning too. | http://www.budgetbytes.com/2009/11/parmesan-roasted-potatoes/ |
We are looking to hire a cabinet maker with installation experience. This position involves a mix of fabrication and installation of custom displays. We utilize various materials, from wood to acrylic, to create one-of-a-kind displays. Experience with graphic substrates and materials is helpful.
The ideal candidate would have a minimum of 3 years experience in the woodworking or display industry. They would also have the ability to take field measurements, read blueprints, and build, laminate and finish projects independently. Experience working with a flatbed CNC router is helpful. We work in a creative atmosphere with designers, graphic technicians, project managers and other craftsmen. A valid driver's license and transportation are required. | https://www.getwoodworkingjobs.com/custom-fabrication-installation-windsor-mill-maryland-206953912.htm
Lithium (from Greek: λίθος, romanized: lithos, lit. 'stone') is a chemical element with the symbol Li and atomic number 3. It is a soft, silvery-white alkali metal. Under standard conditions, it is the least dense metal and the least dense solid element. Like all alkali metals, lithium is highly reactive and flammable, and must be stored in vacuum, inert atmosphere, or inert liquid such as purified kerosene or mineral oil. When cut, it exhibits a metallic luster, but moist air corrodes it quickly to a dull silvery gray, then black tarnish. It never occurs freely in nature, but only in (usually ionic) compounds, such as pegmatitic minerals, which were once the main source of lithium. Due to its solubility as an ion, it is present in ocean water and is commonly obtained from brines. Lithium metal is isolated electrolytically from a mixture of lithium chloride and potassium chloride.
The nucleus of the lithium atom verges on instability, since the two stable lithium isotopes found in nature have among the lowest binding energies per nucleon of all stable nuclides. Because of its relative nuclear instability, lithium is less common in the solar system than 25 of the first 32 chemical elements even though its nuclei are very light: it is an exception to the trend that heavier nuclei are less common. For related reasons, lithium has important uses in nuclear physics. The transmutation of lithium atoms to helium in 1932 was the first fully man-made nuclear reaction, and lithium deuteride serves as a fusion fuel in staged thermonuclear weapons.
Lithium and its compounds have several industrial applications, including heat-resistant glass and ceramics, lithium grease lubricants, flux additives for iron, steel and aluminium production, lithium batteries, and lithium-ion batteries. These uses consume more than three-quarters of lithium production.
Lithium is present in biological systems in trace amounts; its functions are uncertain. Lithium salts have proven to be useful as a mood stabilizer and antidepressant in the treatment of mental illness such as bipolar disorder. | http://shortpedia.net/view_html.php?sq=afghanistan&lang=en&q=Lithium |
There are many other possibilities, but the sealed end is the most common.
The idea that Rall discovered is that if the dendrites were related in a particular fashion, then the whole thing could be collapsed to a single cylindrical cable. This is called the equivalent cylinder. Consider the tree shown in figure 4. Suppose that the branches 0, 1 and 2 have the same membrane resistivities, RM and RA. Assume that the daughter branches, 1 and 2, have the same electrotonic length, that is, their physical length divided by their space constants (which of course depend on their diameters) are all the same. (For example, if both have equal diameters and are the same physical length.) Also, assume that the two have the same end conditions, e.g., sealed. We want to know if it is possible to combine the branches of the dendrite into a single equivalent cylinder. The key is that we must avoid impedance mismatches. Thus, to combine dendrites 1 and 2 with 0, we require the 3/2 rule: d_0^(3/2) = d_1^(3/2) + d_2^(3/2).
In the above figure, we depict a dendritic tree consisting of several branches with their lengths and diameters in microns. (a) Can they be reduced to an equivalent cylinder? (b) What is the electrotonic length? (c) What is the input conductance? Assume sealed ends for all terminal dendrites, and assume given values for the resistivities RM and RA.
Answer.
d_a^(3/2) + d_b^(3/2) + d_c^(3/2) = 1 + 1 + 1 = 3 = 2.08^(3/2) = d_d^(3/2)

d_d^(3/2) + d_e^(3/2) = 3 + 3 = 6 = 3.3^(3/2) = d_f^(3/2)

so the 3/2 rule is obeyed. Clearly a, b and c all have the same electrotonic length. The space constants are:
Thus, the total electrotonic length of abc with d is
which are close enough to be considered equal (2% difference). Thus, we can combine the whole thing into an equivalent cylinder. The total electrotonic length is then: | http://www.math.pitt.edu/~bard/bardware/classes/passive2/node14.html |
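The arithmetic in the worked answer can be checked in a few lines. The diameters come from the example above; the resistivity values RM and RA below are placeholders I chose for illustration (the original values are not reproduced on this page), so only the 3/2-rule checks match the text's numbers.

```python
import math

def d32(*diams):
    """Sum of d^(3/2) over a set of branches (Rall's 3/2 rule)."""
    return sum(d ** 1.5 for d in diams)

# Branch diameters from the worked example (microns)
da = db = dc = 1.0          # terminal branches a, b, c
dd = 2.08                   # their parent, d
de = 2.08                   # branch e (d_e^(3/2) = 3 in the text)
df = 3.3                    # the trunk, f

# 3/2 rule at each branch point: parent^(3/2) equals the sum over daughters
print(d32(da, db, dc), "vs", d32(dd))   # both ≈ 3
print(d32(dd, de), "vs", d32(df))       # both ≈ 6, within the ~2% tolerance

# Space constant lambda = sqrt(RM * d / (4 * RA)), so lambda scales as sqrt(d)
R_M, R_A = 2000.0, 60.0     # ohm·cm^2 and ohm·cm — assumed, not from the text
def space_constant(d_um):
    d_cm = d_um * 1e-4      # microns to cm
    return math.sqrt(R_M * d_cm / (4 * R_A))   # in cm

print(space_constant(df) / space_constant(da))  # ratio = sqrt(3.3/1.0)
```

With real RM and RA values, dividing each physical length by its branch's space constant and summing along the tree gives the total electrotonic length asked for in part (b).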
Rachel Décoste is a writer, educator, social policy expert, and Diversity & Inclusion consultant from Ottawa, Canada. Her op-eds have been published over 170 times in The Huffington Post, the Ottawa Citizen, Le Droit, and many others. She is primarily focused on immigration, integration, racial diversity, and multiculturalism. In 2010, she was named in Ottawa's Top 50 Personalities in Ottawa Life Magazine.
During our wide-ranging conversation, we discuss a variety of topics around her experience as a Canadian-born person of colour who, at every turn, gets asked, "Where are you really from?" We delve into what it takes to build a sense of identity as a Black Canadian and tying this journey of self-discovery within the broader pan-African context.
Like many descendants of enslaved Africans, Rachel had been unable to pinpoint her origins. During the 400-year history that started with a slave voyage over the Middle Passage, enslavers did not regularly keep records of where each African person was taken captive. The few records that did exist were destroyed.
This made it challenging for Afro-descendants like Rachel to trace the areas in Africa that they descended from. Since tests like AncestryDNA® have become widely available, new possibilities exist in helping to ease the process for these searches.
Inspired by her DNA results, Rachel visited five countries in as many months, with each place holding the key to a different part of her lineage. From Senegal to Ivory Coast to Bénin, Rachel made connections with her people and her history, finding delight in the commonalities between the locals of her ancestral home and herself while also paying respect to her ancestors.
Rachel chronicles her journey in an engaging audiobook, titled The Year of Return: A Black Woman’s African Homecoming – where she shares her family history discoveries and vividly describes the epic odyssey she embarked on to Africa. Listeners of this podcast can obtain it at a 50% discount until the end of 2021 by using the code afrotoronto (all lowercase) on her website YearOfReturnBook.com.
This is the story of Rachel's journey to trace back her ancestral lineage in Africa.
This podcast episode is sponsored by Ancestry.ca. A global leader in helping everyday people trace their family history and genetic lineage, Ancestry is making their AncestryDNA test available to Canadians of all backgrounds to explore their roots and make new discoveries through the power of DNA.
Visit Ancestry.ca to obtain the AncestryDNA® at-home test. From a simple saliva sample, the DNA test gives you an online ethnicity estimate that helps you understand through your DNA where your ancestors may have originated from across the world. AncestryDNA can also connect you with living relatives around the world from its DNA network of 18 million people. | https://afrotoronto.com/content/articles/45-commentaries/2130-where-are-you-from-a-black-canadian-s-search-for-identity-through-discovering-her-african-roots |
It’s been a long, cold rainy winter this year in California. But it could get worse — much worse. The USGS warned this past week that California is in the eye of a winter storm weather pattern that could be more disastrous than a major earthquake.
They’re calling it the ARkStorm Scenario — a reference, one might guess, to a storm of biblical proportions. As envisioned by the models, the ARkStorm would produce a month of precipitation on the order of what’s seen every 500 to 1,000 years, causing the flood evacuation of 1.5 million people and property damage worth $725 billion.
Discovery News had a nice summary of the science behind the rainfall pattern, known as an “atmospheric river.”
The last time California experienced such a flood was during the winter of 1861-62, when nearly 40 days of rain flooded out the Sacramento and San Joaquin Valley for 300 miles. Homes and bridges were swept away and one-quarter of the state’s cattle drowned. In the worst places, like Sacramento, water reached 15 feet high, according to this harrowing account in the New York Times.
The Bay Area has the potential to be highly impacted, although not to the degree of inland communities. Still, the USGS predicted “serious flooding” and hurricane-force winds of 125 mph.
California is, no doubt, wholly unprepared to deal with this kind of flooding. And maybe this other “Big One” is just another disaster that’s hard to worry about, let alone wrap our minds around. Still, apocalypse is always worth a moment of imagination and a reminder that the ground we stand on is not so solid after all. | https://wayoutwestnews.com/2011/01/16/a-storm-of-biblical-proportions/ |
In my previous blog I looked at what can distinguish between a grant and a contract, and what indicators can be used when considering what form of income a charity has received.
In this blog, I provide some examples on the differences in accounting treatments.
Different accounting treatments: Examples
Example A – a charity provides after-school sports/social/arts clubs in disadvantaged areas. A donor (let’s assume an institution such as a foundation) likes what the charity does and offers the charity funds to cover the costs of running these clubs. The foundation is not receiving anything in return (other than knowing they have helped a worthy cause) and so this is a grant.
Example B – now let’s suppose that the foundation offers the funds to the charity on the condition that the charity runs a singing club at a specific school. In this case the donor is specifying what the funds should be spent on (known as a ‘performance condition’) and so the charity is only entitled to this income when it runs the singing club in this particular school. Whilst the donor is specifying the use of the funds, they are not receiving anything in return (again, other than knowing they have helped a worthy cause) and so this is still a grant.
Example C – what happens if, instead of a foundation, it is a local authority that asks the charity to run sports clubs at schools in their area? Well, the fact that it is a local authority doesn’t mean that this is not a grant, and it being for specific schools means it is a restricted grant – but not necessarily a contract.
Example D – however, if the local authority was responsible for providing these clubs and asked the charity to run the clubs in return for a fee, then under these circumstances the local authority is contracting the charity to carry out specific services in return for consideration. Therefore, this would be income received by the charity under a contract.
Recognising the income
So how would the income be recognised?
The income recognition criteria are:
1. there is evidence that the charity is entitled to the gift,
2. the receipt is probable, and
3. the amount can be measured reliably.
If, using examples A and B, the foundation wrote to the charity confirming £x would be given towards the costs of running the clubs, and that this would be paid upon the charity’s acknowledgment of the award, then the charity would recognise all of the income at this point.
If the grant is paid in monthly instalments over the next 12 months, or if some of the expenditure falls in the next accounting period of the charity, neither of these is sufficient to prevent the charity recognising the full amount as income, because the 3 recognition criteria above are met. The charity recognises the full amount upfront.
In example B, the funds are given for a specific club in a specific school. So whilst the charity can meet points 2 and 3 above, it does not have entitlement until it runs the specific club at the specific school. The income is therefore deferred until this is the case. When recognised, the grant is treated as a restricted fund since it was given for specific circumstances.
For example C, the circumstances are the same as for example B so the above points also apply.
Example D related to fees received under a contract. In this case, the income is recognised to the extent that the services detailed in the contract have been provided, so an appropriate basis for determining the stage of completion should be used. An appropriate basis here would be comparing the number of clubs run with the total number that will be provided under the entire contract. Say the contract specified 5 clubs a week at 10 schools for a 12-week term: that is 600 sessions (5 clubs x 10 schools x 12 weeks). After the first 4 weeks of the contract, the charity is required to have run 5 clubs at 10 schools over 4 weeks – a total of 200 sessions. If all 200 have been run, the charity is entitled to 200/600 of the total fees due under the contract. Any monies received in excess of this amount are deferred.
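The arithmetic above can be sketched directly. The session counts come from the text; the total fee and cash received are hypothetical figures added purely for illustration:

```python
# Stage-of-completion income recognition for the Example D contract.
total_sessions = 5 * 10 * 12      # 5 clubs x 10 schools x 12 weeks = 600
sessions_run = 5 * 10 * 4         # after the first 4 weeks = 200

total_fee = 60_000                # hypothetical total contract fee (GBP)
cash_received = 30_000            # hypothetical cash received to date (GBP)

# Income recognised follows delivery: 200/600 of the total fee.
recognised = total_fee * sessions_run / total_sessions
# Any monies received in excess of the recognised amount are deferred.
deferred = max(cash_received - recognised, 0)

print(recognised, deferred)  # 20000.0 10000.0
```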
Example E – let’s take this to an extreme and say that the foundation in example A notifies the charity on the last day of the charity’s accounting period (201X); and that the contract in example D is signed on and effective from the last day of the charity’s accounting period (201X). Let us also say that the total amount being given to the charity is £500,000 and this is paid on the last day of 201X. In both cases, the first club is run on the first day of the charity’s next accounting period (201Y).
In this case, there are no conditions attached to the grant award in Example A, and the charity meets the 3 income recognition criteria (it has been informed of the award, has no reason to suspect this will not be received, and the amount is known). Therefore, the charity recognises all £500,000 of income in 201X – even though the funds will be used to run the clubs in the next accounting period (201Y).
Whereas, for the contract in Example D, the charity is not entitled to the income until 201Y when the first club is run, but it has received the cash. Therefore, the £500,000 is treated as deferred income and so income recognised in 201X is £nil.
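Example E’s contrast can be reduced to a tiny sketch. This is a deliberate simplification: real entitlement tests are more nuanced than a boolean flag, and the figures mirror the example in the text.

```python
# Income recognised in 201X for the same £500,000, under the two treatments.
CASH = 500_000

def income_recognised_201x(is_contract, fraction_delivered=0.0):
    """Return the income to recognise in 201X; the remainder is deferred."""
    if is_contract:
        # Contract: entitlement follows performance; nothing is delivered
        # until the first club runs in 201Y.
        return CASH * fraction_delivered
    # Unconditional grant: all three recognition criteria are met on award.
    return CASH

print(income_recognised_201x(is_contract=False))  # 500000 (grant, all in 201X)
print(income_recognised_201x(is_contract=True))   # 0.0 (contract, all deferred)
```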
Why it matters
I have used the extreme circumstances in these examples, but this illustrates how different the accounting treatment can be. For those agreements that fall within the grey area, treating something as a contract when it should be treated as a grant could mean that income is significantly understated in a charity’s accounts. This could be the difference between being subject to audit or not, and may distort figures provided as part of funding applications. Each grant/contract should be reviewed individually to ensure the correct treatment.
Hopefully this provides some guidance when reviewing grant agreements and contracts, but please do get in touch if you require assistance with preparing your charity’s accounts.
A compound noun is formed when two or more words are combined to create a single noun. Generally, the first word tells us what kind of person or thing it is, or what purpose it serves, while the second word identifies who or what it is. For example: water + bottle = water bottle (a bottle used for water).

Compound nouns are of three kinds: open, hyphenated, and closed. As the names imply, open compounds are written as separate words (e.g., ice cream), hyphenated compounds are written with one or more hyphens (e.g., long-term), and closed compounds are written as a single word (e.g., doorknob). Many compounds begin as open, progress to hyphenated, and finish as closed.

There are no apparent limits on how many words a compound noun may contain in English; common sense and good judgment are really your only guides, and as long as what you write is readable and understandable, you are fine. Most typically the number lies somewhere between 2 and 4. One of the words is usually a noun; the other can be a noun, an adjective, or a preposition, and the combined meaning may or may not relate directly back to the root noun.

Some compound nouns are never joined into one word but simply keep a fixed word order: I am going to catch the bus at the bus stop. It is going to be a full moon.

Stress helps to distinguish compounds from ordinary phrases. In the phrase pink ball, both words are equally stressed (adjectives and nouns are always stressed). In the compound noun golf ball, the first word is stressed more, even though both words are nouns.
A useful rule of thumb is that the last word of the compound is the base word: a car battery is the battery of a car, while a battery car would be a car that runs on battery. The compound noun structure is extremely varied in the types of meaning relations it can indicate.

Spelling is somewhat arbitrary and shifts over time. Some compound nouns were always the one-word version (e.g., keyboard); some two-word ones have transitioned to a one-word version (e.g., snow man to snowman); some are transitioning now (e.g., eye opener to eyeopener); and some are not transitioning at all (e.g., peace pipe). To check the spelling of a compound noun, look it up in the dictionary. When a compound noun is a single word, make it plural by adding s to the end.

Compound nouns are also formed with nouns, adjectives, prepositions, and verbs placed before a noun. These are treated as one idea, taking the place of a noun in the sentence, but it is important to pay attention to the order of words within the compound.

Styling follows pronunciation to some extent. When a noun + noun compound is short, established in the English language, and pronounced with equal stress on both nouns, the styling is likely to be open (bean sprouts, fuel cell, fire drill). Many short noun + noun compounds that begin as temporary open ones and have the first word accented tend to become solid (database, football, paycheck, hairbrush).

A few compounds have unclear origins (such as bonfire) and are called amalgamated compounds. Generally, one element of a compound noun is a noun, while the other may be an adverb, verb, adjective, preposition, or gerund; the first word modifies, or adds meaning to, the second, main word.
Compound words (part 1: compound nouns) are words formed by taking two words, which may have the same or different functions, and joining them into a single unit. Put another way, compounding occurs when two or more lexemes combine into a single new word: a noun-noun compound such as note + book → notebook, an adjective-noun compound such as blue + berry → blueberry, or a verb-noun compound such as work + room → workroom. A compound noun is usually [noun + noun] or [adjective + noun], but there are other combinations. Each compound noun acts as a single unit and can be modified by adjectives and other nouns.

Whether two-word items count as compounds at all is debated. On one strict view, a compound word is always a single word (sometimes hyphenated): on that analysis, exam paper and railway station are not compound nouns but syntactic constructions in which the first noun modifies the second. More commonly, though, three spellings are recognised: closed compounds written as one word (sunset, toothpaste, haircut, bedroom), hyphenated compounds (dry-cleaning, daughter-in-law, well-being), and open compounds written as two separate words.
A compound noun is created when two or more words (often an initial noun, or an adjective that modifies a final noun) are joined together or used adjacent to one another so often that the combination is interpreted as a single noun. Compounds may or may not be hyphenated, and they may be written with a space between the words, especially when one of the words has more than one syllable, as in living room; it would be an over-simplification to say that two single-syllable words are always written together as one.

Many compound nouns classify a type: in factory worker, seat belt, taxi driver, and bookseller, the first word answers the question “what kind?”. Others refer to containers, where the second item is designed to contain the first: coffee cup, sugar pot, teapot.

Formation is straightforward addition: rail + road = railroad, basket + ball = basketball, dog + house = doghouse, rain + drop = raindrop, cup + cake = cupcake, paint + brush = paintbrush.
A combined compound noun (also called a solid or closed compound) is one in which multiple words are merged into a single word: sunrise, baseball, wallpaper. A hyphenated compound noun uses hyphens to bring the words together: mother-of-pearl, rent-a-car.

Like English, German also offers the possibility of combining words, especially nouns. The resulting noun chains in English typically feature spaces or hyphens between the different elements, while German ones normally appear as one word; the German penchant for creating complex compound nouns has long been the stuff of legend.

The meaning of the whole compound is often different from the meaning of the two words on their own, and the main noun is normally the last one: teapot, headache. Compound nouns, words for people, animals, places, things, or ideas made up of two or more words, are very common.
Style guides add some mechanical rules. Hyphenate compound nouns when one of the words is abbreviated (e-book, e-commerce; exception: email). Hyphenate compound numerals and fractions (a twenty-fifth anniversary, one-third of the page). Some styles also use an en dash (–) instead of a hyphen in certain compound adjectives.

Creating compound nouns is, after adding suffixes, one of the primary ways a language increases its vocabulary. In Italian, for example, the formation of new compound words is particularly useful to the development of terminologie tecnico-scientifiche (scientific and technical terminology).

A quick odd-one-out exercise: which word does not combine with the base? HEAD: light, pool, quarters, line. CARD: money, birthday, business, credit. TRAFFIC: lights, jam, warden, rush.

To recap: compound nouns are formed by combining two words to make a new word with a different, often completely unrelated, meaning. Some compound nouns are one word (teabag, snowman, football), some are two words (apple tree, water park), and some are hyphenated (self-control, half-brother). Noun + noun is the most common pattern, as in bus stop: Is this the bus stop for Angels Street?

A compound noun can also be seen as a noun phrase made up of two nouns, e.g. bus driver, in which the first noun acts as a sort of classifying adjective for the second, but without really describing it; compare a black bird (any bird that happens to be black) with a blackbird (a particular species). Whatever its spelling, a compound has a single meaning, and a dictionary settles doubtful cases: hairstyle, toothpaste, and notebook are one word, while high school, real estate, and post office are two.
More broadly, a compound is a noun, an adjective, or a verb made of two or more words or parts of words, written as one or more words or joined by a hyphen: travel agent, dark-haired, and bathroom are all compounds, and most compound nouns form their plurals in the usual way. A compound word is made from two other words put together, for example lumber plus yard = lumberyard. English has thousands of compound nouns, but there are also compound adjectives (childlike, postwar, secondhand, lifelike, monthlong, citywide, overanxious) and compound adverbs (henceforth, anyway, overall). These words are formed either by adding a hyphen or by using the two words as a single term, and they can be written in three ways: open (ice cream, living room), hyphenated, or closed.

Take wildlife: wild and life combine to create a new word for undomesticated forms of life, which is the essence of a compound noun, here in the closed form. Compound nouns can also appear in hyphenated form or open form (as two separate words), and some words use more than one form depending on the context. The word compound itself suggests two words that, together, have one meaning; the two words become a kind of fixed phrase or expression that acts like a noun, and often two nouns make up the compound.
Joining two or more small words together to make a new, larger one is how compound words are made, and there are three types. When the words keep spaces between them they are open compound nouns: child care, work day, time saver. When they are joined with no space they are closed compound words: skateboard, football, airport. The third type is hyphenated, as in well-being.

Is a compound word the same as a portmanteau? No. Sometimes called a blend, a portmanteau is a new word formed from part of one word and part of another: brunch = breakfast + lunch, sitcom = situation + comedy, smog = smoke + fog. (According to the OED, Lewis Carroll was the first to use the term portmanteau in this sense.) A compound, by contrast, keeps both words whole.

Putting it all together: a compound noun is two or more words combined to make a single noun that names a person, place, thing, or idea. It can be one single word, two words, or words connected by hyphens: My mother-in-law took me to the swimming pool after a dessert of strawberry shortcakes (one word: shortcake; two words: swimming pool; hyphenated: mother-in-law).

Plurals in compounds follow a few rules. (1) As a rule, the second component takes the plural form: housewives, tooth-brushes, boy-scouts, maid-servants. (2) Compounds in -ful take the plural ending at the end of the word: handfuls, spoonfuls, mouthfuls (though spoonsful and mouthsful are also possible). (3) In compounds whose first component is man or woman, both components take the plural form (e.g., menservants, women doctors).

On hyphenating compound modifiers, one common rule advises hyphenating two or more words acting as a single idea when they come before a noun (late-arriving train, ne’er-do-well teenager, one-of-a-kind invention); exceptions are compound modifiers that include adverbs such as much and very, as well as any -ly adverb (much maligned administrator, very good cake). As the Texas Law Review Manual of Style puts it: when two or more words are combined to form a modifier immediately preceding a noun, join the words by hyphens if doing so will significantly aid the reader in recognizing the compound adjective.

Finally, compounding is just as productive beyond English: in Bangla, for example, it is one of the most fertile means of word formation, and new compound words are quite often generated.
Tricks, mistakes, and surprises await data scientists at every stage of the entire process. In this article, we will focus on the challenges that may be encountered during the Model Training phase. If you are curious about the challenges at other stages of a Machine Learning project, you can find them on our blog.
We have already defined the Machine Learning problem, a suitable dataset has already been selected, it has been cleaned up and prepared for further work, and the ML model’s design is ready. All that remains is to train and tune the model before implementing this in a production environment.
Well, let’s move on to an overview of the most common and frequent nuances that get in the way of making a good model.
Poor Documentation and Record-Keeping
Throughout the project, the involved employees should maintain documentation of their work. This should include the sequence of actions, the train of thought, the results obtained, and the tools used. The structure and chronology of records must be respected.
This general recommendation applies to most steps, but it is especially important at the stage of training the ML model. Here you should keep a record of all experiments and note significant details.
Lack of such documentation leads to repeated work just to recover the details. This is especially true in teamwork: with good records, colleagues do not need to retell their progress every time. Employees can refer to an always up-to-date source of information and avoid running experiments twice. Sloppy or overly brief documentation can also confuse new team members, who will then take a long time to build a complete understanding of the work done and the current situation.
In addition to successful results and improvements, keep records of failed experiments. Omitting these details can send new colleagues down a path already traveled, and time will again be wasted. Experiments without progress are also part of improving skills, and recording them makes it possible to analyze the work qualitatively at the end of the project.
Model versioning records should be clear and concise. Believe me, the names temp_01, temp_02, and temp_02+ are not a good idea.
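One lightweight convention, purely illustrative, is to encode the date, model, dataset, and the change under test in every run name, so that the record explains itself:

```python
def run_name(run_date, model, dataset, change, version):
    """Build a self-describing experiment name instead of temp_01-style labels."""
    return f"{run_date}_{model}_{dataset}_{change}_v{version}"

# At a glance: when it ran, which model, which data, and what was changed.
print(run_name("2021-03-15", "resnet18", "cifar10", "lr-decay", 3))
# 2021-03-15_resnet18_cifar10_lr-decay_v3
```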
Inaccuracies in Classification
Particular attention should be paid to the results of training the Machine Learning model if it is a classification problem. You might think you made a great model, but it might not be so. Most often, such difficulties arise with unbalanced classes. Let’s take a closer look at this.
Imagine you are dealing with binary classification and the dataset consists of 95% of one class. A classifier that marks all observations as the first class produces no correct predictions for the second class, yet its accuracy equals 95% and still looks “good enough”. In reality the model performs poorly. Accuracy is not the best metric for classification, much less when classes are unbalanced. Such cases should be monitored closely.
It is better to evaluate a trained Machine Learning model using other metrics. For example, precision and recall are good options. Additionally, F-score is a metric calculated using precision and recall. These methods of measuring model accuracy take into account the predictions within each class. For clarity and analysis of the binary classification results, use the confusion matrix. Another tool for assessing the quality of a classification problem with two classes is the ROC AUC. Examining the area under the curve will also provide insight into how well a trained model is performing.
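The accuracy trap is easy to reproduce. The sketch below computes the metrics by hand on the 95/5 example; scikit-learn’s metric functions would give the same numbers:

```python
# Majority-class baseline on a 95/5 imbalanced binary problem.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100  # the "model" predicts the majority class for everything

# Confusion-matrix cells, counted directly.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))

accuracy = (tp + tn) / len(y_true)                 # 0.95 -- looks impressive
precision = tp / (tp + fp) if tp + fp else 0.0     # 0.0
recall = tp / (tp + fn) if tp + fn else 0.0        # 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(accuracy, precision, recall, f1)  # 0.95 0.0 0.0 0.0
```

High accuracy with zero precision and recall on the minority class is exactly the failure mode described above.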
Let’s say you’ve already figured out that the model provides poor results due to a large class imbalance. Then how do you train the model to take into account the specifics of the dataset? In this case, individual classes can be prioritized. Then the model will pay more attention to samples with the problem class. This step is not always a panacea, but it often helps to improve the classification.
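One common way to prioritize classes is to weight each one inversely to its frequency; this is the formula behind scikit-learn’s class_weight='balanced' option, sketched here in plain Python:

```python
from collections import Counter

def balanced_class_weights(y):
    """weight_c = n_samples / (n_classes * n_samples_in_class_c)."""
    counts = Counter(y)
    n_samples, n_classes = len(y), len(counts)
    return {c: n_samples / (n_classes * cnt) for c, cnt in counts.items()}

y = [0] * 95 + [1] * 5
weights = balanced_class_weights(y)
print(weights)  # the rare class 1 gets weight 10.0, class 0 about 0.53
```

These weights scale each sample’s contribution to the loss, so mistakes on the rare class cost the model far more during training.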
But bad results don’t always happen due to model training. Sometimes this is a signal to return to the stage of data preparation or model selection.
Selection of Hyperparameters
Even when a data scientist has the perfect dataset and is confident in the choice of algorithm, a great result is still not guaranteed. The most efficient algorithm will perform poorly if its hyperparameters are badly chosen, because hyperparameters largely determine model performance. The ideal set for tuning the algorithm is found experimentally, often through manual trial and error — but that approach is neither reliable nor sustainable.
There are no universal hyperparameters that fit all datasets. However, there are automatic tools that simplify model tuning: they try different hyperparameter combinations and measure the performance of each set, so the best result corresponds to the most appropriate hyperparameters. One such tool is GridSearchCV from the scikit-learn Python library; similar cloud-based services are available from Azure, Google, and AWS. Minimizing manual hyperparameter tuning maximizes productivity.
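Under the hood, a grid search is just an exhaustive loop over the Cartesian product of the candidate values. A toy sketch follows — the `score` function here is an invented stand-in for “train the model and measure validation accuracy”:

```python
from itertools import product

# Hypothetical search space: every combination is tried,
# exactly as a grid search tool would do.
grid = {"lr": [0.1, 0.01], "depth": [3, 5, 7]}

def score(lr, depth):
    # Stand-in for training + validation; peaks at lr=0.01, depth=5.
    return 1.0 - abs(lr - 0.01) - abs(depth - 5) * 0.01

best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda params: score(**params),
)
print(best)  # {'lr': 0.01, 'depth': 5}
```

Real tools add cross-validation and parallelism on top of this loop, but the principle is the same.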
If manual tuning does take place, it is important to avoid another mistake: changing several hyperparameters simultaneously. If you vary many hyperparameters at once, you can miss important shifts in the model’s behavior — sometimes changing a parameter by as little as one hundredth produces a meaningful change in performance. Therefore, adjust hyperparameters one at a time and monitor the results after each change.
Reusing the Same Test Set
Traditionally, test datasets are used to validate a trained model. Sometimes accuracy is good enough on both the training and the test set, but as soon as the model faces new data, performance drops. There may be several reasons for this; let’s discuss the one related to model training.
When the same test set is reused over and over to tune hyperparameters and other parameters of the algorithm, the results become tailored to that specific data — the model effectively overfits the test set. It is important to vary the datasets to see the real picture of the model’s performance. If it is possible to grow the training and test sets over time, do so: the algorithm will generalize better and be more flexible.
When data is limited, other methods help. Cross-validation, for example, evaluates the model on several different test folds drawn from the same dataset. It is also good practice to shuffle the data before each new train/test split, which gives you many different test sets for control.
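A minimal k-fold splitter shows the idea: shuffle once, cut the data into k folds, and let each fold serve as the test set exactly once (an index-based sketch, no ML library assumed):

```python
import random

def kfold_indices(n, k, seed=0):
    """Yield (train, test) index lists for k shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# Every sample lands in a test set exactly once across the 5 folds.
tested = [j for _, test in kfold_indices(20, 5) for j in test]
print(sorted(tested) == list(range(20)))  # True
```

Changing the `seed` before a new experiment reshuffles the folds, which is exactly the “shuffle before each split” advice above.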
Insufficient Infrastructure Capacity
A big challenge in training a machine learning model can be the existing infrastructure. Assess the capacity of your system in advance, before starting experiments.
There are two challenges here. The first is the size and complexity of the dataset: if it is very large and the infrastructure is weak, training can take hours or even days. Unforeseen lengthy training can shift the project timeline. The second is a suboptimally chosen algorithm: some models process large datasets much more slowly than others. It is therefore important to consider training speed already at the algorithm-selection step, so that time does not become an issue at the training stage.
It is a good habit to keep track of the model training time. As a result, you will have a processing time as an additional parameter for evaluating the algorithm in addition to its metrics.
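One lightweight way to build this habit is a timing decorator that records the wall-clock duration of every training run — the `train_model` body below is an invented stand-in for a real training loop:

```python
import time
from functools import wraps

def timed(fn):
    """Record the wall-clock duration of each call next to its result."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_seconds = time.perf_counter() - start
        return result
    return wrapper

@timed
def train_model(n_steps):
    # Stand-in for a real training loop.
    return sum(i * i for i in range(n_steps))

train_model(100_000)
print(train_model.last_seconds >= 0.0)  # True
```

Logging `last_seconds` alongside the model’s metrics gives you processing time as the extra evaluation parameter described above.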
Still, if the model is already chosen and the dataset is very large, how do you avoid extended training times? One option is to train on only part of the dataset. It is important that this sub-dataset be representative: it should be sampled so that its class and feature proportions match those of the full dataset.
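A representative sub-dataset can be obtained by stratified sampling: pick the same fraction from every class, so the subsample keeps the class proportions of the full dataset. A plain-Python sketch (the labels are synthetic):

```python
import random
from collections import defaultdict

def stratified_sample(y, fraction, seed=0):
    """Pick a fraction of indices per class, keeping class
    proportions close to the full dataset's."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, label in enumerate(y):
        by_class[label].append(i)
    sample = []
    for label, idx in by_class.items():
        k = max(1, round(len(idx) * fraction))
        sample.extend(rng.sample(idx, k))
    return sorted(sample)

y = [0] * 900 + [1] * 100
idx = stratified_sample(y, 0.1)
# 10% of each class: 90 zeros and 10 ones.
print(sum(y[i] == 0 for i in idx), sum(y[i] == 1 for i in idx))  # 90 10
```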
Conclusion
We looked at a few challenges that data scientists often come across in the training phase of a machine learning model. These are not the only problems you may encounter, but they are the most common ones. If you follow the rules above, your machine learning project will become more effective and less resource-intensive. Feel free to experiment during the training phase, but consider your capabilities and resources. Keep taking notes throughout the project, even about failed experiments, and always consider the specifics of your dataset. Then, step by step, you will build a good project.
The next big step is the Operationalization Phase. You can follow the link for interesting details of this stage.
---
abstract: 'Let $G=(V,E)$ be a graph with the vertex-set $V$ and the edge set $E$. Let $N(v)$ denote the set of neighbors of the vertex $v$ of $G.$ The graph $G$ is called a $vd$-$graph$ if for every pair of distinct vertices $v,w \in V$ we have $N(v)\neq N(w).$ In this paper, we present a method for finding automorphism groups of connected bipartite $vd$-graphs. Then, by our method, we determine automorphism groups of some classes of connected bipartite $vd$-graphs, including a class of graphs which are derived from Grassmann graphs. In particular, we show that if $G$ is a connected non-bipartite $vd$-graph such that for a fixed positive integer $a_0$ we have $c(v,w)=|N(v)\cap N(w)|=a_0$ when $v,w$ are adjacent, whereas $c(v,w) \neq a_0$ when $v,w$ are not adjacent, then the automorphism group of the bipartite double of $G$ is isomorphic to the group $Aut(G) \times \mathbb{Z}_2$. A graph $G$ is called a $stable$ graph if $Aut(B(G)) \cong Aut(G) \times \mathbb{Z}_2$, where $B(G)$ is the bipartite double of $G$. Finally, we show that the Johnson graph $J(n,k)$ is a stable graph.'
author:
- |
S.Morteza Mirafzal\
Department of Mathematics\
Lorestan University, Khorramabad, Iran\
\
E-mail:[email protected]\
E-mail: [email protected]
date:
-
-
title: 'On the automorphism groups of connected bipartite $vd$-graphs '
---
[^1]
[^2]
[^3]
Introduction
============
In this paper, a graph $G=(V,E)$ is considered as an undirected simple finite graph, where $V=V(G)$ is the vertex-set and $E=E(G)$ is the edge-set. For the terminology and notation not defined here, we follow $[1,2,4,7]$.
Let $G=(U \cup W,E), $ $U \cap W= \emptyset$ be a bipartite graph. It is quite possible that we wish to construct, on the vertices in $U$, some other graphs which are related to $G$ in some aspects. For instance, there are cases in which we can construct a graph $G_1=(U,E_1)$ such that we have $Aut(G) \cong Aut(G_1)$, where $Aut(X)$ is the automorphism group of the graph $X$. For example, consider the following cases. (i) Let $ n \geq 3 $ be an integer and $ [n] = \{1,2,..., n \}$. Let $ k$ be an integer such that $1\leq k <\frac{n}{2}$. The graph $B(n,k)$ which has been introduced in \[15\] is a graph with the vertex-set $V=\{v \ | \ v \subset [n] , | v | \in \{ k,k+1 \} \} $ and the edge-set $ E= \{ \{ v , w \} \ | \ v , w \in V , v \subset w $ or $ w \subset v \} $. It is clear that the graph $B(n,k)$ is a bipartite graph with the vertex-set $V=V_1 \cup V_2$, where $V_1=\{ v \subset [n] | \ |v| =k \}$ and $V_2=\{ v \subset [n] | \ |v| =k+1 \}$. This graph has some interesting properties which have been investigated recently \[11,15,16,19\]. Let $G=B(n,k)$ and let $G_1=(V_1,E_1)$ be the Johnson graph $J(n,k)$ which can be constructed on the vertex-set $V_1$. It has been proved that if $n\neq 2k+1$, then $Aut(G) \cong Aut(G_1)$, and if $n=2k+1$, then $ Aut(G) \cong Aut(G_1) \times \mathbb{Z}_2$ \[15\].
\(ii) Let $n$ and $k$ be integers with $n>2k, k\geq1$. Let $[n] = \{1, 2, ... , n\} $ and $V$ be the set of all $k$-subsets and $(n-k)$-subsets of $[n]$. The $bipartite\ Kneser\ graph$ $H(n, k)$ has $V$ as its vertex-set, and two vertices $v, w$ are adjacent if and only if $v \subset w$ or $w\subset v$. It is clear that $H(n, k)$ is a bipartite graph. In fact, if $V_1=\{ v \subset [n] | \ |v| =k \}$ and $V_2=\{ v \subset [n] | \ |v| =n-k \}$, then $\{ V_1, V_2\}$ is a partition of $V(H(n ,k))$ and every edge of $H(n, k)$ has a vertex in $V_1$ and a vertex in $V_2$ and $| V_1 |=| V_2 |$. Let $G=H(n,k)$ and let $G_1=(V_1,E_1)$ be the Johnson graph $J(n,k)$ which can be constructed on the vertex-set $V_1$. It has been proved that $ Aut(G) \cong Aut(G_1) \times \mathbb{Z}_2$ \[17\]. (iii) Let $n, k$ and $l$ be integers with $0 < k < l < n $. The $set$-$inclusion$ $graph$ $G(n, k, l)$ is the graph whose vertex-set consists of all $k$-subsets and $l$-subsets of $[n] = \{1, 2, ... , n\} $, where two distinct vertices are adjacent if one of them is contained in another. It is clear that the graph $G(n, k, l)$ is a bipartite graph with the vertex-set $V=V_1 \cup V_2$, where $V_1=\{ v \subset [n] | \ |v| =k \}$ and $V_2=\{ v \subset [n] | \ |v| =l \}$. It is easy to show that $G(n, k, l) \cong G(n,n-k,n-l)$, hence we assume that $k+l \leq n$. It is clear that if $l=k+1$, then $G(n, k, l)=B(n,k)$, where $B(n,k)$ is the graph which is defined in (i). Also, if $l=n-k$ then $G(n, k, l)=H(n,k)$, where $H(n,k)$ is the graph which is defined in (ii). Let $G=G(n, k, l)$ and let $G_1=(V_1,E_1)$ be the Johnson graph $J(n,k)$ which can be constructed on the vertex-set $V_1$. It has been proved that if $n\neq k+l$, then $Aut(G) \cong Aut(G_1)$, and if $n=k+l, $ then $ Aut(G) \cong Aut(G_1) \times \mathbb{Z}_2$ \[9\]. In this paper we generalize these results to some other classes of bipartite graphs.
In fact, we state some accessible conditions such that if for a bipartite graph $G=(V,E)=(U\cup W,E)$ these conditions hold, then we can determine the automorphism group of the graph $G$. Also, we determine the automorphism group of a class of graphs which are derived from Grassmann graphs. In particular, we determine automorphism groups of bipartite doubles of some classes of graphs. In fact, we show that if $G$ is a non-bipartite connected $vd$-graph, and $a_0$ is a positive integer such that $c(v,w)=|N(v)\cap N(w)|=a_0$ when $v,w$ are adjacent, whereas $c(v,w) \neq a_0 $ when $v,w$ are not adjacent, then the automorphism group of the bipartite double of $G$ is isomorphic to $Aut(G) \times \mathbb{Z}_2$. Finally, we show that if $G=J(n,k)$ is a Johnson graph, then $Aut(G \times K_2)$ is isomorphic to the group $Aut(G) \times \mathbb{Z}_2$. In other words, we show that Johnson graphs are stable graphs.
Preliminaries
=============
The graphs $G_1 = (V_1,E_1)$ and $G_2 =
(V_2,E_2)$ are called $isomorphic$, if there is a bijection $\alpha
: V_1 \longrightarrow V_2 $ such that $\{a,b\} \in E_1$ if and only if $\{\alpha(a),\alpha(b)\} \in E_2$ for all $a,b \in V_1$. In such a case the bijection $\alpha$ is called an $isomorphism$. An $automorphism$ of a graph $G $ is an isomorphism of $G
$ with itself. The set of automorphisms of $G$ with the operation of composition of functions is a group, called the automorphism group of $G$ and denoted by $ Aut(G)$.
The group of all permutations of a set $V$ is denoted by $Sym(V)$ or just $Sym(n)$ when $|V| =n $. A $permutation$ $group$ $\Gamma$ on $V$ is a subgroup of $Sym(V).$ In this case we say that $\Gamma$ $acts$ on $V$. If $\Gamma$ acts on $V$, we say that $\Gamma$ is $transitive$ on $V$ (or $\Gamma$ acts $transitively$ on $V$), when there is just one orbit. This means that given any two elements $u$ and $v$ of $V$, there is an element $ \beta $ of $\Gamma$ such that $\beta (u)= v
$. If $X$ is a graph with vertex-set $V$, then we can view each automorphism of $X$ as a permutation on $V$, and so $Aut(X) = \Gamma$ is a permutation group on $V$.
A graph $G$ is called $vertex$-$transitive$, if $Aut(G)$ acts transitively on $V(G)$. We say that $G$ is $edge$-$transitive$ if the group $Aut(G)$ acts transitively on the edge set $E$, namely, for any $\{x, y\} , \{v, w\} \in E(G)$, there is some $\pi$ in $Aut(G)$, such that $\pi(\{x, y\}) = \{v, w\}$. We say that $G$ is $symmetric$ (or $arc$-$transitive$) if, for all vertices $u, v, x, y$ of $G$ such that $u$ and $v$ are adjacent, and also, $x$ and $y$ are adjacent, there is an automorphism $\pi$ in $Aut(G)$ such that $\pi(u)=x$ and $\pi(v)=y$. We say that $G$ is $distance$-$transitive$ if for all vertices $u, v, x, y$ of $G$ such that $d(u, v)=d(x, y)$, where $d(u, v)$ denotes the distance between the vertices $u$ and $v$ in $G$, there is an automorphism $\pi$ in $Aut(G)$ such that $\pi(u)=x$ and $\pi(v)=y.$\
Let $n,k \in \mathbb{ N}$ with $ k < n, $ and let $[n]=\{1,...,n\}$. The $Johnson\ graph$ $J(n,k)$ is defined as the graph whose vertex set is $V=\{v\mid v\subseteq [n], |v|=k\}$ and two vertices $v$,$w $ are adjacent if and only if $|v\cap w|=k-1$. The Johnson graph $J(n,k)$ is a distance-transitive graph \[2\]. It is easy to show that the set $H= \{ f_\theta \ | \ \theta \in Sym([n]) \} $, $f_\theta (\{x_1, ..., x_k \}) = \{ \theta (x_1), ..., \theta (x_k) \} $, is a subgroup of $ Aut( J(n,k) ). $ It has been shown that $Aut(J(n,k)) \cong Sym([n])$, if $ n\neq 2k, $ and $Aut(J(n,k)) \cong Sym([n]) \times \mathbb{Z}_2$, if $ n=2k$, where $\mathbb{Z}_2$ is the cyclic group of order 2 \[10,18\].
The group $\Gamma$ is called a semidirect product of $ N $ by $Q$, denoted by $ \Gamma=N \rtimes Q $, if $\Gamma$ contains subgroups $ N $ and $ Q $ such that, (i) $N \unlhd \Gamma $ ($N$ is a normal subgroup of $\Gamma$); (ii) $ NQ = \Gamma $; (iii) $N \cap Q =1 $. Although in most situations it is difficult to determine the automorphism group of a graph $G$, there are various papers on this subject in the literature; some of the recent works include \[5,6,10,14,15,17,18,22\].
Main Results
============
The proof of the following lemma is easy, but its result is necessary for proving the results of this work.
Let $G= (U \cup W,E)$, $U \cap W=\emptyset $ be a connected bipartite graph. Let $ f$ be an automorphism of $G $. If for a fixed vertex $u_0 \in U $, we have $ f(u_0) \in U$, then $f(U) =U$ and $f(W) =W$. Also, if for a fixed vertex $w_0 \in W $, we have $ f(w_0) \in W$, then $f(W) = W$ and $f(U) =U$.
It is sufficient to show that if $ u \in U $ then $f(u) \in U$. We know that if $ u \in U$, then $d_G (u_0, u) = d(u_0, u)$, the distance between $u_0$ and $u$ in the graph $ G$, is an even integer. Assume $ d(u_0, u) =2l$, $ 0\leq 2l \leq D$, where $ D$ is the diameter of $G $. We prove by induction on $ l$, that $ f(u) \in U$. If $ l=0$, then $ d(u_0, u) =0$, thus $u=u_0$, and hence $f(u)=f(u_0) \in U$. Suppose that if $ u \in U $ and $ d(u_0, u)= 2(l-1)$, then $ f(u) \in U$. Assume $ u \in U$ and $d(u, u_0)=2l $. Then, there is a vertex $ u_1 \in U $ such that $d(u_0, u_1)=2l-2=2(l-1)$ and $ d(u, u_1)=2$. We know (by the induction assumption) that $ f(u_1) \in U$, and since $ d(f(u),f(u_1))=2$ is even, we conclude that $f(u) \in U $. Now, it follows that $ f(U)=U$ and consequently $ f(W)=W$.
Let $G= (U \cup W,E)$, $U \cap W=\emptyset $ be a connected bipartite graph. If $f$ is an automorphism of the graph $G$, then $ f(U)=U$ and $f(W) =W$, or $ f(U) = W $ and $f(W) = U$.
Let $G=(V,E)$ be a graph with the vertex-set $V$ and the edge-set $E$. Let $N(v)$ denote the set of neighbors of the vertex $v$ of $G.$ We say that $G$ is a $vd$-graph, if for every pair of distinct vertices $x,y \in V$ we have $N(x)\neq N(y)$ ($vd$ is an abbreviation for vertex-determining).
From Definition 3.3. it follows that the cycle $C_n$ is a $vd$-graph, but the complete bipartite graph $K_{m,n}$ is not a $vd$-graph, when $m \neq 1$.
Let $G= (U \cup W,E)$, $U \cap W=\emptyset $ be a bipartite $vd$-graph. If $f$ is an automorphism of $ G$ such that $f(u)=u$ for every $u\in U$, then $f$ is the identity automorphism of $ G$.
Let $ w\in W$ be an arbitrary vertex. Since $f$ is an automorphism of the graph $G$, then for the set $N(w)= \{ u | u\in U, u \leftrightarrow w \}$, we have $f(N(w))= \{ f(u) | u\in U, u \leftrightarrow w \}=N(f(w))$. On the other hand, since for every $u\in U$, $f(u)=u$, then we have $f(N(w))=N(w) $, and therefore $N(f(w))=N(w) $. Now since $G$ is a $vd$-graph we must have $f(w)=w$. Therefore, for every vertex $x$ in $V(G)$ we have $f(x)=x$ and thus $f$ is the identity automorphism of the graph $G$.
Let $G= (U \cup W,E)$, $U \cap W=\emptyset $ be a bipartite graph. We can construct various graphs on the set $U$. We show that some of these graphs can help us in finding the automorphism group of the graph $G$.
Let $G= (U \cup W,E)$, $U \cap W=\emptyset $ be a bipartite graph. Let $G_1=(U,E_1)$ be a graph with the vertex-set $U $ such that the following conditions hold;\
(i) every automorphism of the graph $G_1$ can be uniquely extended to an automorphism of the graph $G$. In other words, if $f$ is an automorphism of the graph $G_1$, then there is a unique automorphism $e_f$ in the automorphism group of $G$ such that ${(e_f)|}_U=f$, where ${(e_f)|}_U$ is the restriction of the automorphism $ e_f $ to the set $U$.\
(ii) If $f \in Aut(G)$ is such that $f(U)=U$, then the restriction of $f$ to $U$ is an automorphism of the graph $G_1$. In other words, if $f \in Aut(G)$ is such that $f(U)=U$ then $f|_U \in Aut(G_1).$\
When such a graph $G_1$ exists, then we say that the graph $G_1$ is an $attached$ graph to the graph $G$.
Let $G= (U \cup W,E)$, $U \cap W=\emptyset $ be a bipartite $vd$-graph, and $G_1=(U,E_1)$ be a graph. If $f \in Aut(G_1)$ can be extended to an automorphism $g$ of the graph $G$, then $g$ is unique. In fact if $g$ and $h$ are extensions of the automorphism $f \in Aut(G_1)$ to automorphisms of $G$, then $i=gh^{-1}$ is an automorphism of the graph $G$ such that the restriction of $i$ to the set $U$ is the identity automorphism. Hence by Lemma 3.4. the automorphism $i$ is the identity automorphism of the graph $G$, and therefore $g=h$. Hence, according to Definition 3.5. the graph $G_1$ is an attached graph to the graph $G$, if and only if every automorphism of $G_1$ can be extended to an automorphism of $G$ and every automorphism of $G$ which fixes $U$ set-wise is an automorphism of $G_1$.
Let $G=H(n,k)=(V_1 \cup V_2, E)$ be the bipartite Kneser graph which is defined in (ii) of the introduction of the present paper. Let $G_1=(V_1,E_1)$ be the Johnson graph which can be constructed on the vertex $V_1$. It can be shown that the graph $G_1$ is an attached graph to the graph $G$ \[17\].
In the next theorem, we show that if $G=(U \cup W,E)$, $U \cap W= \emptyset$ is a connected bipartite $vd$-graph with $G_1=(U,E_1)$ as an attached graph to $G$, then we can determine the automorphism group of the graph $G$, provided the automorphism group of the graph $G_1 $ has been determined.
Let $G=(U \cup W,E)$, $U \cap W= \emptyset$, be a connected bipartite $vd$-graph such that $G_1=(U,E_1)$ is an attached graph to $G$. If $f\in Aut(G_1)$ then we let $e_f$ be its unique extension to $Aut(G).$ It is easy to see that $E_{G_1}=\{e_f | f \in Aut(G_1) \}$, with the operation of composition, is a group. Moreover, it is easy to see that $E_{G_1}$ and $Aut(G_1)$ are isomorphic (as abstract groups).\
For the bipartite graph $G=(U \cup W,E)$ we let $S(U)= \{ f \in Aut(G) \ | \ f(U)=U \}={Aut(G)}_U$, the stabilizer subgroup of the set $U$ in the group $Aut(G)$. The next theorem shows that when $G_1=(U,E_1)$ is an attached graph to $G$, then $S(U)$ is a familiar group.
Let $G=(U \cup W,E)$, $U \cap W= \emptyset$ be a connected bipartite $vd$-graph such that $G_1=(U,E_1)$ is an attached graph to $G$. Then $S(U) \cong Aut(G_1)$, where $S(U)= \{ f \in Aut(G) | f(U)=U \}.$
If $f$ is an automorphism of the graph $G_1$, then by the definition of the graph $G_1$ we deduce that $ e_f $ is an automorphism of the graph $G$ such that $ e_f(U)=U $. Hence, we have $E_{G_1} \leq S(U), $ where $E_{G_1}$ is the group which is defined preceding this theorem.\
On the other hand, if $g \in S(U)$, then $g(U)=U$, thus by the definition of the graph $G_1$, the restriction of $g$ to $U$ is an automorphism of the graph $G_1$. In other words, $h=g|_{U} \in Aut(G_1)$. Therefore by definition 3.5. there is an automorphism $e_h$ of the graph $G$ such that $e_h(u)=g(u)$ for every $u \in U.$ Now by Remark 3.6 we deduce that $g=e_h \in E_{G_1}$. Hence we have $ S(U) \leq E_{G_1}.$ We now deduce that $S(U)=E_{G_1}.$ Now, since $E_{G_1} \cong Aut(G_1)$, we conclude that $S(U) \cong Aut(G_1).$
Let $G=(U \cup W,E)$, $U \cap W= \emptyset$ be a connected bipartite graph. It is quite possible that $G$ is such that $f(U)=U$ for every automorphism $f$ of the graph $G.$ For example, if $|U| \neq |W|$, or $U$ contains a vertex of degree $d$ but $W$ does not, then we have $f(U)=U $ for every automorphism $f$ of the graph $G.$ In such a case we have $Aut(G)=S(U)$, and hence by Proposition 3.8. we have the following theorem.
Let $G=(U \cup W,E)$, $U \cap W= \emptyset$ be a connected bipartite $vd$-graph such that $G_1=(U,E_1)$ is an attached graph to $G$. If $Aut(G)=S(U)$, then $ Aut(G) \cong Aut(G_1)$.
Let $G=(U \cup W,E)$, $U \cap W= \emptyset$ be a connected bipartite $vd$-graph. Concerning the automorphism group of $G$, we can say more even if $|U|=|W|.$ When $|U|=|W|$ then there is a bijection $\theta : U \rightarrow W.$ Then ${\theta}^{-1}\cup \theta= t$ is a permutation on the vertex-set of the graph $G$ such that $t(U)=W$ and $t(W)=U$. In the following theorem, we show that if the graph $G$ has an attached graph $G_1=(U,E_1)$, and if such a permutation $t$ is an automorphism of the graph $G$, then the automorphism group of the graph $G$ is a familiar group.
Let $G=(U \cup W,E)$, $U \cap W= \emptyset$ be a connected bipartite $vd$-graph such that $G_1=(U,E_1)$ is an attached graph to $G$ and $|U|=|W|$. Suppose that there is an automorphism $t$ of the graph $G$ such that $t(U)=W$. Then $Aut(G)=Aut(G_1)\rtimes H$ where $H=<t>$ is the subgroup generated by $t$ in the group $Aut(G)$.
Let $S(U)= \{ f \in Aut(G) | f(U)=U \}.$ It is clear that $S(U)$ is a subgroup of $Aut(G).$ Let $g \in Aut(G)$ be such that $g(U) \neq U$; then by Lemma 3.1. we have $g(U)=W$, and hence $tg(U)=t(W)=U$. Therefore, $tg \in S(U)$, and hence there is an element $h \in S(U)$ such that $tg=h$. Thus, $g=t^{-1}h \in <t, S(U)>$, where $<t, S(U)>=K$ is the subgroup of $Aut(G)$ which is generated by $t$ and $S(U)$. It follows that $Aut(G) \leq K$, and since $K \leq Aut(G)$, we deduce that $K=Aut(G)$. If $f$ is an arbitrary element in the subgroup $S(U)$ of $K$, then we have $(t^{-1}ft)(U)=(t^{-1}f)(W)= t^{-1}(f(W))= (t^{-1})(W)=U$, hence $t^{-1}ft \in S(U).$ We now deduce that $S(U)$ is a normal subgroup of the group $K$. Therefore $K=<t, S(U)>=S(U)\rtimes<t>=S(U) \rtimes H, $ where $H=<t>$. We have seen in Proposition 3.8. that $S(U) \cong Aut(G_1)$, hence we conclude that $K=Aut(G)\cong Aut(G_1) \rtimes H$.
In the sequel, we will see how Theorem 3.9. and Theorem 3.10. can help us in determining the automorphism groups of some classes of bipartite graphs.
$${\bf Some\ Applications}$$ Let $G=(U \cup W)=G(n,k,l)$ be the bipartite graph which is defined in (iii) of the introduction of the present paper. Then $U=\{ v \subset [n] | \ |v| =k \}$ and $W=\{ v \subset [n] | \ |v| =l \}$. It is easy to show that $G$ is a connected $vd$-graph. Let $G_1=(U,E_1)$ be the Johnson graph which can be constructed on the set $U$. By a proof exactly similar to those appearing in \[15,17\] and later \[9\], it can be shown that $G_1$ is an attached graph to $G$. We know that $Aut(G_1)=H=\{f_{\theta} \ | \ \theta \in Sym([n])\}$, where $ f_{\theta}(v)=\{ \theta(x)| x\in v \} $, for every $v\in U$. Note that $k<l$ and $k+l\leq n$ imply that $k<\frac{n}{2}$. When $k+l=n$, then the mapping $t: V(G) \rightarrow V(G)$, defined by the rule $t(v)=v^c$, where $v^c$ is the complement of the set $v$ in the set $[n]= \{1,2,3,...,n \}$, is an automorphism of $G$. It is clear that $t(U)=W $ and $t(W)=U$. Moreover, $t$ is of order 2, hence $<t>\cong \mathbb{Z}_2$. It is easy to show that if $f \in H$ then $ft=tf$ \[15,18\]. We now, by Theorem 3.9. and Theorem 3.10., obtain the following theorem which has been given in \[9\].
Let $n,k$ and $l$ be integers with $1 \leq k < l \leq n-1$ and $G=G(n,k,l)$. If $n \neq k+l$, then $Aut(G) \cong$ $Sym([n]) $, and if $n=k+l, $ then\
$Aut(G)=H \rtimes <t> \cong H \times <t> \cong Sym([n]) \times \mathbb{Z}_2$, where $H$ and $t$ are the group and automorphism which are defined preceding this theorem.
We now consider a class of graphs which are in some combinatorial aspects similar to Johnson graphs.
Let $p$ be a positive prime integer and $q=p^m$ where $m$ is a positive integer. Let $n,k$ be positive integers with $k <n$. Let $V(q,n)$ be a vector space of dimension $n$ over the finite field $\mathbb{F}_q.$ Let $V_k$ be the family of all subspaces of $V(q,n)$ of dimension $k$. Every element of $V_k$ is also called a $k$-subspace. The Grassmann graph $G(q,n,k)$ is the graph with the vertex-set $V_{k}$, in which two vertices $u$ and $w$ are adjacent if and only if $\dim(u\cap w)=k-1$.
Note that if $k = 1$, we have a complete graph, so we shall assume that $k >1 $. It is clear that the number of vertices of the Grassmann graph $G(q,n,k)$, that is, $|V_k|$, is the Gaussian binomial coefficient, $${n\brack k}_q= \dfrac{(q^{n}-1)(q^n-q)\cdots (q^{n}-q^{k-1})}{(q^{k}-1)(q^k-q)\cdots (q^k-q^{k-1})} =\dfrac{(q^{n}-1)\cdots (q^{n-k+1}-1)}{(q^{k}-1)\cdots (q-1)}.$$
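As a quick numerical illustration of this count, take $q=2$, $n=4$ and $k=2$ in the above formula: $${4\brack 2}_2= \dfrac{(2^{4}-1)(2^{3}-1)}{(2^{2}-1)(2-1)}=\dfrac{15\cdot 7}{3\cdot 1}=35,$$ and hence the Grassmann graph $G(2,4,2)$ has $35$ vertices.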
Noting that ${n\brack k}_q={n\brack n-k}_q$, it follows that $|V_k|=|V_{n-k}|$. It is easy to show that if $1 \leq i < j \leq \frac{n}{2}$, then $|V_i| < |V_j|$. Let $( , )$ be any nondegenerate symmetric bilinear form on $V(q,n)$. For each $X \subset V(q,n)$ we let $X^{\perp}=\{ w \in V(q,n) | (x,w)=0, $ for every $ x \in X \}.$ It can be shown that if $v$ is a subspace of $V(q,n)$ then $v^{\perp}$ is also a subspace of $V(q,n)$ and $dim(v^{\perp})=n-dim(v)$. It can be shown that $G(q,n,k) \cong G(q,n,n-k)$ \[2\], hence in the sequel we assume that $k \leq \frac{n}{2}$.
It is easy to see that the distance between two vertices $v$ and $w$ in this graph is $k-dim(v\cap w)$. The Grassmann graph is a distance-regular graph of diameter $k$ \[2\]. Let $K$ be a field, and $V(n)$ be a vector space of dimension $n$ over the field $K$. Let $\tau : K\longrightarrow K$ be a field automorphism. A semilinear operator on $V(n)$ is a mapping $f : V(n)\longrightarrow V(n)$ such that,
$f(c_{1}v_{1} + c_{2}v_{2}) = \tau (c_{1})f(v_{1}) + \tau (c_{2})f(v_{2})\ (c_{1}, c_{2}\in K, \ and \ v_{1}, v_{2}\in V(n))$.
A semilinear operator $f : V(n)\longrightarrow V(n)$ is a semilinear automorphism if it is a bijection. Let $\Gamma L_n (K)$ be the group of semilinear automorphisms on $V(n)$. Note that this group contains $A(V(n))$, where $A(V(n))$ is the group of non-singular linear mappings on the space $V(n)$. Also, this group contains a normal subgroup isomorphic to $K^{*}$, namely, the group $Z= \{ kI_{V(n)} \ | \ k \in K^{*} \}$, where $I_{V(n)}$ is the identity mapping on $V(n)$. We denote the quotient group $\frac{\Gamma L_n (K)}{Z}$ by $P \Gamma L_n (K) $.
Note that if $(a+Z) \in P \Gamma L_n (K)$ and $x$ is an $m$-subspace of $V(n)$, then $ (a+Z)(x)=\{ a(u) | u \in x \}$ is an $m$-subspace of $V(n)$. In the sequel, we also denote $(a+Z) \in P \Gamma L_n (K)$ by $a$. Now, if $a \in P \Gamma L_n (\mathbb{F}_q)$, it is easy to see that the mapping, $f_a : V_k \longrightarrow V_k$, defined by the rule, $f_a(v)=a(v)$ is an automorphism of the Grassmann graph $G=G(q,n,k)$. Therefore if we let, $$A=\{ f_a | a \in P \Gamma L_n (\mathbb{F}_q) \} \ \ \ \ \ \ (1)$$ then $A$ is a group isomorphic to the group $P \Gamma L_n (\mathbb{F}_q)$ (as abstract groups), and we have $A \leq Aut(G)$.
When $n=2k$, then the Grassmann graph $G=G(q,n,k)$ has some other automorphisms. In fact, if $n=2k$, then the mapping $\theta : V_k \longrightarrow V_k$, defined by the rule $\theta(v)=v^{\perp}$ for every $k$-subspace $v$ of $V(q,2k)$, is an automorphism of the graph $G=G(q,2k,k)$. Hence $M=<A,\theta> \leq Aut(G)$. It is easy to see that $A$ is a normal subgroup of the group $M$. Therefore $M=A\rtimes <\theta>$. Note that the order of $\theta$ is 2 and hence $<\theta> \cong \mathbb{Z}_2$. Concerning the automorphism groups of Grassmann graphs, from a known fact which appears in \[3\], we have the following result \[2\].
Let $G$ be the Grassmann graph $G=G(q,n,k)$, where $n >3$ and $k \leq \frac{n}{2}$. If $n \neq 2k$, then we have $Aut(G)=A \cong P \Gamma L_n (\mathbb{F}_q)$, and if $n=2k$, then we have $Aut(G) =<A, \theta> \cong A\rtimes <\theta> \cong P \Gamma L_n (\mathbb{F}_q) \rtimes \mathbb{Z}_2$, where $A$ is the group which is defined in $(1)$ and $\theta$ is the mapping which is defined preceding this theorem.
We now proceed to determine the automorphism group of a class of bipartite graphs which are similar in some aspects to the graphs $B(n,k)$.
Let $n,k$ be positive integers such that $n\geq 3$, $k \leq n-1 $. Let $q$ be a power of a prime and $\mathbb{F}_q$ be the finite field of order $q$. Let $V(q,n)$ be a vector space of dimension $n$ over $\mathbb{F}_q$. We define the graph $S(q,n,k)$ as a graph with the vertex-set $V=V_k \cup V_{k+1}$, in which two vertices $v$ and $w$ are adjacent whenever $v$ is a subspace of $w$ or $w$ is a subspace of $v$, where $V_k$ and $V_{k+1}$ are the families of subspaces of $V(q,n)$ of dimension $k$ and $k+1$, respectively.
When $n=2k+1$, the graph $S(q,n,k)$ is known as a doubled Grassmann graph \[2\]. Noting that ${n\brack k}_q={n\brack n-k}_q$, it is easy to show that $S(q,n,k) \cong S(q,n,n-k-1)$, hence in the sequel we assume $k \leq \frac{n}{2}$. It can be shown that the graph $S(q,n,k)$ is a connected bipartite $vd$-graph. We formally state and prove this fact.
The graph $G=S(q,n,k)$ which is defined in Definition $3.14.$ is a connected bipartite $vd$-graph.
It is clear that the graph $G=S(q,n,k)$ is a bipartite graph with partition $V_k \cup V_{k+1}$. It is easy to show that $G$ is a $vd$-graph. We now show that $G$ is a connected graph. It is sufficient to show that if $v_1,v_2$ are two vertices in $V_{k}$, then there is a path in $G$ between $v_1$ and $v_2$. Let $dim(v_1 \cap v_2)=k-j$, $1 \leq j \leq k$. We prove our assertion by induction on $j$. If $j=1$, then $u=v_1+v_2$ is a subspace of $V(q,n)$ of dimension $k+k-(k-1)=k+1$, which contains both of $v_1$ and $v_2$. Hence, $u \in V_{k+1}$ is adjacent to both of the vertices $v_1$ and $v_2$. Thus, if $j=1$, then there is a path between $v_1$ and $v_2$ in the graph $G$. Assume that when $j=i$, $0 < i <k$, there is a path in $G$ between $v_1$ and $v_2$. We now assume $j=i+1$. Let $v_1 \cap v_2=w$, and let $B=\{ b_1,...,b_{k-i-1} \}$ be a basis for the subspace $w$ in the space $V(q,n)$. We can extend $B$ to bases $B_1$ and $B_2$ for the subspaces $v_1$ and $v_2$, respectively. Let $B_1= \{ b_1,...,b_{k-i-1}, c_1,...,c_{i+1} \}$ be a basis for $v_1$ and $B_2= \{ b_1,...,b_{k-i-1}, d_1,...,d_{i+1} \}$ be a basis for $v_2$. Consider the subspace $s=<b_1,...,b_{k-i-1}, c_1,d_2,...,d_{i+1}>$. Then $s$ is a $k$-subspace of the space $V(q,n)$ such that $dim(s \cap v_2)=k-1$ and $dim(s \cap v_1)=k-i$. Hence by the induction assumption, there is a path $P_1$ between vertices $v_2$ and $s$, and a path $P_2$ between vertices $s$ and $v_1$. We now conclude that there is a path in the graph $G$ between vertices $v_1$ and $v_2$.
Let $G=S(q,n,k)$ be the graph which is defined in definition $3.14.$ If $n\neq 2k+1$, then we have $Aut(G) \cong P \Gamma L_n (\mathbb{F}_q)$. If $n=2k+1$, then $Aut(G) \cong P \Gamma L_n (\mathbb{F}_q) \rtimes \mathbb{Z}_2 $.
From Proposition 3.15. it follows that the graph $G=S(q,n,k)$ is a connected bipartite $vd$-graph with the vertex-set $V_k \cup V_{k+1}$, $V_k \cap V_{k+1}= \emptyset$. Let $G_1=G(q,n,k)=(V_k,E)$ be the Grassmann graph with the vertex-set $V_k$. We show that $G_1$ is an attached graph to the graph $G$.\
Firstly, the condition (i) of Definition 3.5. holds, because $k < \frac{n}{2}$ and every automorphism of the Grassmann graph $G(q,n,r)$ is of the form $f_a$, $a \in P \Gamma L_n (\mathbb{F}_q)$, and each such $f_a$ is also an automorphism of the graph $G(q,n,s)$ whenever $r,s < \frac{n}{2}$. Also, note that if $X,Y$ are subspaces of $V(q,n)$ such that $X \leq Y$, then $f_a(X) \leq f_a(Y)$.\
Now, suppose that $f$ is an automorphism of the graph $G$ such that $f(V_k)=V_k$. We show that the restriction of $f$ to the set $V_k$, namely $g=f|_{V_k}$, is an automorphism of the graph $G_1$. It is trivial that $g$ is a permutation of the vertex-set $V_k$. Let $v$ and $w$ be adjacent vertices in the graph $G_1$. We show that $g(v)$ and $ g(w)$ are adjacent in the graph $G_1$. We assert that there is exactly one vertex $u$ in the graph $G$ such that $u$ is adjacent to both of the vertices $v$ and $w$. Indeed, if the vertex $u$ is adjacent to both of the vertices $v$ and $w$, then $v$ and $w$ are $k$-subspaces of the $(k+1)$-space $u$. Hence $u$ contains the space $v+w$. Since $dim(v+w)$=$dim(v)+dim(w)-dim(v\cap w)=k+k-(k-1)=k+1$, we have $u=v+w$. In other words, the vertex $u=v+w$ is the unique vertex in the graph $G$ such that $u$ is adjacent to both of the vertices $v$ and $w$. Also, note that our discussion shows that if $x,y \in V_k$ are such that $dim(x \cap y)\neq k-1$, then $x$ and $y$ have no common neighbor in the graph $G$.
Now, since the vertices $v$ and $w$ have exactly one common neighbor in the graph $G$, the vertices $f(v)=g(v)$ and $f(w)=g(w)$ also have exactly one common neighbor in the graph $G$. It follows that $dim(g(v) \cap g(w))=k-1$, and hence $g(v)$ and $g(w)$ are adjacent vertices in the Grassmann graph $G_1$.\
We now conclude that the graph $G_1$ is an attached graph to the graph $G$. There are two possible cases, that is (1) $2k+1 \neq n$, or (2) $2k+1 = n$.\
(1) Let $2k+1 \neq n$. Noting that ${n\brack k}_q < {n\brack k+1}_q$, it follows that $|V_k| \neq |V_{k+1}|$. Therefore, by Corollary 3.8. and Theorem 3.14. we have $Aut(G) \cong P \Gamma L_n (\mathbb{F}_q)$.\
(2) If $2k+1=n$, since ${n\brack k}_q={n\brack k+1}_q$, then $|V_k| = |V_{k+1}|$. Hence, the mapping $\theta : V(G) \longrightarrow V(G)$ defined by the rule $\theta(v) =v^{\perp}$ is an automorphism of the graph $G$ of order 2 such that $\theta(V_k)=V_{k+1}$. Hence, by Theorem $3.10.$ and Theorem $3.14$ we have, $Aut(G) \cong Aut(G_1) \rtimes <\theta> \cong P\Gamma L_n (\mathbb{F}_q) \rtimes \mathbb{Z}_2$.
We now show another application of Theorem 3.10. in determining the automorphism groups of some classes of graphs which are important in algebraic graph theory.
If $ G_1, G_2 $ are graphs, then their direct product (or tensor product) is the graph $ G_1 \times G_2 $ with vertex set $ \{( v_1,v_2) \ | \ v_1 \in G_1, v_2 \in G_2\} $, and for which vertices $( v_1,v_2)$ and $ ( w_1,w_2) $ are adjacent precisely if $ v_1$ is adjacent to $w_1$ in $G_1$ and $ v_2$ is adjacent to $w_2$ in $G_2$. It can be shown that the direct product is commutative and associative \[8\]. The following theorem, first proved by Weichsel (1962), characterizes connectedness in direct products of two factors.
$[8]$ Suppose $G_1$ and $G_2$ are connected nontrivial graphs. If at least one of $G_1$ or $G_2$ has an odd cycle, then $ G_1 \times G_2 $ is connected. If both $G_1$ and $G_2 $ are bipartite, then $ G_1 \times G_2 $ has exactly two components.
Thus, if one of the graphs $G_1$ or $G_2$ is a connected non-bipartite graph, then the graph $ G_1 \times G_2 $ is a connected graph. If $K_2$ is the complete graph on the set $\{ 0,1 \}$, then the direct product $B(G)=G \times K_2$ is a bipartite graph, and is called the bipartite double of $G$ (or the bipartite double cover of $G$). Then, $$V(B(G))=\{(v,i)| v\in V(G), i \in \{ 0,1 \} \},$$ and two vertices $(x,a)$ and $(y,b)$ are adjacent in the graph $B(G)$, if and only if $a \neq b$ and $x$ is adjacent to $y$ in the graph $G$. The notion of the bipartite double of $G$ has many applications in algebraic graph theory \[2\].
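Weichsel's theorem is easy to confirm computationally on small instances. The sketch below (illustrative only; `direct_product` and `components` are hypothetical helper names, and graphs are plain adjacency dictionaries) checks that $K_3 \times K_2$ is connected, since $K_3$ has an odd cycle, while $C_4 \times K_2$ has exactly two components, since both factors are bipartite.

```python
from itertools import product

def direct_product(g1, g2):
    """Direct (tensor) product: (v1,v2) ~ (w1,w2) iff v1~w1 in g1 and v2~w2 in g2."""
    return {
        (v1, v2): [(w1, w2) for w1 in g1[v1] for w2 in g2[v2]]
        for v1, v2 in product(g1, g2)
    }

def components(g):
    """Number of connected components, by depth-first search."""
    seen, comps = set(), 0
    for s in g:
        if s in seen:
            continue
        comps += 1
        stack = [s]
        seen.add(s)
        while stack:
            x = stack.pop()
            for y in g[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
    return comps

K2 = {0: [1], 1: [0]}
K3 = {i: [j for j in range(3) if j != i] for i in range(3)}   # has an odd cycle
C4 = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}        # bipartite

print(components(direct_product(K3, K2)))  # 1: B(K_3) is the 6-cycle, connected
print(components(direct_product(C4, K2)))  # 2: both factors bipartite
```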
Consider the bipartite double of $G$, namely, the graph $B(G)= G \times K_2 .$ It is easy to see that the group $Aut(B(G))$ contains the group $Aut(G) \times \mathbb{Z}_2$ as a subgroup. In fact, if for $g \in Aut(G)$ we define the mapping $e_g$ by the rule $e_g(v,i)=(g(v),i)$, $i\in \{0,1\}, v\in V(G)$, then $e_g \in Aut(B(G))$. It is easy to see that $H=\{e_g | g \in Aut(G) \} \cong Aut(G)$ is a subgroup of $Aut(B(G))$. Let $t$ be the mapping defined on $V(B(G))$ by the rule $t(v,i)=(v,i^c), $ where $i^c=1$ if $i=0$ and $i^c=0$ if $i=1$. It is clear that $t$ is an automorphism of the graph $B(G).$ Hence, $<H,t> \leq Aut(B(G)).$ Noting that for every $e_g \in H$ we have $e_gt=te_g, $ we deduce that $<H,t> \cong H \times <t>$. We now conclude that $Aut(G) \times \mathbb{Z}_2 \cong H \times \mathbb{Z}_2 \leq Aut(B(G)). $ Let $G$ be a graph. $G$ is called a $stable$ graph when we have $Aut(B(G)) \cong Aut(G) \times \mathbb{Z}_2$. Concerning this notion and some properties of stable graphs, see \[12,13,20,21\].
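For a tiny concrete instance of this containment (an illustrative brute-force sketch; `automorphism_count` is a hypothetical helper, feasible only for very small graphs), note that $B(K_3)$ is the $6$-cycle: counting automorphisms directly gives $|Aut(B(K_3))|=12=2\cdot|Aut(K_3)|$, so $K_3$ is a stable graph.

```python
from itertools import permutations, product

def automorphism_count(g):
    """Brute-force |Aut(G)| for a tiny graph given as an adjacency dict."""
    verts = sorted(g)
    edges = {frozenset((u, v)) for u in g for v in g[u]}
    count = 0
    for perm in permutations(verts):
        m = dict(zip(verts, perm))
        # A bijection on vertices is an automorphism iff it maps edges to edges.
        if all(frozenset((m[u], m[v])) in edges for (u, v) in map(tuple, edges)):
            count += 1
    return count

K3 = {i: [j for j in range(3) if j != i] for i in range(3)}
# Bipartite double B(K_3) = K_3 x K_2, which is the 6-cycle C_6.
B = {(v, i): [(w, 1 - i) for w in K3[v]] for v, i in product(range(3), (0, 1))}

a, b = automorphism_count(K3), automorphism_count(B)
print(a, b)  # 6 12: |Aut(B(K_3))| = 2*|Aut(K_3)|, so K_3 is stable
```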
Let $n,k \in \mathbb{ N}$ with $ k < \frac{n}{2} $ and let $[n]=\{1,...,n\}$. The $Kneser$ $graph$ $K(n,k)$ is defined as the graph whose vertex set is $V=\{v\mid v\subseteq [n], |v|=k\}$, in which two vertices $v$,$w$ are adjacent if and only if $|v\cap w|=0$. It is easy to see that if $H(n,k)$ is a bipartite Kneser graph, then $H(n,k) \cong K(n,k) \times K_2$. Now, it follows from Theorem 3.11 (or \[17\]) that Kneser graphs are stable graphs.
The next theorem provides a sufficient condition under which a connected non-bipartite $vd$-graph $G$ is a stable graph.
Let $G=(V,E)$ be a connected non-bipartite $vd$-graph. For arbitrary $v,w \in V$, let $c(v,w)$ be the number of common neighbors of $v$ and $w$ in the graph $G$. Let $a_0 >0$ be a fixed integer. If $c(v,w)=a_0$ when $v$ and $w$ are adjacent, and $c(v,w) \neq a_0$ when $v$ and $w$ are non-adjacent, then we have $$Aut(G \times K_2) = Aut(B(G)) \cong Aut(G) \times \mathbb{Z}_2;$$ in other words, $G$ is a stable graph.
Note that the graph $G \times K_2$ is a bipartite graph with the vertex set $V=U \cup W$, where $U= \{ (v,0) | v\in V(G) \}$ and $W= \{ (v,1) | v\in V(G) \}.$ Since $G$ is a $vd$-graph, the graph $G \times K_2$ is a $vd$-graph. In fact, if the vertices $x,y \in V$ are such that $N(x)=N(y)$, then $x,y \in U$ or $x,y \in W. $ Without loss of generality, we can assume that $x,y \in U.$ Let $x=(u_1,0)$ and $y=(u_2,0)$. Let $N(x)=\{(v_1,1), (v_2,1), ..., (v_m,1) \}$ and $N(y)=\{(t_1,1), (t_2,1), ..., (t_p,1) \}$, where the $v_i$'s and $t_j$'s are in $V(G). $ Thus $m=p$ and $N(u_1)=\{v_1,...,v_m \}$=$\{t_1,...,t_m \}=N(u_2).$ Now, since $G$ is a $vd$-graph, it follows that $u_1=u_2$ and therefore $x=y$.\
Let $G_1=(U,E_1)$ be the graph with vertex-set $U$ in which two vertices $(v,0)$ and $(w,0)$ are adjacent if and only if $v$ and $w$ are adjacent in the graph $G$. It is clear that $G_1 \cong G$. Therefore, we have $Aut(G_1) \cong Aut(G).$ For every $f \in Aut(G)$ we let $$d_f : U \rightarrow U, \ d_f(v,0)=(f(v),0), \ for\ every\ (v,0) \in U;$$ then $d_f$ is an automorphism of the graph $G_1.$ If we let $A=\{d_f | f \in Aut(G) \}$, then $A$ with the operation of composition is a group, and it is easy to see that $A \cong Aut(G_1)$ (as abstract groups). We now assert that the graph $G_1$ is an attached graph to the bipartite graph $B=G \times K_2$. Let $g \in Aut(B)$ be such that $g(U)=U.$ We assert that $h=g|_U$, the restriction of $g$ to $U$, is an automorphism of the graph $G_1$. It is clear that $h$ is a permutation of $U$. Let $(v,0)$ and $(w,0)$ be adjacent vertices in $G_1$. Then $v,w$ are adjacent in the graph $G$. Hence, there are vertices $u_1,...,u_{a_0}$ in the graph $G$ such that the set of common neighbors of $v$ and $w$ in $G$ is $\{u_1,...,u_{a_0} \}$. Noting that $(x,1)$ is a common neighbor of $(v,0)$ and $(w,0)$ in the graph $B$ if and only if $x$ is a common neighbor of $v,w$ in the graph $G$, we deduce that the set $\{ (u_1,1),...,(u_{a_0},1) \}$ is the set of common neighbors of $(v,0)$ and $(w,0)$ in the graph $B$. Since $g$ is an automorphism of the graph $B$, the vertices $g(v,0)$ and $g(w,0)$ have $a_0$ common neighbors in the graph $B$. Note that if $d_{G_1}(g(v,0),g(w,0)) >2 $, then these vertices have no common neighbor in the graph $B$. Also, if $d_{G_1}(g(v,0),g(w,0)) =2 $ and $g(v,0)=(v',0)$, $g(w,0)=(w',0)$, then $d_G(v',w')=2$, hence $v',w'$ have $ c(v',w')\neq a_0$ common neighbors in the graph $G$, and therefore $g(v,0),g(w,0)$ have $ c(v',w') \neq a_0$ common neighbors in the graph $B$, a contradiction. We now deduce that $d_{G_1}(g(v,0),g(w,0)) =1$.
It follows that $h=g|_U$ is an automorphism of the graph $G_1$. Thus, the condition (ii) of Definition 3.5. holds for the graph $G_1.$
Now, suppose that $\phi$ is an automorphism of the graph $G_1.$ Then there is an automorphism $f$ of the graph $G$ such that $\phi = d_f.$ Now, we define the mapping $e_{\phi}$ on the set $V(B)$ by the following rule:
$$(*)\ \ \ \ e_{\phi}(v,i) = \begin{cases}
(f(v),0), \ if \ i=0 \\ (f(v),1), \ if \ i=1 \\
\end{cases}$$ It is easy to see that $e_{\phi}$ is an extension of the automorphism $\phi$ to an automorphism of the graph $B$. We now deduce that the graph $G_1$ is an attached graph to the graph $B$. On the other hand, it is easy to see that the mapping $t : V(B) \rightarrow V(B)$, which is defined by the rule, $$(**) \ \ \ \ t(v,i) = \begin{cases}
(v,0), \ if \ i=1 \\ (v,1), \ if \ i=0 \\
\end{cases}$$ is an automorphism of the graph $B$ of order 2. Hence, $<t> \cong \mathbb{Z}_2. $ Also, it is easy to see that for every automorphism $\phi$ of the graph $G_1$ we have $te_{\phi}=e_{\phi}t$. We now conclude by Theorem 3.10. that, $$Aut(G \times K_2)=Aut(B) \cong Aut(G_1) \rtimes <t> \cong Aut(G) \times <t> \cong Aut(G) \times \mathbb{Z}_2.$$
As an application of Theorem 3.18. we show that the Johnson graph $J(n,k)$ is a stable graph. Since $J(n,k) \cong J(n,n-k)$, in the sequel we assume that $k \leq \frac{n}{2}$.
Let $n,k$ be positive integers with $k \leq \frac{n}{2}$. If $n\neq 6$, then the Johnson graph $J(n,k)$ is a stable graph.
We know that the vertex set of the graph $J(n,k)$ is the set of $k$-subsets of $[n]=\{ 1,2,3,...,n \}$, in which two vertices $v$ and $w$ are adjacent if and only if $|v\cap w|=k-1$. If $k=1$, then $J(n,k) \cong K_n$, the complete graph on $n$ vertices. It is easy to see that if $X=K_n$, then the bipartite double of $X$ is isomorphic with the bipartite Kneser graph $H(n,1)$. From Corollary 3.11. we know that $Aut(H(n,1)) \cong Sym([n]) \times \mathbb{Z}_2 \cong Aut(K_n) \times \mathbb{Z}_2$. Hence the Johnson graph $J(n,k)$ is a stable graph when $k=1$. We now assume that $k \geq 2$. We let $G=J(n,k)$. It is easy to see that $G$ is a $vd$-graph. It can be shown that if $v,w$ are vertices in $G$, then $d(v,w)=k-|v \cap w|$ \[2\]. Hence, $G$ is a connected graph. It is easy to see that the girth of the Johnson graph $J(n,k)$ is 3. Therefore, $G$ is a non-bipartite graph. It is clear that when $d(v,w) \geq 3$, the vertices $v,w$ have no common neighbors. We now consider the two other possible cases, that is, (i) $d(v,w)=2$ or (ii) $d(v,w)=1$. Let $c(v,w)$ denote the number of common neighbors of $v,w$ in $G$. In the sequel, we show that if $d(v,w)=2$, then $c(v,w)=4$, and if $d(v,w)=1$, then $c(v,w)=n-2$.
\(i) If $d(v,w)=2$, then $|v\cap w|=k-2$. Let $v \cap w=u$. Then $v=u \cup \{i_1,i_2 \}$, $w=u \cup \{j_1,j_2 \}$, where $i_1,i_2,j_1,j_2 \in [n]$, $\{i_1,i_2 \} \cap \{j_1,j_2 \}=\emptyset$. Let $x \in V(G)$. It is easy to see that if $|x\cap u| < k-2$, then $x$ cannot be a common neighbor of $v,w$. Hence, if $x$ is a common neighbor of $v,w$, then $x$ is of the form $x=u\cup \{r,s \}$, where $r \in \{ i_1,i_2 \}$ and $s \in \{ j_1,j_2 \}$. We now deduce that the number of common neighbors of $v,w$ in the graph $G$ is 4. (ii) We now assume that $d(v,w)=1$. Then $|v\cap w|=k-1$. Let $v \cap w=u$. Then $v=u \cup \{r\}$, $w=u \cup \{s \}$, where $r,s \in [n]$, $r \neq s$. Let $x \in V(G)$. It is easy to see that if $|x\cap u| < k-2$, then $x$ cannot be a common neighbor of $v,w$. Hence, if $x$ is a common neighbor of $v,w$, then $|x \cap u|=k-1$ or $|x \cap u|=k-2$. In the first step, we assume that $|x \cap u|=k-1$. Then $x$ is of the form $x=u\cup \{ y\}$, where $y \in [n]-(v\cup w)$. Since $|v\cup w|=k+1$, the number of such $x$'s is $n-k-1$. We now assume that $|x \cap u|=k-2$. Hence, $x$ is of the form $x=t \cup \{ r,s \}$, where $t$ is a $(k-2)$-subset of the $(k-1)$-set $u$. Therefore, the number of such $x$'s is $\binom{k-1}{k-2}=k-1$. Our argument shows that if $v$ and $w$ are adjacent, then we have $c(v,w)=n-k-1+k-1=n-2$. Noting that $n-2 \neq 4$ when $n \neq 6$, we conclude from Theorem 3.18 that the Johnson graph $J(n,k)$ is a stable graph when $n \neq 6$.
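Both counts, together with the distance formula $d(v,w)=k-|v\cap w|$, can be confirmed by exhaustive search on a small case; the sketch below (illustrative only) does this for $J(7,3)$, where adjacent pairs must have $n-2=5$ common neighbors and distance-$2$ pairs exactly $4$.

```python
from itertools import combinations

n, k = 7, 3
verts = [frozenset(c) for c in combinations(range(1, n + 1), k)]
adj = {v: {w for w in verts if len(v & w) == k - 1} for v in verts}

ok = True
for v, w in combinations(verts, 2):
    d = k - len(v & w)            # distance in J(n,k): d(v,w) = k - |v ∩ w|
    c = len(adj[v] & adj[w])      # number of common neighbors
    if d == 1:
        ok &= (c == n - 2)        # adjacent pairs: n - 2 = 5 common neighbors
    elif d == 2:
        ok &= (c == 4)            # distance-2 pairs: exactly 4
    else:
        ok &= (c == 0)            # distance >= 3: none
print(ok)  # True
```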
Although Theorem 3.19. does not say anything about the stability of the Johnson graph $J(6,k)$, we show by the next result that this graph is also a stable graph.
The Johnson graph $J(6,k)$ is a stable graph.
When $k=1$ the assertion is true, hence we assume that $k\in \{ 2,3 \}$. In the first step we show that the Johnson graph $J(6,2)$ is a stable graph. Let $B=J(6,2) \times K_2$. We show that $Aut(B) \cong Sym([6]) \times \mathbb{Z}_2$, where $[6]= \{ 1,2,...,6 \}$. It is clear that $B$ is a bipartite $vd$-graph. Let $V=V(B)$ be the vertex-set of the graph $B$. Then $V=V_0 \cup V_1$, where $V_i= \{(v,i) | v\subset [6], |v|=2 \}$, $i \in \{ 0,1\}$. Let $G_1=(V_0,E_1)$ be the graph with the vertex-set $V_0$, in which two vertices $(v,0),(w,0)$ are adjacent whenever $|v \cap w|=1$. It is clear that $G_1$ is isomorphic with the Johnson graph $J(6,2)$. Hence, we have $Aut(G_1) \cong Sym([6])$. We show that $G_1$ is an attached graph to the graph $B$. By what is seen in (\*) of the proof of Theorem 3.18. it is clear that if $h$ is an automorphism of the graph $G_1$, then $h$ can be extended to an automorphism $e_h$ of the graph $B$. Thus, the condition (i) of Definition 3.5. holds for the graph $G_1$.
Let $a=(v,0)$ and $b=(w,0)$ be two adjacent vertices in the graph $G_1$, that is, $|v\cap w|=1$. Let $N(a,b)$ denote the set of common neighbors of $a$ and $b$ in the graph $B$. Let $X(a,b)=\{a,b\}\cup N(a,b) \cup t(N(a,b))$, where $t$ is the automorphism of the graph $B$ defined by the rule $t(v,i)=(v,i^c)$, $i^c \in \{ 0,1 \}, i^c\neq i$. Let $<X(a,b)>$ be the subgraph induced by the set $X(a,b)$ in the graph $B$. It can be shown that if $a,b$ are adjacent vertices in $G_1$, that is, $|v\cap w|=1$, then $<X(a,b)>$ has a vertex of degree $0$. On the other hand, when $a,b$ are not adjacent vertices in $G_1$, that is, $|v\cap w|=0$, then $<X(a,b)>$ has no vertices of degree $0$. In the remainder of the proof, we write $xy$ for the set $\{ x,y \}$. For example, let $r=(12,0)$ and $s=(13,0)$ be two adjacent vertices of $G_1$. Then $X(r,s)=\{(12,0),(13,0),(14,1),(15,1),(16,1),(23,1),(14,0),(15,0),(16,0),(23,0) \}$. Now, in the graph $<X(r,s)>$ the vertex $(23,0)$ is a vertex of degree 0. On the other hand, if we let $r=(12,0)$, $u=(34,0)$, then $r,u$ are not adjacent in the graph $G_1$. Then $X(r,u)=\{ (12,0),(34,0),(13,1),(14,1),(23,1),(24,1),(13,0),(14,0),(23,0),(24,0)\}$. Now, it is clear that the graph $<X(r,u)>$ has no vertices of degree 0.\
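The degree-$0$ criterion can be verified exhaustively as well. The following sketch (illustrative; `has_isolated_in_X` is a hypothetical helper, and vertices of $J(6,2)$ are stored as frozensets rather than in the $xy$ shorthand) confirms that, over all pairs in $V_0$, the induced subgraph $<X(a,b)>$ has an isolated vertex precisely when $a$ and $b$ are adjacent.

```python
from itertools import combinations

verts = [frozenset(c) for c in combinations(range(1, 7), 2)]  # vertices of J(6,2)
adjG = {v: {w for w in verts if len(v & w) == 1} for v in verts}

# Bipartite double B = J(6,2) x K_2.
B = {(v, i): {(w, 1 - i) for w in adjG[v]} for v in verts for i in (0, 1)}

def has_isolated_in_X(a, b):
    """Build X(a,b) = {a,b} ∪ N(a,b) ∪ t(N(a,b)) and test the induced
    subgraph for a degree-0 vertex, where t flips the K_2 coordinate."""
    nab = B[a] & B[b]                                 # common neighbors N(a,b)
    X = {a, b} | nab | {(v, 1 - i) for (v, i) in nab}
    return any(not (B[x] & X) for x in X)

result = all(
    has_isolated_in_X((v, 0), (w, 0)) == (len(v & w) == 1)
    for v, w in combinations(verts, 2)
)
print(result)  # True
```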
Note that the graph $G_1$ is isomorphic with the Johnson graph $J(6,2)$, hence $G_1$ is a distance-transitive graph. Now, if $c,d$ are two adjacent vertices in the graph $G_1$, then there is an automorphism $f$ in $Aut(G_1)$ such that $f(r)=c$ and $f(s)=d$. Let $e_f$ be the extension of $f$ to an automorphism of the graph $B$. Therefore, $<X(c,d)>=<X(e_f(r),e_f(s))>=e_f(<X(r,s)>)$ has a vertex of degree $0$. This argument also shows that if $p,q$ are non-adjacent vertices in the graph $G_1$, then $<X(p,q)>$ has no vertices of degree $0$.
Now, let $g$ be an automorphism of the graph $B$ such that $g(V_0)=V_0$. We show that $g|_{V_0}$ is an automorphism of the graph $G_1$. Let $a=(v,0)$ and $b=(w,0)$ be two adjacent vertices of the graph $G_1$, that is, $|v\cap w|=1$. Then $<X(a,b)>$ has a vertex of degree 0. Hence, $g(<X(a,b)>)=<X(g(a),g(b))>$ has a vertex of degree 0. Then $g(a)$ and $g(b)$ are adjacent in the graph $G_1$. We now deduce that if $g$ is an automorphism of the graph $B$ such that $g(V_0)=V_0$, then $g|_{V_0}$ is an automorphism of the graph $G_1$. Therefore, the condition (ii) of Definition 3.5. holds for the graph $G_1$. Therefore, $G_1$ is an attached graph to the graph $B$. Note that $t$ is an automorphism of the graph $B$ of order 2 such that $t(V_0)=V_1$ and $t(V_1)=V_0$. Also, we have $tf=ft$ for every $f\in Aut(G_1)$. We now conclude by Theorem 3.10. that $$Aut(B) \cong Aut(G_1)\rtimes <t> \cong Aut(G_1)\times <t> \cong Aut(G_1)\times \mathbb{Z}_2 \cong Sym([6])\times \mathbb{Z}_2.$$ Therefore, the graph $J(6,2)$ is a stable graph. By a similar argument, we can show that the graph $J(6,3)$ is a stable graph.
Combining Theorem 3.19. and Proposition 3.20. we obtain the following result.
The Johnson graph $J(n,k)$ is a stable graph.
Conclusion
==========
In this paper, we gave a method for finding the automorphism groups of connected bipartite $vd$-graphs (Theorem 3.9. and Theorem 3.10). Then, by our method, we explicitly determined the automorphism groups of some classes of bipartite $vd$-graphs, including the graph $S(q,n,k)$, which is a graph derived from the Grassmann graph $G(q,n,k)$ (Theorem 3.16). Also, we provided an easily verifiable sufficient condition such that, when a connected non-bipartite $vd$-graph $G$ satisfies this condition, then $G$ is a stable graph (Theorem 3.18). Finally, we showed that the Johnson graph $J(n,k)$ is a stable graph (Theorem 3.21).
Biggs N.L, Algebraic Graph Theory 1993 (Second edition), Cambridge Mathematical Library (Cambridge University Press; Cambridge).
Brouwer A.E, Cohen A.M, Neumaier A, Distance-Regular Graphs, Springer-Verlag, New York, 1989.
Chow W.L, On the geometry of algebraic homogeneous spaces, Ann. of Math. (2) 50 (1949) 32-67.
Dixon J.D, Mortimer B, Permutation Groups, Graduate Texts in Mathematics, New York, Springer-Verlag (1996).
A. Ganesan, Automorphism groups of Cayley graphs generated by connected transposition sets, Discrete Math. 313, 2482-2485 (2013).
Ganesan A, Automorphism group of the complete transposition graph, J Algebr Comb, DOI 10.1007/s10801-015-0602-5, (2015).
Godsil C, Royle G, Algebraic Graph Theory, 2001, Springer.
Hammack R, Imrich W, Klavzar S, Handbook of Product Graphs, second edition, CRC press 2011.
Huang X, Huang Q, Wang J. The spectrum and automorphism group of the set-inclusion graph, http://arxiv.org/abs/1809.00889v3.
Jones G.A, Automorphisms and regular embeddings of merged Johnson graphs, European Journal of Combinatorics, 26:417-435, 2005.
Lu L, Huang Q, Distance eigenvalues of $B(n, k)$, Linear and Multilinear Algebra, https://doi.org/10.1080/03081087.2019.1659221.
D. Marušič, R. Scapellato and N. Zagaglia Salvi, A characterization of particular symmetric (0, 1) matrices, Linear Algebra Appl. 119 (1989), 153-162.

D. Marušič, R. Scapellato and N. Zagaglia Salvi, Generalized Cayley graphs, Discrete Math. 102 (1992), no. 3, 279-285.
Mirafzal S.M, Some other algebraic properties of folded hypercubes, Ars Comb. 124, 153-159 (2016).
Mirafzal S.M, Cayley properties of the line graphs induced by consecutive layers of the hypercube, Arxive: 1711.02701v5, submitted.
Mirafzal S.M, A new class of integral graphs constructed from the hypercube, Linear Algebra Appl. 558 (2018) 186-194.
Mirafzal S.M, The automorphism group of the bipartite Kneser graph, Proceedings-Mathematical Sciences, (2019), doi.org/10.1007/s12044-019-0477-9.
Mirafzal S.M, Ziaee M, Some algebraic aspects of enhanced Johnson graphs, Acta Math. Univ. Comenianae, 88(2) (2019), 257-266.
Mirafzal S.M, Heidari A, Johnson graphs are panconnected, Proceedings-Mathematical Sciences, (2019), https://doi.org/10.1007/s12044-019-0527-3.
Qin Y.L, Xia B, Zhou, S, Stability of circulant graphs, Arxiv:1802.04921v2, 2018.
D. Surowski, Automorphism groups of certain unstable graphs, Math. Slovaca 53 (2003), no. 3, 215-232.
Wang Y.I, Feng Y.Q, Zhou J.X, Automorphism Group of the Varietal Hypercube Graph, Graphs and Combinatorics 2017; https://doi.org/10.1007/s00373-017-1827-y.
[^1]: 2010 *Mathematics Subject Classification*: 05C25
[^2]: *Keywords*: automorphism group, bipartite double of a graph, Grassmann graph, stable graph, Johnson graph.
[^3]: *Date*:
Facing adversity for the first time during a season in which they dominated county competition, Rosspoint coach Johnny Simpson wasn’t sure how his Lady Cats would react as they faced a three-point deficit with four minutes to play in the fifth- and sixth-grade county championship game Thursday at Harlan County High School against a James A. Cawood squad that had battled from the opening tip.
“You don’t know what to expect with 10- and 11-year-old kids. The closest game we had in the county was 10 or 11 points,” Simpson said. “That’s 100 percent on JACES. They packed it in and we didn’t hit a lot of jump shots. I told my girls the whole game to drive. The last quarter we finally did.”
The Lady Cats answered the challenge with defense and clutch free-throw shooting, hitting nine of 12 at the line in the final four minutes to rally for a 35-30 win over the Trojanettes.
“That’s big. Reagan Clem and Shasta Brackett stepped up big, and Jaylee Cochran hit one also,” Simpson said.
Down 22-18 going into the fourth quarter, the Lady Cats picked up the pressure on defense by going to a man-to-man that limited the Trojanettes to eight points.
“I had man-to-man in my back pocket. We did a good job with it. I tell them you will play that a lot in high school, so you need to know,” Simpson said. “They did a good job switching and sagging off. I don’t know where our minds were at for three quarters, but they stepped up in the fourth quarter. This is a good group of girls. They haven’t even scraped the surface of their potential.”
Brackett, a sixth-grade center, scored 12 points to lead the 21-2 Lady Cats. Clem, a fifth-grade guard, added nine points.
The 12-4 Trojanettes were led by sixth-grade guard Carmen Thomas with 14 points. Taylynn Napier, a fourth-grade guard, added eight points.
Rosspoint defeated JACES three times during the regular season, all by double digits, including a 46-16 win on Oct. 26.
“Carmen is a good kid who has always been great on defense but is starting to become more of a weapon on offense,” Simpson said. “I thought Taylynn Napier killed us early shooting 3s. JACES played awesome, and this is the best game we’ve been involved in all year. I’m just glad to come out on top.”
———
Black Mountain solidified its case as the county’s most improved team, capturing third place with a 33-26 win over Wallins after entering the tournament as the fifth seed.
Jayla Dillman scored 15 points and Madalyn Bennett added 13 for Black Mountain.
Kylie Runions led Wallins with 12 points.
———
Rosspoint (35) — Reagan Clem 9, Jaycee Simpson 3, Lauren Lewis 3, Jaylee Cochran 8, Shasta Brackett 12.
James A. Cawood (30) — Carmen Thomas 14, Taylynn Napier 8, Lia-Kate Carter 6, Addy Davis 1, Haley Berry 1.
———
Black Mountain (33) — Madalyn Bennett 13, Jayla Dillman 15, Carly Turner 3, Kelsie Middleton 2.
Wallins (26) — Kayleigh Templeton 9, Kylie Runions 12, Brooklyn Haywood 2, Addison Day 3.
---
abstract: 'We prove the total positivity of the Narayana triangles of type $A$ and type $B$, and thus affirmatively confirm a conjecture of Chen, Liang and Wang and a conjecture of Pan and Zeng. We also prove the strict total positivity of the Narayana squares of type $A$ and type $B$.'
address:
- 'School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, P.R. China'
- 'Center for Combinatorics, LPMC, Nankai University, Tianjin 300071, P.R. China'
author:
- Yi Wang
- 'Arthur L.B. Yang'
title: Total positivity of Narayana matrices
---
Totally positive matrices, the Narayana triangle of type $A$, the Narayana triangle of type $B$, the Narayana square of type $A$, the Narayana square of type $B$\
*AMS Classification 2010:* 05A10, 05A20
Introduction
============
Let $M$ be a (finite or infinite) matrix of real numbers. We say that $M$ is [*totally positive*]{} (TP) if all its minors are nonnegative, and we say that it is [*strictly totally positive*]{} (STP) if all its minors are positive. Total positivity is an important and powerful concept and arises often in analysis, algebra, statistics and probability, as well as in combinatorics. See [@And87; @Bre95; @Bre96; @CLW-EuJC; @CLW-LAA; @FJ11; @Kar68; @Pin10] for instance.
Let $C(n,k)=\binom{n}{k}$. It is well known [@Kar68 p. 137] that the Pascal triangle $$P=\left[C(n,k)\right]_{n,k\geq 0}
=\left[\begin{array}{rrrrrr}
1 & & & & & \\
1 & 1 & & & & \\
1 & 2 & 1 & & & \\
1 & 3 & 3 & 1 & & \\
1 & 4 & 6 & 4 & 1 & \\
\vdots & & & & & \ddots \\
\end{array}\right]$$ is totally positive. Let $$P^\c=\left[C({n+k},{k})\right]_{n,k\geq 0}=
\left[\begin{array}{ccccc}
1 & 1 & 1 & 1 & \cdots\\
1 & 2 & 3 & 4 & \\
1 & 3 & 6 & 10 & \\
1 & 4 & 10 & 20 & \\
\vdots & & & & \ddots \\
\end{array}\right]$$ be the Pascal square. Then $P^\c=PP^T$ by the Vandermonde convolution formula $$\binom{n+k}{k}=\sum_{i}\binom{n}{i}\binom{k}{i}.$$ Note that the transpose and the product of matrices preserve total positivity. Hence $P^\c$ is also TP.
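Both statements are easy to spot-check numerically. The sketch below (illustrative only; `det` and `min_minor` are hypothetical helpers, and checking minors up to order $3$ of a $7\times 7$ truncation is of course not a proof) verifies the nonnegativity of the minors of $P$ and the identity $P^\c=PP^T$ entrywise.

```python
from itertools import combinations
from math import comb

def det(M):
    """Integer determinant by cofactor expansion (fine for small minors)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def min_minor(A, order):
    """Smallest minor of A over all square submatrices up to the given order."""
    m, n = len(A), len(A[0])
    return min(det([[A[i][j] for j in cols] for i in rows])
               for r in range(1, order + 1)
               for rows in combinations(range(m), r)
               for cols in combinations(range(n), r))

N = 7
P  = [[comb(i, j) for j in range(N)] for i in range(N)]       # Pascal triangle
Pc = [[comb(i + j, j) for j in range(N)] for i in range(N)]   # Pascal square

print(min_minor(P, 3) >= 0)   # True: minors of P (up to order 3) are nonnegative
print(Pc == [[sum(P[i][t] * P[j][t] for t in range(N)) for j in range(N)]
             for i in range(N)])  # True: the square equals P * P^T (Vandermonde)
```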
The main objective of this note is to prove the following two conjectures on the total positivity of the Narayana triangles. Let $NA(n,k)=\frac{1}{k+1}\binom{n+1}{k}\binom{n}{k}$, which are commonly known as the Narayana numbers. Let $$N_A=\left[NA(n,k)\right]_{n,k\geq 0}
=\left[\begin{array}{rrrrrr}
1 & & & & & \\
1 & 1 & & & & \\
1 & 3 & 1 & & & \\
1 & 6 & 6 & 1 & & \\
1 & 10 & 20 & 10 & 1 & \\
\vdots & & & & & \ddots \\
\end{array}\right].$$ The Narayana numbers $NA(n,k)$ have many combinatorial interpretations. An interesting one is that they appear as the rank numbers of the poset of noncrossing partitions associated to a Coxeter group of type $A$, see Armstrong [@Armstrong09 Chapter 4]. For this reason, we call $N_A$ the Narayana triangle of type $A$. Chen, Liang and Wang [@CLW-LAA] proposed the following conjecture.
\[CLW\] The Narayana triangle $N_A$ is TP.
Let $NB(n,k)=\binom{n}{k}^2$, and let $$N_B=\left[NB(n,k)\right]_{n,k\geq 0}
=\left[\begin{array}{rrrrrr}
1 & & & & & \\
1 & 1 & & & & \\
1 & 4 & 1 & & & \\
1 & 9 & 9 & 1 & & \\
1 & 16 & 36 & 16 & 1 & \\
\vdots & & & & & \ddots \\
\end{array}\right].$$ We call $N_B$ the Narayana triangle of type $B$ since the numbers $NB(n,k)$ can be interpreted as the rank numbers of the poset of noncrossing partitions associated to a Coxeter group of type $B$, see also Armstrong [@Armstrong09 Chapter 4] and references therein. Pan and Zeng [@PZ16] proposed the following conjecture.
\[PZ\] The Narayana triangle $N_B$ is TP.
In this note, we will prove that the Narayana triangles $N_A$ and $N_B$ are TP just like the Pascal triangle in a unified approach. We also prove that the corresponding Narayana squares $$N_A^\c=\left[NA(n+k,k)\right]_{n,k\geq 0}
=\left[\begin{array}{rrrrrr}
1 & 1 & 1 & 1 & & \cdots \\
1 & 3 & 6 & 10 & \\
1 & 6 & 20 & 50 & \\
1 & 10 & 50 & 175 & \\
\vdots & & & & \ddots \\
\end{array}\right]$$ and $$N_B^\c=\left[NB({n+k},{k})\right]_{n,k\geq 0}=
\left[\begin{array}{ccccc}
1 & 1 & 1 & 1 & \cdots\\
1 & 4 & 9 & 16 & \\
1 & 9 & 36 & 100 & \\
1 & 16 & 100 & 400 & \\
\vdots & & & & \ddots \\
\end{array}\right]$$ are STP, as well as the Pascal square.
The Narayana triangles
======================
The main aim of this section is to prove the total positivity of the Narayana triangles $N_A$ and $N_B$.
Before proceeding to the proof, let us first note a simple property of totally positive matrices. Let $X=[x_{n,k}]$ and $Y=[y_{n,k}]$ be two matrices. If there exist positive numbers $a_n$ and $b_k$ such that $y_{n,k}=a_nb_kx_{n,k}$ for all $n$ and $k$, then we denote $x_{n,k}\sim y_{n,k}$ and $X\sim Y$. The following result is direct by definition.
\[prop-eq\] Suppose that $X\sim Y$. Then the matrix $X$ is TP (resp. STP) if and only if the matrix $Y$ is TP (resp. STP).
Our proof of Conjectures \[CLW\] and \[PZ\] is based on the Pólya frequency property of certain sequences. Let $(a_n)_{n\ge 0}$ be an infinite sequence of real numbers, and define its Toeplitz matrix as $$[a_{n-k}]_{n,k\ge 0}=
\left[\begin{array}{ccccc}
a_0 & & & & \\
a_1 & a_0 & & & \\
a_2 & a_1 & a_0 & & \\
a_3 & a_2 & a_1 & a_0 & \\
\vdots & & & & \ddots \\
\end{array}\right].$$ Recall that $(a_n)_{n\ge 0}$ is said to be a [*Pólya frequency*]{} (PF) sequence if its Toeplitz matrix is TP. The following is the fundamental representation theorem for PF sequences, see Karlin [@Kar68 p. 412] for instance.
\[SE-thm\] A nonnegative sequence $(a_0=1,a_1,a_2,\ldots)$ is PF if and only if its generating function has the form $$\sum_{n\ge 0}a_nx^n=\frac{\prod_j(1+\alpha_j x)}{\prod_j(1-\beta_j x)}e^{\gamma x},$$ where $\alpha_j,\beta_j,\gamma\ge 0$ and $\sum_j(\alpha_j+\beta_j)<+\infty$.
Clearly, the sequence $(1/n!)_{n\geq 0}$ is PF by the Schoenberg-Edrei theorem (Theorem \[SE-thm\]), since its generating function is $e^x$; this implies that the corresponding Toeplitz matrix $[a_{n-k}]=\left[1/(n-k)!\right]$ is TP. Also, note that $$\binom{n}{k}=\frac{n!}{k!(n-k)!}\sim \frac{1}{(n-k)!}.$$ Hence the Pascal triangle $P$ is TP by Proposition \[prop-eq\].
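The total positivity of this Toeplitz matrix can be spot-checked with exact rational arithmetic (an illustrative $6\times 6$ truncation with minors up to order $3$; not a proof):

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

def det(M):
    """Determinant by cofactor expansion, exact over the rationals."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(n))

N = 6
# Toeplitz matrix of the sequence a_n = 1/n!  (entries a_{n-k}, zero for k > n).
T = [[Fraction(1, factorial(n - k)) if n >= k else Fraction(0) for k in range(N)]
     for n in range(N)]

ok = all(det([[T[i][j] for j in cols] for i in rows]) >= 0
         for r in (1, 2, 3)
         for rows in combinations(range(N), r)
         for cols in combinations(range(N), r))
print(ok)  # True
```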
We are now in a position to prove Conjectures \[CLW\] and \[PZ\].
\[thm-Narayana-triangle\] The Narayana triangles $N_A$ and $N_B$ are TP.
We have $$NA(n,k)=\frac{n!(n+1)!}{k!(k+1)!(n-k)!(n-k+1)!}\sim\frac{1}{(n-k)!(n-k+1)!}$$ and $$NB(n,k)=\frac{n!^2}{k!^2(n-k)!^2}\sim\frac{1}{(n-k)!^2}.$$ So, to show that the Narayana triangles $N_A$ and $N_B$ are TP, it suffices to show that the sequences $(1/(n!(n+1)!))_{n\geq 0}$ and $(1/(n!^2))_{n\geq 0}$ are PF. Based on a classic result of Laguerre on multiplier sequences, Chen, Ren and Yang [@CRY16 Proof of Conjecture 1.1] already proved that the sequence $(1/((t)_nn!))_{n\geq 0}$ is PF for any $t>0$, where $(t)_n=t(t+1)\cdots(t+n-1)$. Letting $t=2$ (resp. $t=1$), we obtain the PF property of $(1/(n!(n+1)!))_{n\geq 0}$ (resp. $(1/(n!^2))_{n\geq 0}$), as desired.
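Theorem \[thm-Narayana-triangle\] can be spot-checked numerically as well; the following sketch (illustrative; the truncation size and minor order are arbitrary choices, and `all_minors_nonneg` is a hypothetical helper) verifies that all minors up to order $3$ of the $7\times 7$ truncations of $N_A$ and $N_B$ are nonnegative.

```python
from itertools import combinations
from math import comb

def det(M):
    """Integer determinant by cofactor expansion (fine for small minors)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(n))

def all_minors_nonneg(A, order):
    idx = range(len(A))
    return all(det([[A[i][j] for j in cols] for i in rows]) >= 0
               for r in range(1, order + 1)
               for rows in combinations(idx, r)
               for cols in combinations(idx, r))

N = 7
# Narayana triangles of type A and type B (zero above the diagonal).
NA = [[comb(n + 1, k) * comb(n, k) // (k + 1) for k in range(N)] for n in range(N)]
NB = [[comb(n, k) ** 2 for k in range(N)] for n in range(N)]

print(all_minors_nonneg(NA, 3), all_minors_nonneg(NB, 3))  # True True
```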
The method used here applies equally well to the triangle composed of $m$-Narayana numbers, which we will recall below. Fix an integer $m\geq 0$. For any $n\geq m$ and $0\leq k\leq n-m$, the $m$-Narayana number $NA_{\langle m\rangle}(n,k)$ is given by $$\begin{aligned}
\label{eq-mnarayana}
NA_{\langle m\rangle}(n,k)=\frac{m+1}{n+2}\binom{n+2}{k+1}\binom{n-m}{k}.\end{aligned}$$ When $m=0$ we get the usual Narayana numbers $NA(n,k)$. For more information on the numbers $NA_{\langle m\rangle}(n,k)$, see [@oeis]. It is easy to show that the Narayana triangle $N_A$ is symmetric: $NA(n,k)=NA(n,n-k)$, but $$N_{A,{\langle m\rangle}}=\left[NA_{\langle m\rangle}(n,k)\right]_{n\geq m, 0\leq k\leq n-m}$$ and $$\overleftarrow{N}_{A,{\langle m\rangle}}=\left[NA_{\langle m\rangle}(n,n-m-k)\right]_{n\geq m, 0\leq k\leq n-m}$$ are two different triangles for $m\geq 1$. The proof of Theorem \[thm-Narayana-triangle\] carries over directly to the following more general result.
\[thm-Narayana-triangle-m\] For any $m\geq 0$, both $N_{A,{\langle m\rangle}}$ and $\overleftarrow{N}_{A,{\langle m\rangle}}$ are TP.
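A quick computation (illustrative; `na_m` is a hypothetical helper built on the defining formula, using exact rational arithmetic to certify integrality) confirms that $m=0$ recovers the usual Narayana numbers, that the Narayana triangle is symmetric, and that symmetry fails already for $m=1$.

```python
from fractions import Fraction
from math import comb

def na_m(m, n, k):
    """m-Narayana number (m+1)/(n+2) * C(n+2,k+1) * C(n-m,k), certified integral."""
    v = Fraction(m + 1, n + 2) * comb(n + 2, k + 1) * comb(n - m, k)
    assert v.denominator == 1
    return int(v)

na = lambda n, k: comb(n + 1, k) * comb(n, k) // (k + 1)  # usual Narayana numbers

print(all(na_m(0, n, k) == na(n, k) for n in range(8) for k in range(n + 1)))  # True
print(all(na(n, k) == na(n, n - k) for n in range(8) for k in range(n + 1)))   # True
print([na_m(1, 4, k) for k in range(4)])  # [2, 15, 20, 5]: not symmetric for m = 1
```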
The Narayana squares
====================
The object of this section is to prove the total positivity of the Narayana squares $N_A^\c$ and $N_B^\c$. Our proof is based on the theory of Stieltjes moment sequences.
Given an infinite sequence $(a_n)_{n\ge 0}$ of real numbers, define its Hankel matrix as $$[a_{n+k}]_{n,k\ge 0}=
\left[\begin{array}{ccccc}
a_0 & a_1 & a_2 & a_3 & \cdots\\
a_1 & a_2 & a_3 & a_4 & \\
a_2 & a_3 & a_4 & a_5 & \\
a_3 & a_4 & a_5 & a_6 & \\
\vdots & & & & \ddots \\
\end{array}\right].$$ We say that $(a_n)_{n\ge 0}$ is a [*Stieltjes moment*]{} (SM) sequence if it has the form $$\label{i-e}
a_n=\int_0^{+\infty}x^nd\mu(x),$$ where $\mu$ is a non-negative measure on $[0,+\infty)$. The following is a classic characterization for Stieltjes moment sequences (see [@Pin10 Theorem 4.4] for instance).
\[PSz\] A sequence $(a_n)_{n\ge 0}$ is SM if and only if
1. the Hankel matrix $[a_{i+j}]$ is STP; or
2. both $[a_{i+j}]_{0\le i,j\le n}$ and $[a_{i+j+1}]_{0\le i,j\le n}$ are positive definite.
Many well-known counting coefficients are Stieltjes moment sequences; see [@LMW-DM]. For example, the sequence $(n!)_{n\ge 0}$ is a Stieltjes moment sequence since $$n!=\int_0^{+\infty}x^ne^{-x}dx=\int_0^{+\infty}x^nd\left(1-e^{-x}\right).$$ Thus the corresponding Hankel matrix $[(n+k)!]$ is STP. Note that $$\binom{n+k}{k}=\frac{(n+k)!}{n!k!}\sim (n+k)!.$$ Hence the Pascal square $P^\c$ is also STP. The main result of this section is as follows.
\[thm-Narayana-square\] The Narayana squares $N_A^\c$ and $N_B^\c$ are STP.
We have $$\begin{aligned}
NA(n+k,k)&=\frac{(n+k)!(n+k+1)!}{k!(k+1)!n!(n+1)!}\sim (n+k)!(n+k+1)!\end{aligned}$$ and $$\begin{aligned}
NB(n+k,k)&=\frac{(n+k)!^2}{n!^2k!^2}\sim (n+k)!^2.\end{aligned}$$ So, to show that the Narayana squares $N_A^\c$ and $N_B^\c$ are STP, it suffices to show that the sequences $(n!(n+1)!)_{n\ge 0}$ and $((n!)^2)_{n\ge 0}$ are SM.
Note that any submatrix of an STP matrix is still STP. Hence if the sequence $(a_n)_{n\ge 0}$ is SM, then so is its shifted sequence $(a_{n+1})_{n\ge 0}$ by Lemma \[PSz\] (i). Now the sequence $(n!)_{n\ge 0}$ is SM, and hence so is the sequence $((n+1)!)_{n\ge 0}$. On the other hand, the famous Schur product theorem states that the Hadamard product $[a_{i,j}b_{i,j}]$ of two positive definite matrices $[a_{i,j}]$ and $[b_{i,j}]$ is still positive definite. As a result, if both $(a_n)_{n\ge 0}$ and $(b_n)_{n\ge 0}$ are SM, then so is $(a_nb_n)_{n\ge 0}$ by Lemma \[PSz\] (ii). We refer the reader to [@Pin10 §4.10.4] for details. Thus we conclude that both $(n!(n+1)!)_{n\ge 0}$ and $((n!)^2)_{n\ge 0}$ are SM, as required.
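The closure properties used in this proof are easy to illustrate numerically (our own sanity check; by Lemma \[PSz\], positivity of the Hankel minors is the certificate of the SM property). The script checks the leading principal minors of the Hankel matrices of $n!$, $(n+1)!$, $n!(n+1)!$ and $(n!)^2$, together with the once-shifted Hankel matrices, and records the well-known fact that every leading block of the Pascal square has determinant $1$:

```python
from math import factorial, comb

def det(m):
    # exact integer determinant via Laplace expansion (small matrices only)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def hankel(a, size, shift=0):
    return [[a(i + j + shift) for j in range(size)] for i in range(size)]

sequences = [
    lambda n: factorial(n),                      # SM, so [(i+j)!] is STP
    lambda n: factorial(n + 1),                  # shifted sequence, still SM
    lambda n: factorial(n) * factorial(n + 1),   # product of two SM sequences
    lambda n: factorial(n) ** 2,
]
for a in sequences:
    for size in range(1, 6):
        assert det(hankel(a, size)) > 0      # leading principal minors positive
        assert det(hankel(a, size, 1)) > 0   # and for the once-shifted Hankel matrix

# every leading block of the Pascal square [C(n+k, k)] has determinant exactly 1
for size in range(1, 6):
    assert det([[comb(i + j, j) for j in range(size)] for i in range(size)]) == 1
```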
We can also consider the strict total positivity of the $m$-th Narayana square: $$N_{A,{\langle m\rangle}}^{\c}=\left[NA_{\langle m\rangle}(n+k,k)\right]_{n\geq m, k\geq 0},$$ where $NA_{\langle m\rangle}(n,k)$ is given by \[eq-mnarayana\]. The following result can be proved in the same way as above.
\[thm-Narayana-square-m\] For any $m\geq 0$, the square $N_{A,{\langle m\rangle}}^{\c}$ is STP.
Remarks
=======
There are various generalizations of classical Narayana numbers, see for instance [@Armstrong09; @Barry11; @CYZ1601; @CYZ1602; @Petersen15]. As we mentioned before, the numbers $NA(n,k)$ (resp. $NB(n,k)$) appear as the rank numbers of the poset of generalized noncrossing partitions associated to a Coxeter group of type $A$ (resp. $B$). These posets are further generalized by Armstrong [@Armstrong09] by introducing the notion of $m$-divisible noncrossing partitions for any positive integer $m$ and any finite Coxeter group. Armstrong also showed that these generalized posets are not lattices but are still graded.
Fixing an integer $m\geq 1$, for $n\geq k\geq 0$ set $$\begin{aligned}
FNA_{\langle m\rangle}(n,k)&=\frac{1}{n+1}\binom{n+1}{k}\binom{m(n+1)}{n-k}\\
FNB_{\langle m\rangle}(n,k)&=\binom{n}{k}\binom{mn}{n-k}.
%FND_{\langle m\rangle}(n,k)&=\binom{n}{k}\binom{mn-m}{n-k}+\binom{n-2}{k}\binom{mn-m+1}{n-k}, n\geq 2.\end{aligned}$$ These numbers are called the Fuss-Narayana numbers by Armstrong [@Armstrong09], who proved that $FNA_{\langle m\rangle}(n,k)$ (resp. $FNB_{\langle m\rangle}(n,k)$) are the rank numbers of the poset of $m$-divisible noncrossing partitions associated to a Coxeter group of type $A$ (resp. $B$).
Note that, for any $m\geq 2$, we have $$FNA_{\langle m\rangle}(n,k)\neq FNA_{\langle m\rangle}(n,n-k), FNB_{\langle m\rangle}(n,k)\neq FNB_{\langle m\rangle}(n,n-k).$$ Now define the Fuss-Narayana triangles $$\begin{aligned}
FN_{A,{\langle m\rangle}}=\left[FNA_{\langle m\rangle}(n,k)\right]_{n,k\geq 0} ,\quad \overleftarrow{FN}_{A,{\langle m\rangle}}=\left[FNA_{\langle m\rangle}(n,n-k)\right]_{n,k\geq 0},\\
FN_{B,{\langle m\rangle}}=\left[FNB_{\langle m\rangle}(n,k)\right]_{n,k\geq 0} ,\quad \overleftarrow{FN}_{B,{\langle m\rangle}}=\left[FNB_{\langle m\rangle}(n,n-k)\right]_{n,k\geq 0}
%FN_{D,{\langle m\rangle}}=\left[FND_{\langle m\rangle}(n,k)\right]_{n,k\geq 0} ,\quad \overleftarrow{FN}_{D,{\langle m\rangle}}=\left[FND_{\langle m\rangle}(n,n-k)\right]_{n,k\geq 0},\end{aligned}$$ and the Fuss-Narayana squares $$\begin{aligned}
FN_{A,{\langle m\rangle}}^{\c}=\left[FNA_{\langle m\rangle}(n+k,k)\right]_{n,k\geq 0},\\
FN_{B,{\langle m\rangle}}^{\c}=\left[FNB_{\langle m\rangle}(n+k,k)\right]_{n,k\geq 0}.
%FN_{D,{\langle m\rangle}}^{\c}=\left[FND_{\langle m\rangle}(n+k,k)\right]_{n,k\geq 0}.\end{aligned}$$ We propose the following conjecture.
For any $m\geq 1$, the Fuss-Narayana triangles are TP and the Fuss-Narayana squares are STP.
There are other symmetric combinatorial triangles which are TP and whose corresponding squares are STP. The Delannoy number $D(n,k)$ is the number of lattice paths from $(0,0)$ to $(n,k)$ using steps $(1,0), (0,1)$ and $(1,1)$. Clearly, $$D(n,k)=D(n-1,k)+D(n-1,k-1)+D(n,k-1),$$ with $D(0,k)=D(k,0)=1$. It is well known that the Narayana number $NA(n,k)$ counts the number of Dyck paths (using steps $(1,1)$ and $(1,-1)$) from $(0, 0)$ to $(2n, 0)$ with $k$ peaks. It is also known that $NA_{\langle m\rangle}(n,k)$ counts the number of Dyck paths of semilength $n$ with $k$ peaks whose last $m$ steps are $(1,-1)$; see Callan’s note in [@oeis]. Brenti [@Bre95 Corollary 5.15] showed that the Delannoy triangle $D=\left[D(n-k,k)\right]_{n\geq k\geq 0}$ and the Delannoy square $D^\c=\left[D(n,k)\right]_{n, k\geq 0}$ are TP by means of lattice path techniques. The following problem naturally arises.
Can the total positivity of the Narayana matrices also be obtained by a similar combinatorial approach?
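Although a combinatorial proof is left open, the Delannoy numbers themselves are easy to probe. The sketch below (our own code) implements the recurrence and checks it against the classical closed form $D(n,k)=\sum_j 2^j\binom{k}{j}\binom{n}{j}$:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def D(n, k):
    # Delannoy recurrence D(n,k) = D(n-1,k) + D(n-1,k-1) + D(n,k-1), D(n,0) = D(0,k) = 1
    if n == 0 or k == 0:
        return 1
    return D(n - 1, k) + D(n - 1, k - 1) + D(n, k - 1)

# agrees with the closed form D(n,k) = sum_j 2^j C(k,j) C(n,j)
assert all(D(n, k) == sum(2 ** j * comb(k, j) * comb(n, j) for j in range(min(n, k) + 1))
           for n in range(8) for k in range(8))

assert [D(n, n) for n in range(4)] == [1, 3, 13, 63]   # central Delannoy numbers
```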
We have seen that the Pascal square has the decomposition $P^\c=PP^T$. We also have $D^\c=P\diag(1,2,2^2,\ldots)P^T$ since $$D(n,k)=\sum_{j}2^j\binom{k}{j}\binom{n}{j}$$ (see [@BS05] for instance). A natural problem is to find the explicit (modified) Cholesky decomposition of the Narayana squares $N_A^\c$ and $N_B^\c$.
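Both decompositions are easy to confirm on finite sections (our own check; $P$ here is the lower-triangular Pascal matrix $[\binom{i}{j}]$):

```python
from math import comb

N = 6
P  = [[comb(i, j) for j in range(N)] for i in range(N)]   # lower-triangular Pascal matrix
PT = [[comb(j, i) for j in range(N)] for i in range(N)]   # its transpose
Dg = [[2 ** i if i == j else 0 for j in range(N)] for i in range(N)]

def mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(N)) for j in range(N)] for i in range(N)]

# Pascal square: P^c = P P^T, i.e. (P P^T)[n][k] = C(n+k, k) by Vandermonde's identity
assert mul(P, PT) == [[comb(n + k, k) for k in range(N)] for n in range(N)]

# Delannoy square: D^c = P diag(1, 2, 4, ...) P^T
Dsq = mul(mul(P, Dg), PT)
assert Dsq[2][2] == 13 and Dsq[3][3] == 63   # central Delannoy numbers
assert all(Dsq[n][k] == sum(2 ** j * comb(k, j) * comb(n, j) for j in range(N))
           for n in range(N) for k in range(N))
```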
Another well-known symmetric triangle is the Eulerian triangle $A=[A(n,k)]_{n,k\ge 1}$, where $A(n,k)$ is the Eulerian number, which counts the number of $n$-permutations with exactly $k-1$ excedances. Brenti [@Bre96 Conjecture 6.10] conjectured that the Eulerian triangle $A$ is TP. Motivated by the strict total positivity of the Narayana squares, we pose the following conjecture.
The Eulerian square $A^\c=[A(n+k,k)]$ is STP.
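The conjecture can at least be probed numerically (our own check, not from the paper): compute Eulerian numbers from the standard recurrence $A(n,k)=kA(n-1,k)+(n-k+1)A(n-1,k-1)$ and verify that all minors of order at most $3$ of a small block of $A^\c$ are strictly positive.

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def A(n, k):
    # Eulerian number A(n,k), 1 <= k <= n, via A(n,k) = k A(n-1,k) + (n-k+1) A(n-1,k-1)
    if n == 1:
        return 1 if k == 1 else 0
    if k < 1 or k > n:
        return 0
    return k * A(n - 1, k) + (n - k + 1) * A(n - 1, k - 1)

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

assert [A(4, k) for k in range(1, 5)] == [1, 11, 11, 1]

# all minors of order <= 3 of a 5x5 block of A^c = [A(n+k, k)] are strictly positive
sq = [[A(n + k, k) for k in range(1, 6)] for n in range(1, 6)]
minors = [det([[sq[i][j] for j in cols] for i in rows])
          for r in range(1, 4)
          for rows in combinations(range(5), r)
          for cols in combinations(range(5), r)]
assert min(minors) > 0
```

Such finite checks are of course only supporting evidence for the conjecture, not a proof.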
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by the National Science Foundation of China (Nos. 11231004, 11371078, 11522110).
[99]{} T. Ando, Totally positive matrices, Linear Algebra Appl. 90 (1987), 165–217.
D. Armstrong, Generalized noncrossing partitions and combinatorics of Coxeter groups, Mem. Amer. Math. Soc. 202 (2009), no. 949, x+159 pp.
C.A. Athanasiadis and V. Reiner, Noncrossing partitions for the group $D_n$, SIAM J. Discrete Math. 18 (2004), 397–417.
C. Banderier and S. Schwer, Why Delannoy numbers? J. Statist. Plann. Inference 135 (2005), 40–54.
P. Barry, On a generalization of the Narayana triangle, J. Integer Seq., 14 (2011), Article 11.4.5.
F. Brenti, Combinatorics and total positivity, J. Comb. Theory Ser. A 71 (1995), 175–218.
F. Brenti, The applications of total positivity to combinatorics, and conversely, in Total Positivity and its Applications, (M. Gasca, C. A. Micchelli, eds.), Kluwer Academic Pub., Dordrecht, The Netherlands, 1996, 451–473.
W.Y.C. Chen, A.X.Y. Ren and A.L.B. Yang, Proof of a positivity conjecture on Schur functions, J. Combin. Theory Ser. A 120 (2013), 644–648.
X. Chen, H. Liang and Y. Wang, Total positivity of Riordan arrays, European J. Combin. 46 (2015), 68–74.
X. Chen, H. Liang and Y. Wang, Total positivity of recursive matrices, Linear Algebra Appl. 471 (2015), 383–393.
H.Z.Q. Chen, A.L.B. Yang and P.B. Zhang, Kirillov’s unimodality conjecture for the rectangular Narayana polynomials, arXiv:1601.05863
H.Z.Q. Chen, A.L.B. Yang and P.B. Zhang, The Real-rootedness of Generalized Narayana Polynomials, arXiv:1602.00521
S.M. Fallat and C.R. Johnson, Totally Nonnegative Matrices, Princeton University Press, Princeton, 2011.
S. Karlin, Total Positivity, Volume 1, Stanford University Press, 1968.
H. Liang, L. Mu and Y. Wang, Catalan-like numbers and Stieltjes moment sequences, Discrete Math. 339 (2016), 484–488.
Q. Pan and J. Zeng, On total positivity of Catalan-Stieltjes matrices, Electron. J. Combin. 23 (4), (2016) \#P4.33.
K. Petersen, Eulerian Numbers, Birkhäuser Advanced Texts Basler Lehrbücher, Springer, New York, 2015.
A. Pinkus, Totally Positive Matrices, Cambridge University Press, Cambridge, 2010.
V. Reiner, Non-crossing partitions for classical reflection groups, Discrete Math. 177 (1997), 195–222.
The On-Line Encyclopedia of Integer Sequences, https://oeis.org/A281260.
Q:
Every finite set contains its supremum: proof improvement.
Every finite subset of $\mathbb R$ contains its supremum (and its infimum)
Proof. Let $A=\{a_1,...,a_n\}$ be a finite subset of $\mathbb{R}$. Since it is non-empty and bounded ($\max A$ is an upper bound), it has a supremum; that is, $\sup A$ exists, and by definition $a \leq \sup A$ for all $a \in A$. Suppose that $\sup A \not\in A$. Then, since $\max A \in A$, we have $\max A < \sup A$. But since $\mathbb Q$ is dense in $\mathbb R$, there exists $r \in \mathbb Q$ such that $\max A < r < \sup A$, which is absurd: $r$ would then be an upper bound of $A$ that is smaller than the supremum. Necessarily, $\sup A\in A$.
Is there anything wrong? Is there any way to prove this without using density of $\mathbb Q$ or another property? Thanks in advance.
A:
Your proof starts right away by using $\max A$. But if you know that $\max A$ exists, then it is automatically also the $\sup$. So this is probably not the way you are expected to proceed.
If $A=\{a\}$ is a singleton set, then clearly $\sup A=\max A = a$.
If $A$ has cardinality $n>1$ and we know as induction hypothesis that all sets of cardinality $<n$ have a maximal element, let $a\in A$ be an arbitrary element and let $A'=A\setminus\{a\}$.
Since $A'$ has fewer than $n$ elements, let $a'= \max A'$.
If $a'\ge a$, then $a'$ is a maximal element of $A$.
If $a'< a$, then $a$ is a maximal element of $A$.
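The induction can also be mirrored as a short recursive program (an informal illustration, not part of the proof; the names are mine):

```python
def find_max(A):
    # mirrors the induction: a singleton is its own max; otherwise split off
    # one element and compare it with the max of the rest
    a, *rest = A            # A is a nonempty finite set, given as a list
    if not rest:
        return a
    m = find_max(rest)      # induction hypothesis: the smaller set has a maximum
    return m if m >= a else a

S = [3.5, -1.0, 7.25, 0.0]
assert find_max(S) == max(S) == 7.25
assert find_max(S) in S     # the supremum is attained, i.e. it lies in the set
```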
An assistant PPG instructor who has successfully trained at least 5 students under the supervision of a full PPG instructor can opt for full instructorship. During this instructorship training, he or she must successfully train at least 10 students in PPG flying, after which he or she qualifies for a PPG Instructor’s rating.
PPG Senior Instructor level
A PPG Instructor who has trained at least 50 students automatically earns the title of Senior PPG Instructor and is in a position to train other instructors as well.
PPG Coach
Training 100 students makes a Senior PPG Instructor into a Coach.
It’s a great moment to learn a new language. As the world is globalizing, being bilingual or multilingual is one of the most useful skills to help you achieve personal or career advancement.
Becoming fluent in a foreign language is a challenging process. To make it easier for you, we’ve gathered some of the most useful tips for learning a new language.
Why Should You Learn a New Language?
There are lots of benefits to learning a second language. But what are some of the principal reasons for doing so?
-
Successful communication with people from foreign countries
Now that so many companies work across different locations, you might be expected to communicate with people who don’t speak your mother tongue. Getting in touch with people from different countries may challenge your ability to exchange ideas or make conversation on various topics. Knowing their language or a language that both of you understand is very important for effective communication.
-
Understanding different cultures
Why should you learn a new language? As the world becomes more culturally diverse and cross-cultural exchanges become more frequent, the importance of knowing a language other than your native one increases. Studying a foreign language does not only help you in the process of communication with the native speakers, but it also makes you familiar with their culture and traditions, specific food, etc.
-
Boosting intelligence
Studies have shown that learning a language trains your brain. This is due to the effort you make in learning and understanding new words, forming sentences and phrases, and applying all the acquired knowledge into practice. Knowing how to speak another language will help you make connections faster, improve your observation skills, and broaden your horizons, allowing you to see things from a different perspective.
|DID YOU KNOW: Scientists from Penn State University discovered that learning a language makes physical changes in the human brain. Research participants were put through two fMRI scans, and the final scan images showed that, after 6 weeks of learning Chinese, there were both structural and functional changes in their brains.|
Tips for Learning a New Language
The process of learning a foreign language is fun and exciting, but after some time, you may become frustrated because of something you can’t seem to remember or make use of. If you follow certain tips and tricks, however, you will succeed in overcoming these challenges and achieving your goal.
-
Understand your motivations
This might sound obvious, but having a good reason to learn a particular language is very important in the long run. A lot of people don’t know how to start learning a language, as they have no idea what they’ll use it for. And after some time, their motivation fades out, and they fail in achieving their goal. Here are a few ideas on how to keep motivated:
- Write down 5-10 reasons why you want to learn that particular language. That might serve you as a reminder when you feel like giving up.
- Another thing you can try is to adopt the reward principle. Get yourself a delicious treat or buy a new book every time you finish a task or achieve a certain learning goal.
Knowing how to keep yourself motivated for a longer period of time is tricky but crucial if you need to master a foreign language. Come up with something that will prevent you from giving up.
-
Set goals
Once you’ve decided you want to learn another language, determine what you want to achieve and by when. Simply setting a goal of becoming fluent in a particular language may not be enough.
How hard is it to learn another language? This depends on how you approach it. It is much less frustrating if you set tangible and realistic goals. For example, you can try reading an article without looking up the unknown words in the dictionary. Another thing you can do is decide on learning one aspect of the language at a time.
-
Stay dedicated
You need to choose the learning zone that best fits your studying preferences. If being in a social setting works best for you, then sign up for a course and attend the classes regularly. If you feel more comfortable working alone, gather learning materials and study from home. There are many apps, such as Busuu that you can use to study and practice the language you’ve chosen. They also provide useful language learning tips.
Spending as much time as you can studying and practicing will help you learn the language faster.
|DID YOU KNOW: If your aim is to learn the target language fast, you should set SMART goals. SMART stands for Specific, Measurable, Attainable, Relevant, and Timely. This kind of goal will help you stay on track while studying a new language.|
Key Takeaways
|Learning a foreign language can be crucial for successful communication with people from different cultures.|
|Learning a foreign language enhances your brain’s nerve networks, allowing for more efficient memory and flexibility.|
|The best way to learn a language is to follow some key tips, such as keeping yourself motivated, dedicated, and setting proper goals.|
|Setting SMART goals will help you determine the milestones you want to reach.|
|To expand your knowledge of a foreign language, don’t neglect learning the everyday way of communicating, i.e., the slang and idioms.|
Don’t Ignore Any Aspects of Learning
The four basic aspects of learning a language are writing, reading, listening, and speaking. As you learn, you will probably find out that you are stronger in some areas than others. But that does not mean that you should only focus on practicing your strengths. You cannot achieve fluency in a foreign language if you don’t put time and effort into practicing and improving your weaknesses, as well. The best method to learn a language is to focus on every skill equally.
Here are some tips on how to improve each learning skill:
-
Improve reading skills
The most obvious thing you can do to improve your reading skills is to read as many books as possible. They will not only help you in learning the language but will also help you explore the culture behind it.
A lot of free books are available on the internet, and you can easily find something that will meet your needs and preferences. If you’re wondering how to start learning a language, picture and comic books can help you with introducing the target language. As you progress, you can move on to articles, blogs, novels, etc. Using a dictionary is also a good idea, as it’s a great source of vocabulary.
-
Improve writing skills
Even though speaking is a more direct way of communication, writing gives the learner more time to think and come up with more accurate expressions.
You can practice your writing skills by writing a letter or a postcard to a friend or searching for written assignments on the internet. Then, you can use a grammar checker to scan for mistakes, and if there are any, you can correct them and take notes to avoid making them in the future.
-
Improve speaking skills
Trying to talk in a foreign language for the first time can be overwhelming. Most people find speaking the most difficult, but it is also the most effective practice you can get in order to improve your language skills.
To improve your speaking, you should first find someone to practice speaking with. It can be your friend, your language teacher, or a native speaker of your target language. Being persistent in getting speaking practice on a regular basis will show results in no time.
-
Improve listening skills
For some, listening is the hardest skill to master in their language learning process. There are learners who think that they can simply watch a foreign movie or listen to songs in their target language, and they’ll be on their way towards achieving fluency. After some time they become frustrated as they realize that their pattern of passive learning isn’t the best way to learn a language.
Foreign songs and movies can definitely help you with your language practice, but you should know how to make use of them. Playing a song in the background while you’re doing household chores won’t have the same effect as concentrating on listening for 5-10 minutes a day.
|DID YOU KNOW: Testing all four skills ensures that you can make effective use of the language you’ve been studying in the real world. One such exam is the Cambridge English exam, which encourages teachers and students to take a balanced approach to language learning and develop communication competence. It’s a good idea to find the equivalent of such a test for the language of your choice.|
Practice Is the Most Important Thing You Can Do
Mastering a language requires a lot of practice. But not all practice is good, and students need to learn how to practice effectively, as well as to get adequate exposure.
For example, if a student is only exposed to written Italian, and their practice mostly consists of doing grammar exercises, reading, and translation, they may struggle to express themselves regardless of the fact that they are great at reading Italian.
You should bear in mind that practice and exposure to all the aspects of the language are the key factors in the process of learning a new language. If you want to progress and achieve your ultimate goal, come up with a practice scheme that fits your personal needs and requirements.
Helpful Programs and Applications
Language learning apps have been on the market for quite some time, but their number has increased rapidly lately. Wondering how to learn a language fast and fluently? One of the advantages of online education is that there are a great number of apps you can use on your mobile and your computer, that offer a variety of language lessons/exercises to speed up your learning process. Here is our choice of the top five:
-
Rosetta Stone
It is rated the best premium software for learning the foundations of a foreign language. It is a great choice for absolute beginners, but it also offers more advanced language lessons for higher-level learners.
-
Duolingo
A 5/5 rating for a free language learning app means that the quality of the lessons it offers exceeds those offered by other free apps. It employs some unique features, such as gamification and a goal-setting option, has a clear structure, and contains lessons in 19 different languages.
-
Babbel
This app teaches you how to speak another language by introducing phrases and vocabulary that you can actually make use of. For its low subscription price, you get high-quality lessons that are unique to each language.
-
Memrise
Beginners find this app particularly useful, as it offers a lot of basic vocabulary and exercises. It is a combination of memes and gamification, which makes the learning experience really fun.
-
Busuu
Despite the fact that it doesn’t offer that many languages, Busuu has a lot of high-quality core learning content that learners find very useful. It’s among the better language learning apps and its feature-packed Premium subscription is definitely worth the money. Read our full review of Busuu here.
Conclusion
Learning a foreign language is a fun experience, but it can also be quite frustrating. Following some key language learning tips will help ease the process and bring better results.
Remember, success comes with trying. Make use of our tips and dive into the language learning process boldly.
FAQ
For most people, devoting around 30 minutes a day to active study and 1 hour to actual language exposure gives great results. Consistency is key, so give yourself some time before deciding whether this works for you.
What is the best strategy to learn a new language?
Being motivated and dedicated, along with setting realistic goals are some basic requirements in the process of language learning. In general, the best tips for learning a new language are those that best work for you. They will help you achieve more progress, so make an effort to find what your optimal study mode is and you’ll achieve your goal.
There are four tiers of languages, classified by difficulty and study time required. According to research, it takes ca. 480 hours of learning and practice to achieve fluency in group 1 languages (languages more similar to English).
Meals Under $1.25: Chicken Fried Rice and Egg Drop Soup
Chef's Note: Leftover beef, shrimp and pork can be used in addition to or instead of the chicken. They all taste great!
If you like Asian food, you are in for a treat! This meal combination will hit the spot without hurting your pocketbook. You will be able to begin your meal with a tasty Egg Drop Soup, followed by a stir fried rice recipe that my husband enjoyed so much he jokingly asked me if I had gone out and gotten Chinese food. I was pleased. He was pleased; and I think you will be too!
Let's first look at our ingredient list. To make the Chicken Fried Rice (serves 4), you will need:
-
2 cups chopped chicken, or one portion created from hub, Building Meals Under $1.25: Using 10 Pounds Chicken in 5 Recipes to Make 26 Servings
-
2 T. oil
-
2 carrots, sliced diagonally
-
1/2 onion, chopped
-
4 cups cooked long grain brown rice
-
2 eggs, beaten
-
1 T. soy sauce
If you are using a portion that has been frozen, make sure it has thawed overnight in the refrigerator, or thaw it using the microwave.
Or use 2 cups leftover chopped, cooked chicken.
Or 2 cups cups chopped Cloverleaf's Rotisserie Chicken.
Chop onions and diagonally slice carrots.
Add cooking oil to the frying pan or wok. (Sesame seed oil is more expensive but will provide a more authentic taste.) I used olive oil.
When oil is hot, add onions and carrots.
Fry until onions are translucent and edges are starting to brown.
Other vegetables can also be added, such as green peas, sliced squash or snow pea pods. Stir fry for a minute before adding rice.
Add 4 cups cooked rice. Mix well.
Add chopped chicken.
Mix well and continue stirring until everything is heated well.
Push rice mixture away from middle of the pan forming a well. Pour in the two beaten eggs. Begin stirring eggs pulling rice into the egg as you stir; continue until egg is cooked. Mix all together well. Sprinkle mixture with 1 T. soy sauce. Stir well and remove from heat. Set aside while preparing the egg drop soup.
Here is a portion of the chicken fried rice.
Egg Drop Soup
- 4 cups water
- 4 chicken bouillon cubes
- 4 eggs, scrambled
- 4 green onions, green portion only, sliced
Bring water, bouillon cubes, green onions and parsley to a boil. Slowly pour in the beaten eggs while stirring constantly, causing the eggs to form strings in the liquid. Taste. If not too salty, add up to 1 t. soy sauce to taste for an authentic touch. Serve immediately. Serves 4.
Cost Analysis:
- chicken = .30 / serving
-
2 T. oil = .05 / serving
-
2 carrots = .05 / serving
-
1/2 onion = .04 / serving
-
4 cups cooked long grain brown rice = .23 / serving
-
2 eggs, beaten = .05 / serving
-
1 T. soy sauce = .05 /serving
Total per serving = $.77
- 4 chicken bouillon cubes = .10 / serving
- 4 eggs = .10 / serving
- 4 green onions = .15 / serving
Total per serving = $.35
$.77 + $.35 = $1.12 per serving
All Rights Reserved
Copyright © 2011 Cindy Murdoch (homesteadbound)
Your Future is Waiting! Do you feel you have great information or stories to share with others? Sign Up Here. . . It’s quick, easy and free to join HubPages!
Related Links:
- Building Meals Under $1.25: Using 10 Pounds of Chicken in 5 Recipes to Make 26 Servings and Feeling
This hub will guide you through the process of creating 26 servings of satisfying food, using 5 different recipes, and only 10 pounds of chicken legs and thighs. At 59 cents a pound that is an amazing deal. Come see how it can be done!
- Meals Under $1: Homemade Country-Style Chicken Pot Pie
Discover homestyle chicken pot pie. Chicken Pot Pie is an unbelieveably delicious way to use up leftover chicken or turkey. Create your chicken pie using this flaky pie crust recipe included.
- Meals Under $1: Homemade Chicken Noodle Soup
Nothing beats the taste of chicken noodle soup. It is especially good on a cold winter day, or when you are feeling under the weather. Using the batch cook method this soup is as economical as it is tasty!
- Meals Under $1.25: Easy Chicken Enchilada Casserole
If you like enchiladas but not all the work associated with making them, you will like this easy to make layered casserole. This healthy casserole has a lot of flavor at a very reasonable cost.
- Meals under $1.25: Easy Chicken Spaghetti
Chicken Spaghetti is a wonderful pasta inspired casserole. Not only will it help you save on your food budget, but it will please you and all that you serve it to. Try this one - it is pleasing to the budget and the palate.
- Meals Under $1: Not Just Another Grilled Cheese Sandwich
In todays hard economy many people are finding themselves needing to cut corners. But cutting corners at meal time dshould not mean that you can't eat well. Here is a meal centered around a deluxe grilled cheese sandwich.
- Meals Under $1: Curried Squash Soup and Cheese Toast
What constitutes a good meal? This meal is satisfying, good, inexpensive and utilizes the cooking talents of two hubbers. What more could you ask for? Check it out for yourself and see if you don't agree.
- Meals Under $1.50: Easy Mexican Chicken Salad
Looking for a quick, low-cost and delicious meal? If you like Mexican food, and you like salads, look no further, this meal will satisfy you.
Comments: "Meals Under $1.25: Chicken Fried Rice and Egg Drop Soup"
Very nice Hub homesteadbound! I will try the egg drop soup soon. Thank you! :)
I tried the stir-fried rice. Thank you very much. It really let me give a fresh new taste to leftover chicken and rice.
Thanks for sharing your recipes. I'll have to try the egg drop soup again...I never have much success in getting good 'strings'.
I am going to try this soup recipe for sure, but will substitute my own homemade chicken stock for the bouillon cubes for a lower-sodium alternative. That way I can dump in more soy sauce! : ) Thanks for sharing.
Yummy. I like a good fried rice, and this one looks good! And I absolutely LOVE egg drop soup, so I'm anxious to try your recipe. Bookmarking this recipe, too. Voted up, useful, awesome, interesting!
up, useful, and awesome. This is a very useful article Homesteadbound. With the economy like it is and family, a meal like this is great for the family and good on the wallet. We have to take advantage of this in everyway possible. Good stuff, meals, and article. I always have enjoyed Asian and Chinese foods, especially the fried rice. Take care my dear Homesteadbound. Until next time....
Yummy! I've always wanted to make my own egg drop soup but it never turned out with an authentic taste. Thanks for sharing, can't wait to give this one a try. Voted up and shared.
Great hub! Nicely laid out and easy to follow recipes that are never cheap but instead inexpensive and good for you!
PROBLEM TO BE SOLVED: To provide a pachinko ball box from which a small number of residual pachinko balls can easily be taken out to the last one with the hand without tilting the box.
SOLUTION: In the pachinko ball box, a guiding groove 10, tilted to a degree where the pachinko balls naturally roll in one direction along the inner periphery of the bottom surface part, is formed, and a ball reservoir 11 with an area into which a plurality of fingers can be inserted is formed at the end on the lower side of the guiding groove 10. Thus, since the small number of residual pachinko balls can be collected in the ball reservoir 11 along the guiding groove 10, the pachinko balls can efficiently be taken out to the last one. The pachinko balls collected in the recessed ball reservoir 11 can easily be grasped, since they are never scattered around even when a finger contacts them.
COPYRIGHT: (C)2005,JPO&NCIPI | |
Well let’s be clear, I’m not an expert – not in any of the subjects that I write about in these posts. All 624,378 words – if strung together – would probably not represent enough expertise to grow a hyacinth in one of those glass thingamajigs. In fact the whole idea of being regarded as some kind of guru fills me with horror. This blog isn’t about sharing expertise, it’s about the endlessly puzzling business of being conscious and trying to make some kind of sense of it. My best hope is that I can share some of the little epiphanies that unexpectedly arrive in the course of gardening, cooking, baking, pickling and fermenting , walking and botanising; oh – and loving of course.
The photo is of the asparagus bed – covered for the winter; the garlic which I finished planting out from November pots yesterday and the new strawberry bed (the longer of the two) which I dug out entirely and added four barrow loads of wood chip; then replaced the soil and a layer of compost. Wood chip makes a good substrate when there’s not enough soil to raise a bed; but it rots down quite fast so needs replacing every year or so. I know that the experts say wood chip can acidify the soil but we use it for paths, mulch and raising beds with no discernible ill effects. The strawberries – which are all offsets from the original six special offer plants, have overwintered in the polytunnel.
The smaller of the wooden raised beds is the old hotbed, which we’re not heating this year because of fears of persistent vermicides in the stable manure we used to use. So it’s got four feet of first class topsoil in which we have grown lovely carrots, but to rotate this season it’ll be cucumbers or squashes. I’ve now got a tremendous backache! | https://severnsider.com/2022/02/09/im-no-expert/
Keywords: reinforcement learning, autonomous reinforcement learning, adversarial imitation learning
TL;DR: We formalize the single-life RL problem setting, where given prior data, an agent must complete a novel task autonomously in a single trial, and propose an algorithm (QWALE) that leverages the prior data as guidance to complete the desired task.
Abstract: Reinforcement learning algorithms are typically designed to learn a performant policy that can repeatedly and autonomously complete a task, usually starting from scratch. However, in many real-world situations, the goal might not be to learn a policy that can do the task repeatedly, but simply to perform a new task successfully once in a single trial. For example, imagine a disaster relief robot tasked with retrieving an item from a fallen building, where it cannot get direct supervision from humans. It must retrieve this object within one test-time trial, and must do so while tackling unknown obstacles, though it may leverage knowledge it has of the building before the disaster. We formalize this problem setting, which we call single-life reinforcement learning (SLRL), where an agent must complete a task within a single episode without interventions, utilizing its prior experience while contending with some form of novelty. SLRL provides a natural setting to study the challenge of autonomously adapting to unfamiliar situations, and we find that algorithms designed for standard episodic reinforcement learning often struggle to recover from out-of-distribution states in this setting. Motivated by this observation, we propose an algorithm, Q-weighted adversarial learning (QWALE), which employs a distribution matching strategy that leverages the agent's prior experience as guidance in novel situations. Our experiments on several single-life continuous control problems indicate that methods based on our distribution matching formulation are 20-60% more successful because they can more quickly recover from novel states. | https://openreview.net/forum?id=303XqIQ5c_d |
TECHNICAL FIELD
BACKGROUND ART
DISCLOSURE OF THE INVENTION
BEST MODE FOR CARRYING OUT THE INVENTION
The present invention relates to technology to process a data-driven database, and more particularly to a data-driven database processor, and a processing method and processing program therefore that alleviate the burden of creating an access program.
Conventionally, computer programs for handling data in a database are formatted so as to be sequentially executed regardless of language format. In other words, the respective instructions described in such computer programs have to be executed in a described order.
For example, FIG. 6 illustrates an example of a general program for accessing a database. In FIG. 6, reference numerals 11 and 12 denote two schema definitions existing in the database. A schema definition is a description defining a data structure and, for example, defines a plurality of data items and a table (chart) made up of the data items. In the example illustrated in FIG. 6, a schema definition 11 defines a table A and a schema definition 12 defines a table B.
Reference numeral 13 denotes a database management system, reference numeral 14 a database, and reference numeral 15 a sequential database access program. According to the schema definitions 11 and 12, the database management system 13 receives an access instruction from the database access program 15 and performs data processing on the database 14.
Reference numerals 151 to 1510, denoting described contents of the database access program 15, represent a description example of access instructions for the database. Reference numeral 151 denotes an instruction to substitute a value 1234 into A1. Reference numeral 152 denotes an instruction to add 5 to the value of A1 and substitute the sum into A4. Reference numeral 153 denotes an instruction to subtract 1 from the value of A4 and substitute the difference into A3. Reference numeral 154 denotes an instruction to add up the values of A3 and A4 and substitute the sum into A2.
Reference numeral 155 denotes an insert statement, a SQL statement that is a statement for accessing the database. The insert statement is an instruction to newly store a value in the database 14. In this example, reference numeral 155 denotes an instruction to store data of A1, A2, A3, and A4 in the table A.
Reference numeral 156 denotes an instruction to substitute the value of A1 into B1. Reference numeral 157 denotes an instruction to substitute a summation of the values of A2 of all records in the table A into B4. Reference numeral 158 denotes an instruction to add 3 to B4 and substitute the sum into B3. Reference numeral 159 denotes an instruction to multiply B3 by 2 and substitute the product into B2. Reference numeral 1510 denotes an instruction by an insert statement to store data of B1, B2, B3, and B4 in the table B.
With such a sequential database access program 15, interchanging the sequence of instructions from 151 to 1510 results in changes in the values to be stored in the database 14. For example, if the sequence of 152 and 153 is inversely described and executed, since A3 is computed before a value is substituted into A4, the value of A3 ends up being different from when 152 and 153 are not interchanged. This means that whenever a programmer writes a program, the program must be written with utmost care for the sequence in which instructions are described.
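The order sensitivity described here can be sketched in a few lines (Python used purely for illustration; the variables mirror the data items A1, A3, and A4 of the example):

```python
# Executing the instructions in the described order:
A1 = 1234
A4 = A1 + 5   # add 5 to A1, substitute into A4 -> 1239
A3 = A4 - 1   # subtract 1 from A4, substitute into A3 -> 1238

# Interchanging the two instructions: A3 is computed from a stale A4.
A1s, A4s = 1234, 0            # A4 still holds its previous value (here 0)
A3s = A4s - 1                 # runs first, uses the stale A4 -> -1
A4s = A1s + 5                 # runs second -> 1239

print(A3, A3s)  # 1238 -1: the stored values differ
```

The final value of A4 is the same either way, but A3 is not, which is exactly why the programmer must mind the described sequence.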
Meanwhile, prior methods of reducing the effort expended by a programmer to write a database access program include the methods described below (refer to Non-Patent Document 1). As a first method, as many as possible of instructions which are normally described in a program by a programmer are incorporated into functions of a database management system. For example, by defining a computation between data items in a view description of the database, a description of an instruction to perform the computation can be eliminated from the database access program.
In this case, a view description refers to a description to provide an appearance of a single virtual table by, for example, selecting and deriving desired data from one or a plurality of tables. Since the view description is a virtual table description, unlike a real table, the description itself does not contain data.
Another method is to equip the database management system with a trigger function. A trigger function is a function in which processing such as updating performed on a table triggers processing such as updating on another table. Using the trigger function, in a plurality of database access programs that access the same database, processing normally described at a plurality of locations can now be described integrated at a single location. Accordingly, a reduction in a description length of database access programs has been achieved.
Non-Patent Document 1: Masashi Tsuchida, Takashi Kodera: “SQL 2003 Handbook”, Soft Research Center Inc., 2004
Patent Document 1: Japanese Patent Application Laid-open No. H6-149636
Patent Document 2: Japanese Patent Application Laid-open No. 2003-177948
Moreover, an automatic conversion system that enables a database to use a utilization program of another database by automatically modifying table names, item names, and the like of the program is disclosed in Patent Document 1. Furthermore, a technique for converting data by using open, close, name change (rename), production (create), delete, or the like as a trigger is disclosed in Patent Document 2.
However, there have been no significant changes in the description method of programs for accessing a database and data processing is still performed in the sequence of instructions in programs written by users such as programmers. Therefore, the same situation exists in that a user must create a database access program while being conscious of the sequence in which instructions are described.
Furthermore, in addition to the sequence in which instructions are described, the user must write the database access program while also paying attention to the interrelation of all data items.
Moreover, the user must redundantly describe an instruction for computing the same data item for each database access program. This problematically results in an increase in the program description length.
The present invention has been made in order to solve the aforementioned problems present in conventional art, and an object of the present invention is to provide a data-driven database processor, a data-driven database processing method, and a data-driven database processing program which free a user from having to be conscious of a sequence in which instructions of a program for accessing a database are described, an interrelation of data items, and the like and from having to describe redundant instructions.
In order to achieve the object described above, a data-driven database processor according to the present invention has: schema definition storage means for storing a schema definition of a database; derived definition storage means for storing a derived definition describing a cause-and-effect relationship between a data item in the schema definition and another data item for deriving the data item; derived definition processing means for generating a trigger program that makes a chain of changes to data items based on the derived definition; and a database management system for executing the trigger program when a change is made to another data item that affects the data item. Moreover, inventions that interpret the present invention as a data-driven database processing method and a data-driven database processing program are also modes of the present invention.
In the mode described above, a user need only simply describe a derived definition while focusing solely on a cause-and-effect relationship that shows from which other data item an object data item is derived. Therefore, there is no more need to concentrate on a sequence in which instructions are described and on an interrelation between the object data item and all data items, which had been required by conventional sequential programs. Consequently, the effort required to create a database access program is significantly reduced.
In addition, a chain of changes considered necessary based on a cause-and-effect relationship can be made by a mere single description of a derived definition corresponding to a schema. Therefore, a user need only describe, into the database access program, a data operation instruction to the data item that is causing the cause-and-effect relationship. Consequently, an advantage is gained in that the description length of the database access program is significantly reduced.
As described above, the present invention is capable of providing a data-driven database processor, a data-driven database processing method, and a data-driven database processing program which free a user from having to be conscious of a sequence in which instructions of a program for accessing a database are described, an interrelation of data items, and the like and from having to describe redundant instructions.
Next, best modes (hereinafter referred to as embodiments) for carrying out the present invention will be described in detail with reference to the drawings.
[Summary of Embodiment]
As illustrated in FIG. 1, in the present embodiment, derived definition storage means 3 stores a derived definition that is a description of a cause-and-effect relationship that shows from which other data item a data item described in a schema definition is derived. Based on the derived definition, derived definition processing means 26 generates a trigger program 27 (refer to FIG. 3).

Then, triggered by a change made by a database access program 25 (refer to FIG. 3) to a data item causing the cause-and-effect relationship, a database management system 23 executes the trigger program 27 to enable a chain of changes to be automatically made to the values of all data items affected by the value of the data item.
[Configuration of Embodiment]
Next, a configuration of the present embodiment will be described. A data-driven database processor 1 according to the present embodiment is configured by, for example, implementing functions such as described below with a computer run by a predetermined program. FIG. 1 is a functional block diagram in which each function has been virtually blocked. Moreover, methods of implementing the functions, a program for implementing the functions, and a storage medium storing the program are also modes of the present invention.
That is, the data-driven database processor 1 according to the present embodiment comprises a database 24, a database management system 23, schema definition storage means 2, derived definition storage means 3, derived definition processing means 26, trigger program storage means 5, database access program storage means 6, inputting means 7, outputting means 8, and the like.
The database 24 is storage means storing data to be handled in the present embodiment. The database management system 23 is means for integrally managing access to the database 24. The database management system 23 comprises, for example, an interpreting unit 23a that interprets the database access program 25 to be described later, an executing unit 23b that executes the program according to an interpretation by the interpreting unit 23a, an accessing unit 23c that executes an access to the database 24, and the like.
Consequently, according to a request (inquiry) inputted using the inputting means 7, the database management system 23 is able to retrieve target data from the database 24 and perform requested processing such as adding, changing, and updating of the data. Other functions included in a general database management system such as data consistency and concurrent control are well known techniques and descriptions thereof will be omitted.
The schema definition storage means 2 is means for storing a schema definition defining a data structure for generating a table made up of data items. The derived definition storage means 3 is means for storing a derived definition describing a cause-and-effect relationship for deriving data items. The derived definition processing means 26 is means for creating a trigger program 27, to be described later, based on the derived definition. The derived definition processing means 26 comprises a judging unit 26a and a generating unit 26b. The judging unit 26a is means for judging a sequence in which derived definitions are to be executed based on the cause-and-effect relationship. The generating unit 26b is means for generating the trigger program 27 based on a judgment result of the judging unit 26a.
The trigger program storage means 5 is means for storing the trigger program 27 generated by the derived definition processing means 26. The database access program storage means 6 is means for storing a database access program 25 for accessing the database 24.
The inputting means 7 is means to be used by a user (widely including operators of the database 24 such as a programmer and an administrator) to input various information to the data-driven database processor 1 in order to operate the data-driven database processor 1. Any input device usable at present or in the future such as a keyboard, a mouse, and a touch panel can be used as the inputting means 7. The inputting means 7 enables the user to input a data item, a schema definition, a derived definition, the database access program 25, and the like, and the database management system 23 performs processing in response to such input.
The outputting means 8 is means for enabling contents of the database 24, progress of processing, and the like of the data-driven database processor 1 to become visible to the user by outputting the contents, progress, and the like in various modes. Any output device usable at present or in the future such as a display and a printer can be used as the outputting means 8.
Any storage medium usable at present or in the future, such as various memories and hard disks of a computer, is applicable as a storage area for the schema definition storage means 2, the derived definition storage means 3, the trigger program storage means 5, the database access program storage means 6, the database 24, and the like described above. While the respective storage means 2, 3, 5, and 6 and the database 24 have been conceptually distinguished from each other in the above description, a part or all of the storage means and the database 24 may be implemented in a common storage medium.
In addition, the derived definition processing means 26, the database management system 23, and the like are to be implemented by a control unit comprising a CPU, a memory, and other peripheral circuits which run according to a program. The control unit is equipped with functions included in a general computer, such as an inputting/outputting function of information between the inputting means 7 and the outputting means 8 and a computing function.
[Operations of Embodiment]
An example of processing according to the present embodiment configured as shown above will now be described with reference to FIGS. 2 to 4.
[Summary of Processing]
First, a summary of processing of the present embodiment will be described using a flowchart illustrated in FIG. 2. Let us assume that a schema definition inputted by a user in advance using the inputting means 7 is stored in the schema definition storage means 2. When a derived definition corresponding to such a schema definition is inputted by the user using the inputting means 7 (step 101), the derived definition is stored in the derived definition storage means 3 (step 102).
Based on the cause-and-effect relationships of the derived definitions, the judging unit 26a of the derived definition processing means 26 judges a sequence in which the derived definitions are to be executed (step 103), and according to the judgment result, the generating unit 26b generates a trigger program 27 (step 104). The trigger program 27 is stored in the trigger program storage means (step 105).
Then, triggered by the execution of the database access program 25 by the database management system 23 (step 106), the trigger program 27 is executed by the database management system 23 (step 107).
[Specific Example of Processing]
Next, a specific example of processing using derived definitions 21 and 22 will be described with reference to FIGS. 3 and 4. In FIG. 3, reference numeral 21 denotes a derived definition of a table A and 22 denotes a derived definition of a table B. Reference numeral 27 denotes a trigger program converted and generated by the derived definition processing means 26. Reference numeral 25 denotes a database access program for accessing a database 24.
The derived definition 21 of the table A and the derived definition 22 of the table B are inputted by a user using the inputting means 7 and stored in the derived definition storage means 3 in association with the respective data items described in the schema definition storage means 2. At this point, the user need only describe a definition by solely focusing on the cause-and-effect relationship that shows "which other data item has a value from which the value of a given data item is solely derived". In other words, the user is not required to consider the sequence in which computations among the data items are to be performed.
For example, in the table A, reference numeral 211 denotes an example defining that a value of A2 is derived by a sum of a value of A3 and a value of A4. Reference numeral 212 denotes an example defining that a value of A3 is derived from a value obtained by subtracting 1 from a value of A4. Reference numeral 213 denotes an example defining that a value of A4 is derived from a value obtained by adding 5 to a value of A1.
Furthermore, similarly in the table B, reference numeral 221 denotes an example defining that a value of B1 is derived from a value of A1 in the table A. Reference numeral 222 denotes an example defining that a value of B2 is derived from a value obtained by multiplying a value of B3 by 2. Reference numeral 223 denotes an example defining that a value of B3 is derived from a value obtained by adding 3 to a value of B4. Reference numeral 224 denotes an example defining that a value of B4 is derived from a summation of the values of A2 of all records in the table A.
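Because each derived definition records only a cause-and-effect relationship, the whole set can be written down in any order, for instance as a plain mapping from each data item to its source items and deriving expression (a sketch; the lambda encoding is ours, not the patent's notation, and the summation over records is collapsed to a single record):

```python
# item -> (source items, deriving expression); entries may appear in any order
derived = {
    "A2": (("A3", "A4"), lambda v: v["A3"] + v["A4"]),  # 211
    "A3": (("A4",),      lambda v: v["A4"] - 1),        # 212
    "A4": (("A1",),      lambda v: v["A1"] + 5),        # 213
    "B1": (("A1",),      lambda v: v["A1"]),            # 221
    "B2": (("B3",),      lambda v: v["B3"] * 2),        # 222
    "B3": (("B4",),      lambda v: v["B4"] + 3),        # 223
    "B4": (("A2",),      lambda v: v["A2"]),            # 224 (sum over one record)
}
```

No entry refers to execution order; the order is recovered later from the relationships themselves.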
In this case, the contents respectively described in the derived definition 21 and the derived definition 22 can be independently described in individual data items defined by the schema. Therefore, it is unnecessary to consider in what sequence the computations of the respective data items must be executed.
Based on the defined derived definitions 211 to 213 of the table A and the defined derived definitions 221 to 224 of the table B, the judging unit 26a of the derived definition processing means 26 judges a sequence of execution from the respective cause-and-effect relationships. For example, since the data items determined by the value of A1 are A4, which is determined by the derived definition 213 in the table A, and B1, which is determined by the derived definition 221 in the table B, these derived definitions first become executable.
Next, since the data item determined by the value of A4 is A3, which is determined by the derived definition 212 in the table A, the derived definition 212 becomes executable. Next, since the data item determined by the values of A3 and A4 is A2, which is determined by the derived definition 211, the derived definition 211 becomes executable.
Next, since the data item determined by the value of A2 is B4, which is determined by the derived definition 224 in the table B, the derived definition 224 becomes executable. Next, since the data item determined by the value of B4 is B3, which is determined by the derived definition 223, the derived definition 223 becomes executable. Next, since the data item determined by the value of B3 is B2, which is determined by the derived definition 222, the derived definition 222 becomes executable. As shown, in the example illustrated in FIG. 3, an execution sequence of derived definitions can be determined by judging whether or not a given data item can be determined by data items already determined and preferentially executing the derived definition of the data item that can be determined.
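The judgment spelled out here, repeatedly picking whichever derived definition's inputs are already determined, is in effect a topological sort of the cause-and-effect graph. A sketch (the dependency table restates derived definitions 211 to 224; A1 is the item changed by the access program):

```python
# item -> the data items its value is derived from
deps = {
    "A4": ["A1"], "B1": ["A1"],
    "A3": ["A4"], "A2": ["A3", "A4"],
    "B4": ["A2"], "B3": ["B4"], "B2": ["B3"],
}

determined = {"A1"}   # only A1 has been given a value so far
order = []
while len(order) < len(deps):
    for item, sources in deps.items():
        if item not in determined and all(s in determined for s in sources):
            order.append(item)
            determined.add(item)

print(order)  # ['A4', 'B1', 'A3', 'A2', 'B4', 'B3', 'B2']
```

The result reproduces the sequence derived in the text: A4 and B1 first, then A3, A2, B4, B3, and finally B2.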
Subsequently, based on the judged execution sequence, the generating unit 26b generates a trigger program 32 such as that illustrated in FIG. 4. A group of instructions 321 to 329 in the trigger program 32 respectively correspond to derived definitions 211 to 213 and 221 to 224 of the derived definition 21 of the table A and the derived definition 22 of the table B. In this manner, based on the cause-and-effect relationships of the values of the respective data items, the trigger program 32 that is interpretable and executable by the database management system 23 is automatically converted and generated by the generating unit 26b.
The generated trigger program 32 is read by the database management system 23 and registered in the trigger program storage means 5. Subsequently, by executing the database access program 25, the database management system 23 executes the trigger program 32 at the moment a change is made to the data items described in the derived definitions 21 and 22.
Access to the database 24 is enabled from the database access program 25 by an instruction of a SQL statement. In this case, merely describing a substitution instruction 251 to substitute a value 1234 into A1 and an insert statement 252, a SQL statement to store the value of A1 in the table A of the database 24, will suffice. Consequently, descriptions 152 to 1510 in the conventional database access program description illustrated in FIG. 6 become unnecessary. Alternatively, a single line of the description of the insert statement 252 that is a SQL statement provides the same processing result.
As illustrated by the example in FIG. 4, the execution of the program statement 252 described in the database access program 25, that is, the insert statement 31, a SQL statement to A1, a data item in the table A, triggers the chain execution of the instructions 321 to 329 in the trigger program 32, as follows.
That is, triggered by the storage (by the insert statement 31) of a value of A1 into the table A, a substitution of a value obtained by adding 5 to the value of A1 into A4 is executed (321). In addition, triggered by the storage of the value of A1, a substitution of the value of A1 into B1 is executed (325). Furthermore, triggered by the substitution into the value of A4, a substitution of a value obtained by subtracting 1 from the value of A4 into A3 is executed (322). Furthermore, triggered by the substitution into the value of A3, a substitution of a sum of the values of A3 and A4 into A2 is executed (323). In this manner, triggered by the substitution of the values of all data items described in the schema of the table A, storage into the table A is executed (324).
Next, triggered by the storage into the table A, a substitution of a summation of the values of all A2s existing in the table A into B4 is executed (326). In addition, triggered by the substitution into the value of B4, a substitution of a value obtained by adding 3 to the value of B4 into B3 is executed (327). Furthermore, triggered by the substitution into the value of B3, a substitution of a value obtained by multiplying B3 by 2 into B2 is executed (328). In this manner, triggered by the substitution of the values of all data items described in the schema of the table B, storage into the table B is executed (329).
Accordingly, triggered by the storage of A1, a single data item, into the database, all processing (321 to 329) in the trigger program 32 is automatically chain-executed.
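The chain can be simulated end to end: storing the single value A1 = 1234 drives every other item (a Python sketch of the data flow only, not of the generated trigger code; the one-record table makes the summation over A2 trivial):

```python
v = {"A1": 1234}             # the only value stored by the access program

v["A4"] = v["A1"] + 5        # 321: A4 = 1239
v["B1"] = v["A1"]            # 325: B1 = 1234
v["A3"] = v["A4"] - 1        # 322: A3 = 1238
v["A2"] = v["A3"] + v["A4"]  # 323: A2 = 2477
v["B4"] = sum([v["A2"]])     # 326: summation of A2 over all records = 2477
v["B3"] = v["B4"] + 3        # 327: B3 = 2480
v["B2"] = v["B3"] * 2        # 328: B2 = 4960

print(sorted(v.items()))
```

One stored value, seven chain-derived values; the access program itself never mentions A2 through B4.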
[Advantages of Embodiment]
Advantages of the present embodiment (present invention) described above are as follows. That is, according to the present embodiment, derived definitions for deriving the respective data items can be described without having to consider the sequence in which the respective derived definitions are executed. As a result, while it is conventionally required that consideration be given to the execution sequence when a user creates a sequential program, the present embodiment significantly reduces the user's effort for considering the execution sequence.
Since a CPU sequentially executes a program, the fact remains that the program to be ultimately executed must be described in a predetermined sequence. However, there is no reason that all future attempts to program a database by a user must be bound by this constraint. An ideal programming method ought to be not a method that requires a human to suit the convenience of a machine but a method that enables programming that suits the naturally-occurring thoughts of a human. The present embodiment frees a user from the constraints of sequential programming, and is significant in that the skill required of a person using the database can be reduced and that database programming styles themselves can be dramatically altered.
In addition, since only a derived definition for deriving each data item need be considered and the mutual relationships between all data items need not be considered, the effort required of a user can be further reduced.
Furthermore, with a database access program, merely making a change to a data item that acts as an initial trigger shall suffice. Therefore, the description length of the database access program decreases dramatically.
For example, while a database access program 15 in the conventional model illustrated in FIG. 6 requires a program description length of 10 lines from 151 to 1510, the present embodiment only requires a program description length of 2 lines, 251 and 252. Other descriptions need only be described in the derived definition 21 of the table A and the derived definition 22 of the table B illustrated in FIG. 3, which are descriptions of cause-and-effect relationships among the respective data items. Therefore, even if there is a large number of database access programs 25 for accessing the tables A and B, the program description length of each database access program 25 can be significantly reduced.
Moreover, the present embodiment is not simply characterized by automatically generating a program or chain-processing desired data. This becomes evident through a comparison with a function in conventional spreadsheet software or processing similar to such a function. For example, when using (retrieving, displaying, updating, or the like) a given piece of data, an Excel function performs a computation on a specified argument based on a predefined operational expression and performs a virtual tally by returning a return value.
In addition, for example, an operational expression can be written into a view description of a database management system. This is similar to a function in that a virtual tally is performed when using data. Furthermore, some database management systems are capable of virtually obtaining a value by describing a mathematical expression using data items in the same table. In this case, the only difference is that a mathematical expression is directly written instead of using a function, and the fact remains that a computation or a tally is virtually performed upon the use of data.
However, the present embodiment does not virtually obtain a calculation or tally result as was the case in the aforementioned functions and similar processing. The present embodiment is characterized in that, upon data update, the related data affected by that data are updated and the computation results are actually retained in the database. For example, upon storage of A1 in the database 24 by the insert statement of SQL denoted in FIG. 4 by reference numeral 31, the values of all items affected by A1, in other words, A2, A3, A4, B1, B2, B3, and B4, are automatically chain-updated.
The present embodiment differs from functions and the like in this regard. Functional programming is extremely time-consuming because a function runs when using data. In the present embodiment, since the database retains data actually computed and updated, no computation is required when using (reading) the data and the data can be immediately retrieved. Generally, between data update and data read, data read is performed much more frequently than data update. Therefore, according to the present embodiment, the load on the entire system can be reduced.
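The contrast with functions and views can be made concrete: a function recomputes the derived value on every read, whereas the present scheme computes it once when A1 changes and then serves plain reads (an illustrative sketch, not the patent's implementation):

```python
data = {"A1": 1234}

# Function/view style: the derived value is recomputed on every use.
def a4_virtual():
    return data["A1"] + 5

# Data-driven style: computed once on update, then read directly.
data["A4"] = data["A1"] + 5

# Both yield the same value, but repeated reads of data["A4"] are plain
# lookups, while repeated calls of a4_virtual() repeat the computation.
print(a4_virtual(), data["A4"])  # 1239 1239
```

Since reads typically far outnumber updates, paying the computation once per update rather than once per read is what reduces the load on the system as a whole.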
Moreover, in the present embodiment, even a value straddling different tables is computed, tallied, and updated. For example, reference numeral 224 illustrated in FIG. 3 denotes tallying the A2s in the table A and entering a summation thereof into B4. Therefore, it can be stated that the present embodiment is superior to a conventional database management system that is incapable of tallying and computing data straddling different tables even when employing a function method.
[Other Embodiments]
The present invention is not limited to the embodiment described above, and any specific mode of a schema definition, a derived definition, a trigger program, and a database access program can be adopted. In addition, criteria for judging an execution sequence of a trigger program from a cause-and-effect relationship of respective data items in a derived definition are similarly not limited to those exemplified in the embodiment described above.
For example, in the aforementioned embodiment, since A4=A1+5 and A2=A3+A4, there is a direct cause-and-effect relationship between A4 and A2, and the execution sequence of the expressions is to be determined by the sequential relationship. In addition, if a cause-and-effect relationship exists, albeit indirect, such as the case of A4=A1+5, A2=A3+1, and A3=A4−1, then the execution sequence of the respective expressions is to be determined by the sequential relationship.
However, if a cause-and-effect relationship does not exist between A4 and A2 as in the case of, for example, A4=A1+5 and A2=A1+1, then a computation sequence cannot be unambiguously determined. In this case, the aforementioned judging unit 26a can judge that the expressions have a same-level sequence and cause parallel processing, to be described later, to be performed, or determine an execution sequence based on predetermined criteria. An example of determining an execution sequence will be described below. It should be noted that this is merely an example and an execution sequence should be determined based on the processing ability of a system or according to the convenience of the system. As such, the present invention is not limited to this example.
Let us assume a case involving a table A, a table B, and a table C according to a schema definition illustrated in FIG. 5. Let us also assume that, in this case, the following expressions exist as derived definitions:

A4=A1+5
A2=A1+1
C1=sum(A1)
B4=sum(A1)

Now, if A1 is determined, then a sequence can be determined as follows.
[1. When expression is obtainable by same field in table (for example, A4=A1+5)]
(Method 1: Adopt Sequence of Field Descriptions in Schema)
Between A4 and A2, since A2 is described earlier in the schema definition in the aforementioned table A, the execution order is A2, A4.
(Method 2: Adopt Sequence According to Characters Such as Alphabet and Numerals of Field Names)
Between A4 and A2, since A2 takes precedence in alphanumeric order, the execution order is A2, A4.
[2. When expression also includes an expression obtainable by field of another table (for example, when expression includes B4=sum(A1))]
A conceivable method involves first executing an expression of the field in the same table (in the example, A2, A4; an execution sequence therein should be determined by applying the method described in [1 . . . ] above), and next executing the expression that uses a field in another table.
Furthermore, when there are a plurality of expressions using a field in another table, as described in [1 . . . ] above, execution can conceivably take place according to a sequence of tables described in the schema (a sequence such as table A, table B, and table C) or a sequence according to characters in the table names. In this example, since the sequence of table definitions in the database is table A, table B, and table C, a determination of a value of A1 by the expression described above results in a sequence of A2, A4, B4, and C1.
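The ordering rules above amount to a topological sort of the derived definitions with deterministic tie-breaking. A hedged sketch in Python (the dependency map encodes this section's example — A2=A1+1, A4=A1+5, B4=sum(A1), C1=sum(A1) — and the tie-breaker applies table order in the schema, then alphanumeric field order):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each target field maps to the set of fields it reads; here every
# derived field depends only on A1.
deps = {"A2": {"A1"}, "A4": {"A1"}, "B4": {"A1"}, "C1": {"A1"}}
tables = ["A", "B", "C"]  # sequence of table definitions in the schema

def tie_break(field):
    # Same-level fields are ordered by their table's position in the
    # schema, then alphanumerically by field name (Method 2 above).
    return (tables.index(field[0]), field)

ts = TopologicalSorter(deps)
ts.prepare()
order = []
while ts.is_active():
    ready = sorted(ts.get_ready(), key=tie_break)  # one same-level group
    order.extend(ready)
    ts.done(*ready)

print(order)  # ['A1', 'A2', 'A4', 'B4', 'C1']
```

The result reproduces the sequence stated in the text: once A1 is determined, A2 and A4 run first (same table), followed by B4 and C1 in schema table order.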
In addition, expressions judged to have the same-level sequence among expressions defined by a derived definition statement can be parallel-processed. For example, according to the way computers are structured today, a single CPU can only execute processes sequentially. However, parallel processing can be performed with a processor having a plurality of CPUs. Consequently, data can be parallel-processed and processing speed can be increased.
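For illustration only (CPython threads will not give true CPU parallelism because of the interpreter lock; a processor with a plurality of CPUs or a process pool would), same-level expressions such as A4=A1+5 and A2=A1+1 can be submitted concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

# A4 = A1 + 5 and A2 = A1 + 1 have no cause-and-effect relationship,
# so once A1 is known they can be evaluated in parallel.
a1 = 10
exprs = {"A4": lambda: a1 + 5, "A2": lambda: a1 + 1}

with ThreadPoolExecutor() as pool:
    futures = {name: pool.submit(fn) for name, fn in exprs.items()}
    results = {name: f.result() for name, f in futures.items()}

print(results)  # {'A4': 15, 'A2': 11}
```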
Furthermore, the present invention is not limited to a configuration implemented by a single specific computer and can be configured by a plurality of server devices including a database server and a server for other data processing, or by a system in which processing is distributed over a plurality of computers according to each function block and the plurality of computers work together to perform processing. For example, the parallel processing described above can also be performed by distributed processing.
Moreover, while the following cause-and-effect relationships have been exemplified in the embodiment described above:
(a) a cause-and-effect relationship in a field in the same table;
(b) a cause-and-effect relationship in a field straddling tables,
the following cause-and-effect relationships can also be considered:
(c) a cause-and-effect relationship in a field straddling a plurality of databases;
(d) a cause-and-effect relationship in a field straddling databases in other servers.
Furthermore, data to be handled by the present invention is not limited to any particular type of data. The present invention is widely applicable as a program for accessing an ordinary database.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a functional block diagram illustrating an embodiment of a data-driven database processor according to the present invention;
FIG. 2 is a flowchart illustrating a processing flow according to the embodiment illustrated in FIG. 1;
FIG. 3 is an explanatory diagram illustrating an example of access to a database according to the embodiment illustrated in FIG. 1;
FIG. 4 is an explanatory diagram illustrating an example of an execution of a trigger program according to the embodiment illustrated in FIG. 1;
FIG. 5 is an explanatory diagram illustrating schema definitions of a table A, a table B, and a table C of another embodiment of the data-driven database processor according to the present invention; and
FIG. 6 is an explanatory diagram illustrating an example of access to a database according to conventional art.
--- Day 13: A Maze of Twisty Little Cubicles ---
You arrive at the first floor of this new building to discover a much less welcoming environment than the shiny atrium of the last one. Instead, you are in a maze of twisty little cubicles, all alike.
Every location in this area is addressed by a pair of non-negative integers (x,y). Each such coordinate is either a wall or an open space. You can't move diagonally. The cube maze starts at 0,0 and seems to extend infinitely toward positive x and y; negative values are invalid, as they represent a location outside the building. You are in a small waiting area at 1,1.
While it seems chaotic, a nearby morale-boosting poster explains, the layout is actually quite logical. You can determine whether a given x,y coordinate will be a wall or an open space using a simple system:
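The rule itself is truncated here. In the published puzzle (Advent of Code 2016, day 13), it is: compute x*x + 3*x + 2*x*y + y + y*y, add the office designer's favorite number (your puzzle input), and count the 1 bits of the sum — an even count means open space, an odd count means wall. A sketch:

```python
def is_open(x: int, y: int, favorite: int) -> bool:
    """True if (x, y) is an open space under the puzzle's rule."""
    if x < 0 or y < 0:
        return False  # negative coordinates are outside the building
    value = x * x + 3 * x + 2 * x * y + y + y * y + favorite
    return bin(value).count("1") % 2 == 0  # even popcount => open

# With favorite number 10, this reproduces the start of the puzzle's
# sample maze:
#   .#.####.##
#   ..#..#...#
#   #....##...
for row in range(3):
    print("".join("." if is_open(col, row, 10) else "#" for col in range(10)))
```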
Hiring Bar Staff for busy Christmas period!!
No experience necessary as training is provided. The role will involve serving customers, glass collecting, stocking, etc.
30+ hours per week, with a view to a part-time position after Christmas if the candidate is suitable.
Contact Brian with CV or call 0851313863.
Job Types: Part-time, Temporary
Contract length: 3 weeks
Part-time hours: 30 per week
Salary: €10.50-€11.00 per hour
Flexible Language Requirement:
- English not required
Schedule:
- 10 hour shift
- 8 hour shift
Supplemental pay types:
- Tips
Ability to commute/relocate:
- Glanmire, Co. Cork: reliably commute or plan to relocate before starting work (required)
Experience:
- Hospitality: 1 year (preferred)
Language:
- English (preferred)
Shift availability:
DHL to double production of electric vehicles
Deutsche Post DHL Group is to double production of its electric vehicles from 10,000 to 20,000 by the end of this year. DHL is now commissioning another production location for its StreetScooter electric van in North Rhine-Westphalia.
The logistics business has said that it is also selling its own electric vehicles, made for postal and delivery operations, to third parties. DHL said that at least half of this year's production is set for third-party buyers.
DHL is hoping to at least double its own StreetScooter fleet for letter and parcel deliveries – it currently has 2,500 vehicles for these operations.
This lesson was written in partnership with Dorena Martineau, the Paiute Cultural Resource Director, and Shanan Martineau Anderson, a member of the Shivwits band of Paiutes that specializes in Native American universal sign language as well as petroglyphs and pictographs. It was approved by the Paiute Indian Tribe of Utah’s Tribal Council. Before teaching this lesson, please explain to your students that there are many indigenous tribes in the United States and that this lesson specifically focuses on the Paiute Indian Tribe of Utah and does not represent other Native American groups. It is the hope of the Paiutes that other native tribes will respect their choice to share these aspects of their history and culture.
Petroglyphs and Pictographs
Show slides 2-8 from the PowerPoint presentation to see how many signs and symbols the students can identify.
Teacher: We can use symbols and pictures to communicate ideas. Throughout history, many people have used images to represent thoughts. Some have even preserved images on rocks.
Show images in slides 9 and 10. Explain the difference between petroglyphs and pictographs (see slides for more information).
Explain that these pictures are considered ideograms. An ideogram is a symbol that is used to represent an idea, concept, word, or phrase. Instead of writing out a whole sentence using letters and words, you could communicate the same ideas by using ideograms. Ideograms can be understood even if people speak a different language, which makes them useful messages to leave behind. Native American picture writing, ancient Egyptian hieroglyphics, and Chinese characters are all examples of ideographic writing.
Show image of Rock Writing Panel (slide 11). Explain that some native tribes carved or painted ideograms as a way of telling stories from their lives and recording their history.
Show an image of Vandalized Rock Writing (slide 12) and address the importance of being respectful to others’ cultures and their history.
Show slides 13-18. Teacher: These images are examples of Paiute picture writing. Today, the Paiute Indian Tribe of Utah is a federally recognized Native American tribe. The Southern Paiute language doesn’t have words that are written; however, they have used another form of documentation where pictures and symbols were etched into or painted on rocks, known by the Paiutes as “storied rocks.” These images communicate the stories, history, and other aspects of life that those who created the pictures felt were worthy of the time it took to record on rocks. Storied rocks were created by many Native American tribes across North America. Even though their spoken languages were different, the universal images on the rocks were still understandable by various tribes. Now, because the Paiute language is being lost, those who can understand the stories on the rocks are trying to share their importance with others. This set of pictures was put together by Shanandoah Martineau Anderson, a Southern Paiute, to share how the Paiute symbols can be read today.
Show slides 19-21 for more information about the Paiute tribe.
Logos
The Paiute Indian Tribe of Utah consists of five constituent bands, meaning they combine to create a whole. Each band has its own logo or seal. A logo is a symbol or ideogram that is meant to identify an organization or group.
Show students the Paiute Logo images and explanations (slides 22-28).
Teacher: We see logos all around us every day. Here are some common logos that you will probably recognize. Show slides 29-33. Go over logo meanings.
Teacher: Today you will be designing your own logo. Your logo can represent yourself, your family, your class, school, or any other group that is meaningful to you. Before you start drawing your logo, let’s brainstorm some ideas.
Have students choose what they want their logo to represent. Then have them take a couple of minutes to reflect on the following questions:
- What is important to you or your group?
- If you could describe yourself or your group in three words, what would they be?
- What is something unique about you or your group?
Have students do the following:
Choose three words that they associate with themselves or their group and have them write each of those words in the middle of three separate sheets of paper. Direct students to create a word web by expanding on each word with additional words that relate (slide 35).
Organize the words into three columns (slide 36).
Cross off words that aren’t as useful in creating your logo. Then mix and match to make unexpected combinations (slide 37).
Choose the strongest combination. Sketch the combination in as many different ways and styles as possible. Try to simplify shapes and details (slide 38).
Choose your combination of styles and draw your final design in pencil (slide 39).
Printmaking
Inform students that the printmaking process will turn letters and other images backwards, into a mirror image. Students also should know that the marks that they make with their pens will eventually end up white on their print.
- Cut out the drawing into a square or rectangle small enough to fit onto a foam plate (also known as the trademarked brand Styrofoam).
- Trace around the paper onto the foam surface using a pen.
- Cut out the piece of foam.
- Take the paper with the logo drawing and flip it over. Shade in the backside with a pencil or a dark piece of chalk. This will help transfer the image to the foam plate later.
- Place your logo drawing on top of the piece of foam.
- Hold the paper down so that it doesn't move around and use a ballpoint pen to carefully trace over your pencil lines. Use a medium pressure so that it creates an indentation in the plate but be careful to not press too hard or you will rip the paper or puncture your plate. Small amounts of the pencil shading on the back should transfer over so that you can see your marks better on the foam plate.
- Remove the paper and trace over the existing lines on the foam plate with your ballpoint pen. This will reinforce the lines and make sure that there is sufficient indentation in the foam. Be careful to not poke all the way through the foam.
- Use water soluble markers to color over the foam plate. Color with the lighter colors first then add darker colors last.
- Dip your sponge in water and squeeze out excess. Make sure the sponge isn’t too saturated with water. Drag the wet sponge evenly across a piece of paper twice so that the paper is slightly damp.
- Immediately place the foam plate face down in the center of the paper.
- Press down on the foam plate making sure not to let it shift or slide out of place. Lightly massage it and push down in different areas while keeping it in place.
- Carefully remove the foam plate from the paper without smudging the ink.
- Repeat the process to improve your prints or try a new color scheme. Residual colors from the previous print can be wiped off of the printing plate with a wet paper towel or they can be left to add layers to the next print.
If you don’t have the time or the materials to do a printmaking project in your classroom, consider having the students color in their logos with colored pencils or markers.
Reflection
Each band found it important to share the meaning of their logo. Have students write about the meaning behind their logo or give them an opportunity to talk about it with others.
'Bykovsky belonged to the first generation of Soviet cosmonauts, who wrote many bright pages in the glorious history of Russian manned cosmonauts,' officials at the Gagarin Cosmonaut Training Center in Star City said in a statement.
Bykovsky spent his time in space conducting scientific experiments, and even recorded himself exercising in zero-gravity, studying his body's reaction to the environment.
During his first mission, Bykovsky planned to stay in orbit of the Earth for eight days, but increased solar flare activity brought his voyage to an end after four days, 23 hours and 17 minutes.
The mission remains the longest solo spaceflight in history.
He landed back on Earth on June 19, 1963, three hours after Valentina Tereshkova, the first woman in space.
Mr Bykovsky made his second space flight 13 years later, in 1976. Flying as the commander of Soyuz 22, Bykovsky spent nearly seven days taking more than 2,400 photographs of Earth's horizon, and images of the moon rising and setting through the planet's atmosphere.
Bykovsky's third and final trip came in 1978, with Germany's first citizen in space, Sigmund Jahn.
Born in Pavlovsky Posad, in August 1934, Bykovsky spent 20 days, 17 hours and 47 minutes in Earth's orbit.
He also served as a backup to the Vostok 3 and Soyuz 37 missions. Bykovsky was primed to enter orbit with the Soyuz 2 crew, but the mission was cancelled after its predecessor, Soyuz 1, ended in tragedy.
For his service to Russia's space program, Bykovsky was named a Hero of the Soviet Union and awarded the Order of Lenin and the Order of the Red Star, amassing numerous other Russian and international accolades along the way.
Bykovsky was married to Valentina Sukhova, with whom he had two children.
He is survived by his son Sergei. His daughter Valery died in 1986.
'The leadership of the Cosmonaut Training Center, pilot-cosmonauts of the USSR and Russia and the whole team offer their condolences to the families and friends of Valery Fyodorovich,' Gagarin Cosmonaut Training Center said in a statement.
I didn’t have a specific reading goal for 2018. That is, I didn’t say to myself, I am going to read 50 books this year. Way back when I started keeping my list of books in 1996, I did have a goal: Read one book per week. It seemed reasonable at the time, and yet I never managed to make that goal until 2013, when I read 54 books.
Having a book count as a goal is tricky. Books vary in length. This year, for instance, the average length of books I read was 473 pages. But there is wide variation. The shortest book I read this year was The Testament of Mary by Colm Tóibín, which came in at 96 pages. The longest book I read this year was The Power Broker by Robert A. Caro, which exceeded 1,300 pages. I read 16 books this year that I consider to be “long” books, each exceeding 700 pages. Such variation makes it difficult to set a specific number of books as a goal.
As most of my reading comes through audiobooks, I rely more on how much time I can spend listening to books each day. Audiobooks make it easy to listen to books while doing other things: working out, commuting, doing chores around the house, waiting in line, watching your kid’s soccer or basketball practice. Audiobooks turn out to be one of my best productivity tricks. Early in the year I set a target of 3-1/2 hours of listening time per day.
From that start, I looked at the average length of the books I’ve read since I started listening to audiobooks in 2013. It turns out to be 453 pages, which translates into an average of 17 hours, 45 minutes of listening time per book. Well, knowing that, and with a target of 3-1/2 hours of listening time per day, I knew it would take me about 5 days on average to finish a book. And knowing that, I could make a reasonable estimate of how many books I could listen to in a year. That came out to about 73 books, far more than any previous year.
At first, that number seemed completely unreasonable. In 22 previous years the best I’d ever done was last year, when I finished 58 books. 73 books would be a 25% increase over last year.
Then, over the last 5 years that I’ve listened to audiobooks, I’ve steadily increased the speed at which I listen. I started at 1x and after a long time, moved to 1.25x. Early this year, I moved to 1.5x. Then, this fall, when a new Audible app update introduced the 1.75x speed, I started listening at that speed. Each jump takes some getting used to initially. For the most part, these days, I listen to nonfiction at 1.75x and fiction at 1.5x. When I try to listen to a book at 1x these days, the narrator sounds as if they are on quaaludes.
This had a significant impact on how much I managed to read this year. At 1x speed and an average of 4 hours 15 minutes per day, I can get through 7 books in a month. By comparison, at 1.75x speed, I can get through almost 13 books in the same month. Over the course of an entire year, that’s 150 books! But as I didn’t make this change until more than halfway through the year, I adjusted my goal to something I still thought of as a stretch: 120 books for 2018.
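As a quick sanity check on the arithmetic in the last few paragraphs (all inputs are figures from the post: roughly 17 hours 45 minutes of audio per average-length book, and 4 hours 15 minutes of listening per day):

```python
AVG_HOURS_PER_BOOK = 17.75  # ≈453-page average book, per the post
HOURS_PER_DAY = 4.25        # average daily listening time cited above

def books_per_month(speed: float) -> float:
    """Books finished in a 30-day month at a given playback speed."""
    return 30 * HOURS_PER_DAY * speed / AVG_HOURS_PER_BOOK

print(round(books_per_month(1.0)))   # 7  books/month at 1x
print(round(books_per_month(1.75)))  # 13 books/month at 1.75x
# Over a full 365-day year at 1.75x: ~153 books (the post rounds to 150).
print(round(365 * HOURS_PER_DAY * 1.75 / AVG_HOURS_PER_BOOK))
```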
Goodreads has an annual reading challenge where you can set a goal and track your progress, along with that of your friends. So I went into Goodreads and set a goal of 120 books. It looked to be a lot more books than what I was seeing for many people. Indeed, it turns out that the average goal for the Goodreads challenge this year is 59 books. My goal of 120 books is double that. I figured I’d come close, but fall a few books short.
I finished my 120th book in early December. It’s hard to believe, even with the evidence right there in front of me. And given that I’ve been averaging 14-15 books/month for the last few months, and that the second half of December I’ll be on vacation, I think it is safe to assume that I’ll finish 2018 in the neighborhood of 135 books.
So what is my goal for 2019? I’m tempted to set a goal of 148 books for 2019. That may seem like an odd number to pick, but there is some logic to it. Assuming I finish 14 more books this year, 148 books next year means that my last book of 2019 will be my 1,000th book since I started keeping my list in 1996.
That is a stretch goal if ever there was one, but I think stretch goals are good, and it gives me something with extra meaning to aim for.
Anyone else have reading goals for 2019? Let me know in the comments.
And for those wondering about the best books I’ve read in 2018, I’ll have a post on that–in January. I don’t think it is fair to put out a “best of” list for 2018 before the year is over. Back in 2016 the best book I read that year was Born to Run by Bruce Springsteen. I didn’t read that book until the end of December. So look for my “best of” list in January.
I’ve been doing 52 books a year for the past five years. I generally exceed it by a few, with 80 being my high point thus far. The wrench in the works is that I want to write a bit of a review for each book I read, which slows down the process.
If I was to add audio fiction I could easily hit 100 but I’d cut back on my podcast listening which I also value.
Avid reader here – at least one book per week; the last four-five years – almost twice. The last two years – well over 100. Almost 150 last year, and I think I’ll be able to reach (even pass) that amount this year. I include audiobooks.
Status: built
Construction Dates: Began 1997; Finished 2003
Floor Count: 88
Basement Floors: 6
Floor Area: 185,805 m²
Building Uses: office; parking garage; retail
Structural Types: highrise; mall
Architectural Style: postmodern
Materials: glass; steel
Heights (Value — Source / Comments):
Spire: 1352 ft — Architect blueprints (tip of fins)
Roof: 1335 ft — Architect blueprints (highest steel platform)
Floor 89, ceiling: 1306 ft — Architect blueprints
Floor 88, ceiling: 1291 ft — Architect blueprints
Street level, highest: 6 ft — Architect blueprints
Datum: 0 ft — Sea level; -26 ft — Architect blueprints
Description
Architect: Rocco Design Ltd.
Design Architect: Cesar Pelli & Associates Architects
Tallest building in Hong Kong until the completion of International Commerce Centre in 2010.
The International Finance Centre complex consists of four buildings: 1 (http://www.skyscraperpage.com/cities/?buildingID=2735) and 2 International Finance Centre, Four Season Suites (http://www.skyscraperpage.com/cities/?buildingID=8450), and Four Seasons Hotel Hong Kong (http://www.skyscraperpage.com/cities/?buildingID=8457).
Two International Finance Centre includes three levels of car parking that will provide over 1,800 car parking spaces for the office and shopping mall.
The width of the tower is 56.960 meters at the base, and 39.148 meters at the main roof, according to the blueprint.
Hinterglemm Prayer Times Today: Fajr 2:55 AM, Dhuhr 1:15 PM, Asr 5:27 PM, Maghrib 9:05 PM & Isha 11:27 PM. Get accurate Hinterglemm Namaz & Salah times with 7 Days and 30 Days Timetable of Hinterglemm Prayer Timing.
* All Timings are Beginning Times
Narrated Nafi`: I saw Ibn `Umar praying while taking his camel as a Sutra in front of him and he said, I saw the Prophet doing the same.
(Chain of narration, translated from the Arabic and Urdu:) Sadaqa bin al-Fadl narrated to us: Sulaiman bin Hayyan informed us: 'Ubaidullah narrated to us from Nafi', who said: I saw Ibn 'Umar praying toward his camel, and he said: I saw the Prophet (peace be upon him) doing the same.
Narrated Ibn Abbas: I spent a night with my maternal aunt Maimunah. The Messenger of Allah (peace be upon him) came after the evening had come. He asked: Did the boy pray? She said: Yes. Then he lay down till a part of the night had passed as much as Allah willed; he got up, performed ablution and prayed seven or five rak'ahs, observing witr with them. He uttered the salutation only in the last of them.
Hinterglemm Prayer Times - Today, 11 Jul 2020 (19 Dhul-Qadah 1441), Hinterglemm namaz timings are Fajr 2:55 AM, Dhuhr 1:15 PM, Asr 5:27 PM, Maghrib 9:05 PM and Isha 11:27 PM. Find the azan and salat schedule and a 7-day timetable covering 11 Jul 2020 to 17 Jul 2020, along with the Qibla direction and customizable prayer-time calculation methods to calculate the proper time for your namaz.
You may also find here makrooh time updates, including Hinterglemm zawal time with complete start and end times. The sunset time in Hinterglemm, which is also the Iftar time, is 9:06 PM, and the dawn-break time, which marks the end of Sehri, is 2:54 AM. Sehri and Iftar times are also called Ramadan times during the month of Ramadan.
Today Namaz timings in Hinterglemm are as follow:
Fajr - 2:55 AM
Sunrise - 5:25 AM
Zuhr - 1:15 PM
Asr - 5:27 PM
Maghrib - 9:05 PM
Isha - 11:27 PM
It consists of 4 Rakat: 2 Farz and 2 Sunnat. Fajr time begins at dawn, 2:55 AM, and remains until the sun rises. Today, Hinterglemm Fajr namaz time ends at sunrise, 5:25 AM.
Ishraq Namaz Timing in Hinterglemm today begins at 05:40 AM and the end time is 09:20 AM.
It consists of 12 Rakat: 4 Sunnat, 4 Farz, 2 Sunnat and 2 Nafl. Zuhr time begins after zawal time ends, at 1:15 PM, and ends before Asr time starts at 5:27 PM.
It's an afternoon prayer consisting of 8 Rakat: 4 Sunnat and 4 Farz. Today Asr time begins at 5:27 PM and ends before Maghrib time starts at 9:05 PM.
It's a sunset prayer consisting of 7 Rakat: 3 Farz, 2 Sunnat and 2 Nafl. Today Maghrib time begins at 9:05 PM and ends before Isha time starts at 11:27 PM.
It's a night prayer consisting of 17 Rakat: 4 Sunnat, 4 Farz, 2 Sunnat, 2 Nafl, 3 Witr and 2 Nafl. Today Isha time begins at 11:27 PM and ends before Fajr time starts at 2:55 AM.
Paired with The Dallas Examiner as a summer intern through the Discover the Unexpected program, I elevated my writing skills by reporting on things happening in the Dallas community. Without the experience, I wouldn’t have a clue about the initiatives to assist those affected by domestic violence in Dallas, how Dr. Sorrell at Paul Quin College is implementing new programs for low-income students or the importance of Black blood donors to combat the sickle cell crisis.
My 10 weeks at The Dallas Examiner have opened my eyes to the role of the Black Press and community-based news. The rich history behind Black newspapers exemplifies the need for citizens to continue supporting Black newspapers in order to keep them afloat.
Beginning in 1827, John Russwurm and Samuel Cornish started the Freedom’s Journal in New York, and by the start of the civil war, 40 Black newspapers were up and running.
Since its inception, the Black Press has worked under the motto, “We wish to plead our cause. Too long have others spoken for us.”
Major news platforms have often ignored the stories of Black America, and if we happened to be included, it was often a demeaning narrative associated with crime. In mainstream media, we were always monsterized and made out to be poor, helpless and welfare-dependent. There was never news of what actually happened in our communities regarding marriages, strides in civil rights or contributions to academia. The erasure of the Black experience left our newspapers as one of few valuable resources available for Black news consumption.
In the 1960s, mainstream media outlets began hiring Black people on staff, bringing the Black audience along with them. Since then, but even more recently, the Black Press has seen a significant decline in readership and funding. Several publications have been forced to close completely or reestablish themselves as strictly online or electronic news publications.
Black media platforms are incredibly vital to our history; we must continue supporting them and pass down the importance of community-based journalism to future generations.
Often, we only see what’s featured on national news, the “watered down” and more sensationalized stories that generate the most views and likes. This style of rushed journalism fails to highlight the key messaging of stories that take place right here in our own backyards. Black men, women and children across the country should consume news from Black news media platforms.
The importance of the Black readership goes beyond the need for diversity in journalism. We need African Americans consuming media directly from Black publishers. Black papers are barely hanging on to their businesses, and there isn’t enough money to pay salaries for large editorial teams.
As a community, we need to be more responsible consumers and support the companies that advertise in Black newspapers. Advertisement is the primary source of income for the newspapers, and without it, they can’t continue pushing out news for our readership.
I have always been an avid reader of the New York Times, The Washington Post and The Wall Street Journal, which are great news organizations. Still, they aren’t telling the important stories that happen in South Dallas and other Black communities.
I hope, as a young and aspiring journalist, that corporations realize the need for advertisement in Black newspapers and the need for an increase in community readership. Otherwise, people may not understand the importance of these publications until they’re all gone.
Madison Williams is an intern at The Dallas Examiner and a student at Hampton University. | https://dallasexaminer.com/editorial/local-commentaries/why-the-black-press-needs-you-and-you-need-the-black-press/ |
Affiliations:
- Ural Federal University
- Issue: Vol 27, No 3 (2019)
- Pages: 68-80
- Section: Instrumentation, Metrology and Informative-measurings devices and systems
- URL: https://journals.eco-vector.com/1991-8542/article/view/21341
- DOI: https://doi.org/10.14498/tech.2019.3.%25u
Abstract
The surface reflection spectrum provides the information needed for calculating color coordinates in the colorimetric systems of the International Commission on Illumination (CIE), such as Lab or XYZ. These values determine the color sensations of the standard observer. Based on them, the accuracy of color reproduction is estimated, which is regulated by international standards for various industries. Simple and accurate methods of approximation of the spectra are required in the development of effective measuring and control systems for technological processes for obtaining artificially colored surfaces. The specified color of the surface can be obtained with a previously prepared ink mixture or by an autotypical printing method, i.e., by controlling the area of periodic micro-dots of four primary colors. At present, the methods of linear approximation of spectra for mixed ink systems are well studied. The principal component analysis (PCA) provides good accuracy of approximation using only 4–6 basis functions. Information about similar studies for autotyping systems was not found in the literature. Therefore, a comparative analysis of the approximation accuracy of spectral curves using the PCA for mixed and autotype systems is of great interest. The paper discusses the variants of the least-squares function approximation of the 24 spectra of the standard ColorChecker scale (X-Rite) and the 1944-field autotype test scale printed on a digital printing machine. Comparison is made by three criteria: color, mean-square, and maximum deviations. For the most accurate approximation of the reflection spectra, an individual approach to each technological system is required. The spectra of mixed-ink systems differ significantly from the spectra of autotype systems, and the latter are structurally simpler and better modelled. In autotype systems, a representative set can consist of several dozen spectra.
Apparently, it is impossible to create a universal set of basis vectors for approximating the reflection spectra of a wide range of industrial systems for obtaining a given color of surfaces.
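The least-squares PCA approximation described in the abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction only: the spectra are synthetic stand-ins (the ColorChecker and autotype measurements are not reproduced here), and all names and counts are our assumptions, not the paper's.

```python
import numpy as np

# Synthetic stand-in for measured reflection spectra:
# rows = color patches, columns = wavelength samples (380-730 nm, 10 nm steps).
rng = np.random.default_rng(0)
wavelengths = np.arange(380, 731, 10)
n_patches, n_bands = 24, wavelengths.size
# Smooth random spectra built from a few broad Gaussian bumps per patch.
centers = rng.uniform(400, 700, (n_patches, 3))
spectra = np.clip(
    sum(np.exp(-((wavelengths - c[:, None]) / 60.0) ** 2) for c in centers.T) / 3.0,
    0.0, 1.0,
)

# PCA basis: mean spectrum plus leading right singular vectors of the
# mean-centered data matrix.
mean = spectra.mean(axis=0)
_, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)

def approximate(s, k):
    """Least-squares approximation of spectrum s with k PCA basis functions."""
    basis = vt[:k]                  # (k, n_bands), orthonormal rows
    coeffs = basis @ (s - mean)     # projection onto the basis
    return mean + coeffs @ basis

k = 5
errors = [np.sqrt(np.mean((s - approximate(s, k)) ** 2)) for s in spectra]
print(f"mean RMS deviation with {k} basis functions: {np.mean(errors):.4f}")
```

Because the basis rows are orthonormal and nested, the per-spectrum error can only shrink as more basis functions are added, which is why the abstract can meaningfully compare accuracy at 4–6 basis functions across ink systems.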
About the authors
S. Yu. Arapov, Ural Federal University
Author for correspondence. | https://journals.eco-vector.com/1991-8542/article/view/21341 |
Happy Friday!
Today, we are wrapping up our Mamma Mia auditions with callbacks starting at 4:00 pm in the theater. The students have been incredible and just delightful throughout this whole process. Bravo to them!
UPCOMING CALENDAR THINGS:
- The cast & crew list for Mamma Mia will be posted no later than this Sunday, January 16, 2022.
- We have an optional parent informational meeting, in person and over Zoom, for those of you who will be embarking on this journey with us. That meeting is on Tuesday, 01.18.2022. Same time, 3:30 pm. Same place, La Salle Theater | Band room. The Zoom link is copied below:
Topic: Mamma Mia Informational Meeting
Time: Jan 18, 2022 03:30 PM Pacific Time (US and Canada)
Join Zoom Meeting
https://us02web.zoom.us/j/88290890213?pwd=c1ovWW8wVFo2VEN4WG9JNU5MMWlGdz09
Meeting ID: 882 9089 0213
Passcode: HD2KEa
- The first vocal rehearsal and cast & crew meeting will be in the choir room on Thursday, 01.20.2022 from 3:30 – 5:30pm. We’ll hand out librettos and rehearsal schedules, answer some questions, warm up and hop right into our first vocal rehearsal!!! After that first rehearsal, there will not be much in the way of Mamma Mia until after our winter production, Gifted, directed by senior Aaron Leonard-Graham, closes on the second weekend of February. You should all come and see that show!
- We are in the midst of our 4th annual scriptwriting competition which we do in conjunction with our annual Original Works Festival celebrating La Salle student writers and artists’ voices.
- Speaking of meetings, there is an in-person meeting for those joining us in London on Monday, 01.24.2022, after school from roughly 3:30 – 4:30pm. I’m hoping it will be a good opportunity for folks to meet each other, hang out, learn some more about our trip and have some conversations about what we’d like to see and do. There is no Zoom link for that meeting, so if you will be unable to attend and would like the information, the notes will be available and posted after the meeting. A couple folks have reached out to me about joining us in London. That’s fantastic! You are welcome! It will be a life-affirming experience. To that end, we are about a month out from the deadline for signing up. Reach out to me if you have additional questions.
- Finals are coming up in a couple of weeks. Reach out to me if you have any questions or concerns in regard to finals or late work.
- Lots going on and never a dull moment.
- I hope each and every one of you has a wonderful weekend. Be safe. Take care. | https://lspreptheater.org/2022/01/14/news-you-can-hopefully-use-01-14-2022/ |
Parental socioeconomic status and risk of offspring autism spectrum disorders in a Swedish population-based study.
Epidemiological studies in the United States consistently find autism spectrum disorders (ASD) to be overrepresented in high socioeconomic status (SES) families. These findings starkly contrast with SES gradients of many health conditions, and may result from SES inequalities in access to services. We hypothesized that prenatal measures of low, not high, parental SES would be associated with an increased risk of offspring ASD, once biases in case ascertainment are minimized. We tested this hypothesis in a population-based study in Sweden, a country that has free universal healthcare, routine screening for developmental problems, and thorough protocols for diagnoses of ASD. In a case-control study nested in a total population cohort of children aged 0 to 17 years living in Stockholm County between 2001 and 2007 (N = 589,114), we matched ASD cases (n = 4,709) by age and sex to 10 randomly selected controls. We retrieved parental SES measures collected at time of birth by record linkage. Children of families with lower income, and of parents with manual occupations (OR = 1.4, 95% CI = 1.3-1.6) were at higher risk of ASD. No important relationships with parental education were observed. These associations were present after accounting for parental ages, migration status, parity, psychiatric service use, maternal smoking during pregnancy, and birth characteristics; and regardless of comorbid intellectual disability. Lower, not higher, socioeconomic status was associated with an increased risk of ASD. Studies finding the opposite may be underestimating the burden of ASD in lower SES groups.
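As a reminder of how an odds ratio and its 95% confidence interval, like the OR = 1.4 (95% CI = 1.3–1.6) reported above, are typically derived, here is a sketch using the standard log-odds-ratio (Woolf) method. The 2×2 counts below are invented purely for illustration; they are not the study's data.

```python
import math

# Illustrative 2x2 table (made-up counts, NOT the study's data):
# rows = parental exposure (manual occupation vs. not), columns = case/control.
a, b = 900, 3809    # ASD cases: exposed, unexposed
c, d = 6500, 40590  # controls:  exposed, unexposed

# Odds ratio and Woolf's standard error of log(OR).
odds_ratio = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

# 95% CI: exponentiate log(OR) +/- 1.96 standard errors.
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI = {ci_low:.2f}-{ci_high:.2f}")
```

In a matched case-control design like this one, the published estimate would come from conditional logistic regression with covariate adjustment rather than a raw 2×2 table, so this sketch only shows the underlying arithmetic.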
Inside: Why photo organizing is important to me and how I hope to help you through my blog.
My Childhood and Old Photo Formats
My childhood is on slides. I remembered this as I cleaned out my parents’ home in 2015.
My mother had unexpectedly died in 1994, and my father stayed in the only home they ever purchased for another 20 years. I spent most of 2014 in that home taking care of him in his last days.
It amazed me how much hadn’t changed – kitchen utensils, blankets, and many other everyday items were all where they had been when I left for college in 1975.
After my mother passed, the house filled up with computer parts, gadgets, and other nifty things my father, a retired Professional Engineer, found entertaining. But in 2011 he ended up in the hospital with pneumonia, and I traveled home to take care of him. I had to navigate through each room in the house by walking a cow path through the knee-high piles of junk. That brush with his own mortality inspired him to hire help and get the junk count down, and although they made great progress, there was still a lot of stuff to go through after he died.
Actual photograph of my father’s junk
I discovered the boxes of slides in the closet of my old bedroom. Boxes and boxes and boxes.
As I packed them up to take to my home, I remembered when my father purchased a slide projector and screen as a Christmas present for my mother. She had laid down the rule that gifts were not to have motors, but he couldn’t resist. He put my sister and me to work hiding 20 five-dollar bills throughout the components in the box. That was supposed to guarantee his forgiveness, but I don’t remember whether it did.
I wasn’t even looking at the slides, and yet, they were summoning memories.
The Problem Continues
Things aren’t so different for my daughter, now grown. Only her childhood is not on slides—it’s on videotape. The first four years are, anyway. Then I discovered scrapbooking, and I reverted to taking photographs of her activities so I could create scrapbooks with them. I wanted her to be able to revisit her childhood from a different perspective as she grew up, and even later, maybe when she had children of her own.
I don’t have any grandchildren yet, but if I did, their lives would be captured in digital photos. They might be printed, or not. Depending on their ages, they might be on floppy disks or CDs or Facebook.
Technology Has Changed
One problem today is that technology has changed so much and so fast, and because of that, we need photo organizing more than ever. If you are over a certain age, you likely have printed photos, digital photos, slides, videotapes, DVDs, and more. Like me, you may have inherited photos from your own childhood, or even generations before.
We are drowning in a sea of photo memories, and apart from printed photos (and even those only if we know where they are and they are organized), we often can no longer access those memories. They take up a lot of space, and we’re pretty sure our kids don’t want them. Sometimes we’re VERY sure our kids don’t want them!
But I don’t think it’s the memories they are rejecting as much as the physical stuff. They don’t want the boxes or even the albums or scrapbooks when they are so used to a more minimalist way of viewing memories – scrolling through photos on their phones.
Why We Take Pictures
When I attended my first scrapbooking class in 1994, it reminded me how important printed photos are, and I returned to still-photo taking. When, just a few weeks later, I had to go through my parents’ bags of photos to create something for my mother’s memorial service, so many of the photos of her later life were meaningless to me because there was nothing written on the back. I was overwhelmed with the task.
When we take pictures, we usually have a reason:
- to remember things like important people and events,
- to share something that means something to us – the picture is actually for the benefit of others,
- or to record something beautiful or notable, something that touched us in some way.
We don’t mean to leave them in the developing envelope, the box, the disc, or even the phone or computer to lie unseen and unappreciated. In the moment, our reason is important, but the less we revisit those photos, the less we remember the reason.
I believe in the power of photographs, as the old Creative Memories mission used to say, to preserve the past, enrich the present, and inspire hope for the future. And that’s why I became a photo organizer. I want to help people reconnect with their memories and to preserve and pass on their traditions and history. And I want to bring all those photo memories into the current century, using current technology, so they can get in front of the very people they are meant to inspire.
So I Started a Photo Organizing Blog
I started this blog so I can reach more people, help more people, and ensure that precious memories are saved from the ravages of time, changes in technology, and even the decluttering whims of millennials.
With this blog I hope to:
- Provide helpful tips and information
- Review products and make recommendations
- Share important photo industry news
- And, most important, to connect with you, my readers
And so if your childhood is also on slides (or videotape or Super8 or whatever) we can get through this together.
Your Turn
Let’s connect right now! Please comment below and tell me about your childhood photos and what format they are in. Tell me your plans for them and if you’re having any struggles around them.
This post may contain affiliate links. This means if you purchase from a link, I may receive a small commission at no additional cost to you. Thank you for supporting my business! | https://fancysphotosolutions.com/welcome-to-my-photo-organizing-blog/ |
The Pastry Box: Moments
San Francisco
I was three years old, lying horizontally at the top of the stairs of the first home I had with my parents. None of my siblings were born yet, and I remember very little else about life before my brothers and sister being somewhere nearby. The carpet was grey, with a hint of purple in dull light. I rolled down each grey step, one at a time. Except that now, twenty-five years later, I don't really remember it, because my perspective is from the foot of the stairs looking up; my vision of that moment shifted into the third person. It's not what actually happened, and I can no longer really be sure what I did there. But, whilst it's fuzzy, this momentary event remains special and preserved. Thinking of it makes me happy.
So, when is something really momentary on the internet? Time was that random, accidental, delightful events would happen to us in the world, or in conversation, or by chance where we stand, and be just that; moments to be remembered.
We are now intertwined with a medium on which everything is stored publicly, in multiple formats. A redundant copy is made when anybody even looks at something on the internet. Can something still be momentary if it exists across hundreds of computers, for indeterminate timespans?
Simultaneously, more of our casual interactions have moved online. Conversations filled with quips and jokes and sparks of serendipitous chemistry don't just happen in person any more, they happen in conversation and commentary in text, and in reply to the sharing of photographs. Each person participates in this online experience at a slightly different time, delayed at least by network latency, and perhaps a little longer whilst reading something else on another page. Hundreds of others will relive your moment when they read it in the minutes, hours, and days following. When you catch a silhouette against sunset, reach for your camera to capture it. You won't need to recount the story of your child's first steps, because you can replay it.
Pics, or did it not happen?
Before, moments were remembered. You carry an image in your head and think back to it. When you later recognise its importance you might write about your memory, scrapbook it, or share it with others through stories. You might refer to a photograph from near that place or time, or an artefact of another memory, in support of your account. Depending on how much time has passed, the accuracy of the memory of the moment will change, decay or be embellished. The moment remains true.
Now, the internet has enabled us to preserve not memories, but moments literally, first hand, in real time.
A strange thing happens when a web service shuts down, or sells up, or alters its business model. The preservation of our literal moments is threatened, and we may find ourselves with only the memories left. Entire chunks of our lives could change format in an instant. Are they remembered as well as they might be if they weren't captured so precisely to begin with?
We now rely on the web to preserve the moments of our lives in a way that we never could before. We expect them not only to hold on to our moments, but to recall them for us, too.
You take a photograph and you post it to the web. How often do you revisit that photograph later and feel inspired to write about the moment in another place? When that service goes away, will your memory go with it?
Are our architectural expectations of the web—our demands for archival, preservation, and export—at odds with the established human way of preserving meaningful moments by recording our memories instead? Without meaning to offer excuses or defence to businesses who are careless with their users data: Is it even right for us to refer to them as the canonical record of our lives?
Do services set our expectations correctly? What if a service came along declaring that actually, the content within was momentary, and that if you wanted to preserve it you would need to create something new? Think of This is My Jam, whose posts and the commentary they inspire disappear after seven days. Or 4chan, where posts simply drop off the page when the hive mind moves on.
The pressure for services hosting our creative works to take good care of that data must not relent. Theirs is a responsibility that needs to be better honoured. But at the same time, shouldn't we also preserve what matters most in the ways we always used to? By recording memories, not just moments.
Links
To share this entry, or reference it in commentary of your own, link to the following:
- Permalink: https://benward.uk/blog/pastry-april
- Shortlink: https://bnwrd.me/1i2P00
Nature is good for our mental health
Submitted by Martin Williams on Tue, 2010-05-11 22:01
Recent news from Environmental Science & Technology includes:
[quote]even small doses of outdoor exercise can have remarkable effects on mental health, report Jules Pretty and Jo Barton of the University of Essex (U.K.) in this issue of ES&T (Environ. Sci. Technol. DOI 10.1021/es903183r). In a meta-analysis of 10 studies, they found that getting outside—and moving—for as little as five minutes at a time improved both mood and self-esteem. Exercise near a body of water had the biggest effect.
“It shows that green exercise benefits pretty much everybody and that the effect sizes are pretty substantial,” says William C. Sullivan, the president of the Environmental Council at the University of Illinois at Urbana-Champaign, who was not involved in the study.
...
the group has studied the effects of different types of green exercise on a variety of populations, from gardening by visitors to university farms to walking and sailing activities for young offender groups, as well as walking by members of urban flower shows. This new analysis reviewed 10 of these studies, which involved a total of 1252 participants. In each study, mood and self-esteem were measured using two widely accepted scales. All types of green exercise led to improvements in the mental health indicators. Most surprising to the researchers was that the strongest response was seen almost immediately.[/quote]
[quote]By chance, a small hospital in Pennsylvania became the setting of a remarkable experiment. Scientist Roger Ulrich noticed some surgery patients recovered in a room with a view of leafy trees, while others recovered in an identical room, except its windows faced a brick wall.
Ulrich decided to test whether the view made any difference in the outcome for patients. He looked back at records on gall bladder surgery over a period of 10 years. The results proved enlightening.
Patients with the tree view were able to leave the hospital about a day earlier than those with a wall view, the study revealed. Patients with trees in sight also requested significantly less pain medication and reported fewer problems to nurses than wall-view patients. Contact with nature, even as limited as a view through a window, enhanced recovery from illness.
Researchers have learned much about the restorative effects of nature since Ulrich's landmark study appeared in 1984. Studies repeatedly have shown that contact with nature can lower blood pressure, reduce anxiety, relieve stress, sharpen mental states and, among children with attention and conduct disorders, improve behavior and learning. Regardless of cultural background, people consistently prefer natural settings over man-made environments.
"We know that exposure to natural environments has clearly beneficial physiological effects," says Portland psychologist Thomas Joseph Doherty.
...
Doherty says that part of the answer supplied by ecopsychology is to validate that an emotional connection to nature is normal and healthy. Doing so will help the environmental movement be more effective, he says, by appealing to positive ecological bonds rather than promoting conservation based on messages of fear or shame.[/quote]

The best natural healer turns out to be nature.
Sample preparation plays an essential role in most biochemical reactions. Raw reactants are diluted to solutions with desirable concentration values in this process. Since the reactants, such as infants' blood, DNA evidence collected from crime scenes, or costly reagents, are extremely valuable, their usage should be minimized whenever possible. In this paper, we propose a two-phased reactant minimization algorithm (REMIA) for sample preparation on digital microfluidic biochips. In the former phase, REMIA builds a reactant-minimized interpolated dilution tree with specific leaf nodes for a target concentration. Two approaches are developed for tree construction; one is based on integer linear programming (ILP) and the other is heuristic. The ILP one guarantees to produce an optimal dilution tree with minimal reactant consumption, whereas the heuristic one ensures runtime efficiency. Then, REMIA constructs a forest consisting of exponential dilution trees to produce those aforementioned specific leaf nodes with minimal reactant consumption in the latter phase. Experimental results show that REMIA achieves a reduction of reactant usage by 32%-52% as compared with three existing state-of-the-art sample preparation approaches. Moreover, REMIA can be easily extended to solve the sample preparation problem with multiple target concentrations, and the extended version lowers reactant consumption even further.
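To make the dilution model concrete, here is a minimal sketch of the 1:1 (two-way) droplet-mixing model on which dilution trees in digital-microfluidic sample preparation are built. This is our own simplified illustration, not REMIA's algorithm: the bit-string encoding, function name, and example targets are assumptions made for this sketch.

```python
from fractions import Fraction

def simulate_mix_path(bits):
    """Simulate a chain of 1:1 droplet mixes on a digital microfluidic chip.

    `bits` is read left to right: the first character picks the starting
    droplet (1 = raw reactant at concentration 1, 0 = buffer at 0); each
    later character mixes the current droplet 1:1 with a fresh reactant (1)
    or buffer (0) droplet, averaging the two concentrations. Returns the
    final concentration and the number of reactant droplets consumed.
    """
    conc = Fraction(int(bits[0]))
    reactant = int(bits[0])
    for b in bits[1:]:
        conc = (conc + Fraction(int(b))) / 2
        reactant += int(b)
    return conc, reactant

# Two different mix paths that both reach a 3/8 target concentration,
# each consuming two droplets of raw reactant:
print(simulate_mix_path("0110"))  # buffer, then +reactant, +reactant, +buffer
print(simulate_mix_path("1010"))  # reactant, then +buffer, +reactant, +buffer
```

Each mix halves the granularity, so a path of d mixes reaches concentrations of the form k/2^d; an algorithm like REMIA chooses among such alternative paths, and shares intermediate droplets across targets, so that the total reactant count is minimized.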
- Original language: English
- Article number: 7076584
- Pages (from-to): 1429-1440
- Number of pages: 12
- Journal: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
- Volume: 34
- Issue number: 9
- State: Published - 1 Sep 2015
Keywords
- Biochip
- digital microfluidic biochip (DMFB)
- dilution
- dilution tree
- reactant minimization
- sample preparation. | https://scholar.nctu.edu.tw/en/publications/reactant-minimization-in-sample-preparation-on-digital-microfluid |
"We are fully within the energy transition and I would say that we are already in phase two: the friction from the first detachment has passed, today we are at the deployment of technologies." Giacomo Rispoli Executive Vice President and Director of Eni Portfolio Management & Supply and Licensing spoke at the Sarteano Summer School (SI) promoted by NextChem and Maire Tecnimont together with AIDIC, the Italian Association of Chemical Engineering. During a pause in the work, we asked him for some thoughts on the subject of energy transition, the heart of the NextChem mission.
"It is now clear to everyone that a reduction in CO2 cannot be ignored", he tells us; "this attention is slowly moving through the supply chain, but sustainable processes must also have an economic driver: if there is profitability for the entrepreneur, they work, otherwise they do not." The return on investment in the short term as an obstacle to a higher and longer-term view, therefore? Partly yes, but not only.
"The innovations being studied today will replace the use of oil tomorrow, this is certain," says Rispoli. "However, it is not a simple challenge: on our planet, a billion people still do not have access to electricity; in some areas of the world, the use of materials such as plastics still creates the minimum well-being that we experienced decades ago; in others, it is necessary to respond to a growing demand for mobility, which means quality of life, development. Globally there is still a huge gap."
"The transition will last another 20-30 years, during which fossil raw materials will still play a key role in allowing a part of the population to improve their level of development and well-being by reducing the gap with the most advanced societies, without the increased costs of the circular economy, which is even more expensive with respect to the linear economy." But what then is the role that the already advanced economies must have to avoid a two-speed transition?
"To facilitate the process of reducing inequality by minimizing the risks to the environment and the planet, countries with advanced economies must share technologies to allow less developed countries to skip intermediate steps and go to the final ones, without slowing down their path of social and economic development. Countries with an advanced economy have a strategic role in defining a path to create culture and know-how, and besides, the largest part of the planet's population needs this culture and know-how to accelerate evolutionary steps. The combination of key indicators that will guide us in this global transition from the linear economy to the circular economy is the production of CO2 per capita and the quality of life index."
Assuming that there may be a global direction of this great revolution that is called the energy transition, and that this can take place guided by a 'dashboard' of indicators, how can this great project of production and research transfer be carried out? "All the research carried out in the past in the linear economy is fundamental today to develop the circular economy," said Rispoli, "without the results obtained so far, we would not be able to face the challenge of sustainability, of having to totally rethink production and the economy. Still too often, however, research is considered reactive; instead, it must be seen as proactive, it must be supported by a medium/long-term vision. We need to look far ahead, we need to make regular investments over time." Something that requires stability, vision, support.
"Not all companies have the same approach to innovation; from this point of view, Eni is an example: in the case of our bio-refineries the research allowed us to invent them and create a new business. The refining industry is second in Europe for number of graduates: this know-how should be dedicated to green technologies. Companies like ENI, with great internal know-how, must be able to invest in reconversion. It is easier to reconvert (see the example of Eni bio-refineries) than to start from scratch" says Rispoli, who concludes: "Investing in research is easier for large companies that have large organizations and capital, that is why large companies are an important asset for a country: it means having the solidity of being able to look far ahead." | https://nextchem.it/news/dashboard-guide-energy-transition |
The puzzle of the high maternal and child mortality rate in Africa, especially for children under the age of five, remains a major concern even as all efforts are made to reverse the trend.
The figures are bleak: 1 in 12 children in sub-Saharan Africa dies before turning 5, and more than 430 women die each day from preventable causes related to pregnancy and childbirth, according to the World Health Organisation (WHO).
Infections related to the delivery process, and communicable diseases such as diarrhoea, pneumonia and malaria, are the leading causes of these deaths. The high number of maternal deaths reflects inequities in access to health services.
To help save lives, some companies have started tapping into new technologies that can diagnose health conditions and diseases more efficiently and accurately than current practices using standard equipment.
One such technology is the Vscan, a non-invasive ultrasound device the size of a smartphone, which provides real-time high-resolution images that can be used in medical fields such as cardiology and obstetrics and gynaecology.
Created by General Electric, a US conglomerate corporation, and launched at the World Health Assembly in Geneva last May, the Vscan, with its handheld size and easy-to-navigate touch screen, can come in handy in rural areas in Africa where health facilities are under-equipped.
The new invention can be a valuable asset in prenatal and antenatal care for mothers who do not have access to larger health care facilities.
WHO recommends that women have at least four antenatal visits to detect any complications in pregnancies, and many in rural areas do not have the means or the access to facilities to undertake even a single visit. One ultrasound scan before 24 weeks’ gestation (known as an early ultrasound) is crucial to estimate gestational age, improve detection of fetal anomalies and multiple pregnancies, reduce induction of labour for post-term pregnancy, and improve a woman’s pregnancy experience.
A Vscan machine retails for about $10,000, as opposed to the traditional cart ultrasound machine, which can cost $250,000 or more.
The device has been well received, especially by pregnant women and health experts, because it can be instrumental in finding birth defects in foetuses, and can help in monitoring high-risk pregnancies as well as determining the position of a baby before birth.
Towards the end of this year, GE officials travelled to Nigeria to assist more than one thousand midwives and health care providers with training over the course of three years.
“Through the availability of relevant technologies, such as the Vscan access and comprehensive training, we aim to make a meaningful contribution to primary and referral care by building capacity, enhancing skills and driving better outcomes for Nigerian mothers and babies and for their communities,” says Farid Fezoua, the president and CEO of GE Healthcare Africa. | https://www.un.org/africarenewal/magazine/december-2016-march-2017/africa-wired-portable-ultrasound-device-tackle-child-mortality |
TECHNICAL FIELD
The invention is directed to contact lens cases, and in particular, to the delivery of ophthalmic active agents to the eyes by means of contact lenses.
BACKGROUND
Current methods of delivering ophthalmic active agents, i.e., both pharmaceutical and non-pharmaceutical agents, to treat ocular disorders and diseases are somewhat inefficient and cumbersome. For example, ninety percent of current ophthalmic active agents are provided in drops or ointments, which typically have low absorption rates. In fact, usually less than seven percent of the applied active agent is absorbed by eye tissue. Due to low absorption rates, drops and ointments must include high dosages of active agent(s), and multiple dosages often must be applied in order for the active agent(s) to be effective. Additionally, side effects, such as heart problems, can result when using eye drops because the active agent(s) in the drops can seep into the nasal cavity and then into the bloodstream and other tissues.
Convenience for the patient, and consequently, patient compliance in administering the ophthalmic agents is also an issue to be considered. For example, a person may need to transport one or more containers (such as bottles or tubes) containing eye drop solution or eye ointment to ensure that appropriate treatments are applied at specific times during a day. Also, the person will likely have to administer the drops every two to four hours because of the low absorption rates and tear wash out. Not only is this an inconvenience, but the person may forget or miss one or two treatments each day.
It is therefore desirable to provide a more effective and convenient system for delivering ophthalmic active agents to the eye. More specifically, it is desirable to provide a system that delivers ophthalmic agents to the eye via a contact lens.
SUMMARY
Contact lens cases for delivering ophthalmic agents to contact lenses are described. The ophthalmic agents are then delivered to the eyes by way of the contact lenses, as the agents are introduced to the contact lenses prior to insertion. According to one embodiment, a contact lens case comprises a container defining a reservoir for holding a contact lens solution and a contact lens, and a first lid assembly for attachment to the container. The lid assembly includes a lid attachable to the container for closing the reservoir, and a dispenser pack that is attachable to the lid. The dispenser pack comprises at least one compartment containing a treatment unit for dispensing into the reservoir, which also contains a lens solution. The treatment unit is dispensed into the reservoir for mixing or dissolving into the solution. A contact lens is allowed to soak in the reservoir to absorb or retain at least a portion of the solution and a quantity of an ophthalmic agent from the treatment unit. The contact lens is subsequently placed on an eye for administering at least some of the ophthalmic agent to the eye for treatment.
According to another embodiment, a contact lens case includes a container defining a reservoir for holding a solution and a contact lens, and a one-piece lid for attachment to the container and closing the reservoir. The lid includes at least one compartment containing a treatment unit for dispensing into the reservoir and delivery of a quantity of an ophthalmic agent from the treatment unit to the contact lens.
Another embodiment includes a contact lens case with two containers and two lid assemblies or lids corresponding to the two containers, thereby enabling the storage of two contact lenses and the treatment of two eyes at the same time.
Another embodiment includes a contact lens kit having a contact lens case and multiple dispenser packs containing treatment units. The packs are individually attachable to the lid(s) of the case, thereby allowing dispenser packs that have been depleted of treatment units to be replaced without replacing the case.
According to further embodiments, a contact lens kit includes a case having multiple lids containing treatment units. The lids are individually attachable to the container(s) of the case, thereby allowing lids that have been depleted of treatment units to be replaced without replacing the entire case.
Further features and advantages of the invention will be apparent upon reference to the following description, appended drawings, and claims.
DETAILED DESCRIPTION
A contact lens case 10 according to one embodiment is illustrated in FIGS. 1-7. The case 10 includes a container 20 and a removable lid assembly 30 for attachment to the container 20. Referring to FIG. 2, the lid assembly 30 includes a lid 40 and a dispenser pack, or blister pack, 50 sized and configured to fit inside the lid 40.
Still referring to FIG. 2, the container 20 may be constructed of a rigid material such as plastic, and includes a bottom wall 22 and a generally annular side wall 24 extending from the bottom wall 22. The bottom wall 22 and side wall 24 define an internal reservoir or cavity 2, which is suitable for holding solution 4 and a contact lens 6 in the solution 4. A lid engaging portion 26 is defined at the top of the side wall 24. A plurality of external threads 28 are provided on an external side 27 of the lid engaging portion 26. The container 20 may be constructed of plastic, however other materials providing suitable weight, durability and rigidity may be used.
Referring to FIGS. 2-4, the lid 40 includes a top wall 42 and an annular side wall 44 extending from the top wall 42. The top wall 42 includes a plurality of openings 48. A plurality of internal threads 46 are provided on an internal surface 45 of the side wall 44, and are designed to engage the external threads 28 to allow the lid 40 to be threaded onto and off of the container 20 to cover and uncover, respectively, the reservoir 2. A retaining ledge 49 extends from the internal surface 45. The retaining ledge 49 may comprise a continuous annular projection or a plurality of spaced projections. The lid 40 may be constructed of plastic or another material that has a suitable weight, durability and rigidity.
According to alternate embodiments (not shown), the threads 46 on the lid 40 and the threads 28 on the lid engaging portion 26 of the container 20 may be eliminated, and the lid 40 may be configured to fit onto the lid engaging portion 26 by being pressed or snapped onto the lid engaging portion 26 with an interference or friction fit.
As shown in FIGS. 5-7, the dispenser pack 50 is a substantially disc-shaped member formed by an upper sheet of material 52 attached to a lower sheet of material 54. The pack 50 includes a plurality of bubble-shaped compartments, or blisters, 56 that are defined by the upper sheet of material 52 and the lower sheet of material 54. More specifically, the upper sheet of material 52 defines the top walls 57 of the compartments 56, while the lower sheet of material 54 defines the bottom walls 58 (FIG. 7) of the compartments 56. The top walls 57 may be convex and the bottom walls 58 may be substantially flat, thereby defining an interior volume 60 of each compartment 56. The bottom walls 58 may include patterns of weakness 59. The patterns of weakness 59 may comprise nicks, scores, cuts, perforations or combinations thereof. In the embodiment shown, the patterns of weakness 59 are circular in shape, however patterns of different shapes and sizes may be provided. The upper sheet of material 52 may be constructed of a deformable material such as plastic, for example, and preferably a translucent or transparent plastic. The lower sheet of material 54 may be rupturable, and may be constructed of a material that is impermeable to water, such as foil or plastic.
Each compartment 56 contains a treatment unit 8 in its interior volume 60. The patterns of weakness 59 facilitate rupturing of the bottom walls 58 to dispense the treatment units 8 from the compartments 56.
The pack 50 may further include an annular gasket 51 positioned at the outer periphery of the pack 50 for providing a seal between the lid 40 and the container 20. The gasket 51 may cover portions of the upper and lower sheets of material 52, 54, and/or the outer peripheral edge of the pack 50. The gasket 51 may be constructed of a resilient material such as rubber, silicone, or plastic, for example.
Referring to FIGS. 2 and 4, the lid assembly 30 is assembled by inserting the pack 50 through the bottom of the lid 40 (in direction U, shown in FIG. 2) inside the side wall 44, securing the peripheral edge of the pack 50 above the retaining ledge 49, aligning each compartment 56 with a respective opening 48 in the top wall 42, and fitting the top walls 57 of the compartments 56 through the openings. If the gasket 51 is provided on the pack 50, the gasket 51 forms a tight seal between the pack 50 and the lid 40. When the pack 50 is inserted into the lid 40, the top walls 57 of the compartments 56 protrude through the openings 48 and are at least partially exposed at the top of the lid 40, and the bottom walls 58 are exposed at a bottom side of the lid 40.
Once the lid assembly 30 is assembled, the lid assembly 30 may be threaded or pressed (where the lid 40 and container 20 are threadless) onto the container 20 after the solution 4 and the contact lens 6 are placed in the reservoir 2 in order to close the case 10 and prepare the case 10 for delivering a quantity of a treatment agent from a treatment unit 8 to the contact lens 6. A treatment unit 8 may be individually dispensed from a compartment 56 by pressing the top wall 57 of the compartment 56 downward (in direction D, shown in FIG. 1) such that the top wall 57 and the unit 8 move downward to thereby rupture the bottom wall 58, and release the unit 8 into the reservoir 2. After the unit 8 is released into the reservoir 2, the lens 6 is allowed to soak in the solution 4 for a period sufficient to allow the unit 8 to dissolve into or mix with the solution 4, and to allow at least some of the solution 4 and at least some of the agent from the unit 8 to be absorbed or retained by the contact lens 6. The time required for soaking will vary depending on the composition of the solution 4, the lens 6, and the form and composition of the unit 8. After soaking, the contact lens 6 may be placed in a user's eye for treating the eye with the agent.
It should be noted that the treatment unit 8 may also be dispensed from a compartment 56 prior to securing the lid assembly 30 to the container 20, while the lid 40 is detached from the container 20 and held thereabove.
Treatment units 8 may be administered to the contact lens 6 in the above-described manner at daily intervals, or at any other prescribed intervals, depending on the form and composition of the unit 8 and the condition being treated. In order to remind a user when a treatment is due, the lid 40 and/or dispenser pack 50 may include treatment identifiers or labels I (FIGS. 1, 3 and 5) on its outer surface, in the form of words, abbreviated words (such as Mon, Tue, Wed, Thur, Fri, Sat, Sun), numbers or symbols indicative of treatment days, treatment numbers or treatment times. Subsequent treatments are applied to a lens 6 by dispensing units 8 from unused compartments 56 at the prescribed time intervals. Where the upper layer 52 of the pack 50 is translucent or transparent, it is easy for a user to determine visually whether a compartment 56 is empty or whether a compartment 56 contains any undispensed units 8. In the embodiment shown, the pack 50 includes seven compartments, which is suitable for applying daily treatments over the course of one week. Once the units 8 have been dispensed from all of the compartments 56, a new pack 50 may be inserted into the lid 40.
The described contact lens cases can also be one component of a drug delivery kit. For example, a kit would include a case 10 and a plurality of packs 50, such as four packs containing seven compartments 56. The four packs would provide one month of eye treatments before the case 10 is discarded and replaced. Proper hygiene is promoted by limiting the number of packs 50, and therefore the number of eye treatments, provided with the case 10. It should be understood, however, that any number of packs 50 may be provided, and each pack 50 may be provided with any number of compartments 56, as desired.
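The pack-and-kit arithmetic described above (seven labeled compartments per pack, four packs per month of daily single-eye treatments) can be sketched as a simple state model. This is purely illustrative — the class and method names are not part of the disclosed device:

```python
from dataclasses import dataclass, field

# Day labels as used on the pack in the embodiment described above.
DAY_LABELS = ["Mon", "Tue", "Wed", "Thur", "Fri", "Sat", "Sun"]

@dataclass
class DispenserPack:
    """One blister pack: seven labeled compartments, each holding one unit."""
    dispensed: set = field(default_factory=set)

    def dispense(self, day: str) -> bool:
        # Rupture the compartment for `day`; a compartment can be used once.
        if day not in DAY_LABELS or day in self.dispensed:
            return False
        self.dispensed.add(day)
        return True

    def empty(self) -> bool:
        # True once all seven compartments have been ruptured.
        return len(self.dispensed) == len(DAY_LABELS)

# A single-eye kit: four weekly packs cover about one month of daily treatments.
kit = [DispenserPack() for _ in range(4)]
total_units = len(kit) * len(DAY_LABELS)
print(total_units)  # prints 28
```

Replacing a depleted pack then corresponds to swapping an `empty()` pack for a fresh `DispenserPack`, mirroring how a depleted blister pack is removed from the lid and replaced.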
FIG. 8 shows a contact lens case 100 according to another embodiment. In FIG. 8, reference characters repeated from the description of FIGS. 1-7 indicate similar features to which the description of FIGS. 1-7 applies. The case 100 includes a base 200 having two containers 20 connected to each other by a center span 29. Each container 20 may contain solution 4 and a contact lens 6 in a reservoir 2. A pair of lid assemblies 30 are provided for attachment to the containers 20 and dispensing treatment units 8 from compartments 56 in the same manner discussed in the embodiment of FIGS. 1-7. According to this embodiment, in order to allow for approximately one month of daily eye treatments, the case 100 may be provided with eight packs 50 (four for each eye/lid assembly 30), each containing seven compartments 56. As in the previous embodiment, any number of packs 50 may be provided, and the packs 50 may include any number of compartments 56.
FIGS. 9 and 10 show a lid assembly 130 according to another embodiment, wherein reference numbers repeated from the lid assembly 30 of the previous embodiment indicate similar features. The lid assembly 130 includes a lid 140 and a dispenser pack 50. The lid 140 is similar to the lid 40 of the embodiments of FIGS. 1-8, except that the lid 140 has a top wall 132 with a single central opening 148 through which all of the compartments 56 are exposed on a top side of the lid 140. The lid assembly 130 is assembled in a manner similar to the way in which the previously described lid assembly 30 is assembled, except that the top walls 57 of all of the compartments 56 are inserted through the opening 148.
FIGS. 11-13 show a lid 240 according to another embodiment. The lid 240 may be used in place of the lid assemblies 30, 130 of the previously described embodiments. The lid 240 includes a top wall 242 and an annular side wall 244 extending from the top wall 242. The top wall 242 may be formed from an upper sheet of material 252 and a lower sheet of material 254 attached to the upper sheet of material 252 (FIG. 12). The upper sheet of material 252 may be constructed of a plastic, preferably translucent or transparent plastic, while the lower sheet of material 254 may be constructed of foil or plastic. The upper and lower sheets of material 252, 254 may be bonded to the side wall 244 such that the lid 240 is formed as a one-piece body.
A plurality of internal threads 246 are provided on an internal surface 245 of the side wall 244, and are designed to engage the external threads 28 of a container 20, as described in the embodiments of FIGS. 1-8, in order to allow the lid 240 to engage or disengage the container 20 to cover and uncover, respectively, the reservoir 2 by turning the lid 240. As is the case with the lids 40, 140, the threads 246 may be eliminated from the lid 240 in order to provide a press-on, interference fit between the lid 240 and a container 20 that lacks threads 28.
The top wall 242 includes a plurality of compartments 256 defined by the upper and lower sheets of material 252, 254. Each of the compartments 256 may contain a treatment unit 8 that can be dispensed from the compartment 256. The upper sheet of material 252 defines the top walls 257 of the compartments 256, and the lower sheet of material 254 defines the bottom walls 258 (see FIGS. 12 and 13) of the compartments 256. The top walls 257 may be convex and the bottom walls 258 may be substantially flat, thereby defining an interior volume 260. The bottom walls 258 may include patterns of weakness 259, which may comprise nicks, scores, cuts, perforations or combinations thereof. Although the patterns of weakness 259 are shown circular in shape, patterns of different shapes and sizes may be provided. The upper sheet of material 252 may be constructed of a deformable material such as plastic, for example, and preferably a translucent or transparent plastic. The lower sheet of material 254 may be rupturable, and may be constructed of a material that is impermeable to water, such as foil or plastic.
The lid 240 is a one-piece alternative to the lid assemblies 30, 130 described above with respect to FIGS. 1-10. Therefore, the entire lid 240 must be replaced when the treatment units 8 have been dispensed from all of the compartments 256 in the lid 240. As in the embodiments of FIGS. 1-8, a kit containing a one-month supply of treatments for a single eye may include four lids 240, wherein each lid 240 contains seven compartments 256. A kit, for example, containing a one-month supply of treatments for two eyes may include eight lids 240, wherein each lid 240 contains seven compartments 256. However, any number of lids 240 having any number of compartments 256 may be provided, as desired.
A unit 8 can be dispensed from a compartment 256 in the same way a unit 8 is dispensed from a compartment 56 in the previous embodiments. Specifically, pressing down on the top wall 257 in the direction D will force the top wall 257 and the unit 8 downward and thereby rupture the bottom wall 258 of the compartment 256. To help ensure proper administration of treatments, the lid 240 may also include treatment identifiers or labels I (FIG. 11) on its outer surface corresponding to each compartment 256.
FIGS. 14 and 15 respectively show a dispenser pack 50a and a lid 240a according to further embodiments. The pack 50a and lid 240a are similar to the previously described pack 50 and lid 240, respectively, except that the compartments 56, 256 include puncturing elements 62, 262. The puncturing elements 62, 262 are attached to interior surfaces of the top walls 57, 257, and extend downward into the interior volumes 60, 260. The puncturing elements 62, 262 may each comprise a rod, pin, blade, or any other projection that is sufficiently sharp to puncture the bottom walls 58, 258. According to the embodiments of FIGS. 14 and 15, the puncturing elements 62, 262 facilitate the dispensing of a treatment unit 8 from the compartments 56, 256 by moving downward into contact with the bottom walls 258 when the top walls 257 are depressed, and then puncturing the bottom walls 258.
FIG. 16 illustrates a dispenser pack 350 according to another embodiment, wherein reference numbers repeated from the pack 50 (FIGS. 2 and 4) indicate similar features. The dispenser pack 350 is similar to the previously described dispenser pack 50, except that the outer diameter Do of the dispenser pack 350 is selected such that the pack 350 rests on top of the lid engaging portion 26 of a container 20 (FIG. 2) and is sandwiched between the container 20 and a lid 40/140 (FIGS. 2 and 4/9 and 10) when the lid 40/140 is secured onto the container 20.
FIG. 17 illustrates a dispenser pack 350a for use with the lids 40, 140, according to yet another embodiment. The dispenser pack 350a is similar to the dispenser pack 50a (FIG. 14), as indicated by shared reference characters, except that the outer diameter Do of the dispenser pack 350a is selected such that the pack 350a rests on top of the lid engaging portion 26 of a container 20 (FIG. 2) and is sandwiched between the container 20 and a lid 40/140 when the lid 40/140 is secured onto the container 20.
FIGS. 18 and 19 illustrate a contact lens case 500 according to yet another embodiment. The case 500 includes a base 600 having two containers 620 joined together by a central partition wall 629. Each container 620 includes an annular side wall 624 extending from the partition wall 629, and which combines with the partition wall 629 to define an internal reservoir or cavity 602 for holding solution 4 and a contact lens 6 in the solution 4. As shown in FIG. 19, a lid engaging portion 626 is defined at the top of the side wall 624. A plurality of external threads 628 are provided on an external side 627 of the lid engaging portion 626. The base 600 may be constructed of plastic, however other materials providing suitable weight, durability and rigidity may be used.
A lid assembly 30 may be attached to each container 620 in the same manner described with respect to the container 20 in the embodiment of FIGS. 1-7. The case 500 may be referred to as a "vertical lens case," because the case is configured to rest on a surface S in a storage position, with either of the lids 40 supporting the case 500 on the surface S, and the containers 620 aligned with each other along a vertical axis Y. Thus, the containers 620 are vertically stacked when the case 500 is in a storage position, and the reservoirs 602 are disposed on vertically opposite sides of the partition wall 629. It is noted that the base 600 is not limited to use with the lid assemblies 30 described in the embodiment of FIGS. 18 and 19, but may also be used with any of the lid assemblies, lids and dispenser packs described in the other various embodiments herein.
The treatment units 8 to be administered to contact lenses 6 according to the various embodiments disclosed herein can be in powder, tablet, liquid or liquid emulsion form, and can contain any ophthalmic agent or compound that is used to treat any ocular disease or any ocular condition. Accordingly, the agent(s) in the units 8 can be selected from any class of compounds, for example, anti-inflammatory agents, anti-infective agents (including antibacterial, antifungal, antiviral, antiprotozoal agents), anti-allergic agents, antiproliferative agents, anti-angiogenic agents, anti-oxidants, neuroprotective agents, cell receptor agonists, cell receptor antagonists, immunomodulating agents, immunosuppressive agents, IOP lowering agents (anti-glaucoma), beta adrenoceptor antagonists, alpha-2 adrenoceptor agonists, carbonic anhydrase inhibitors, cholinergic agonists, prostaglandins and prostaglandin receptor agonists, AMPA receptor antagonists, NMDA antagonists, angiotensin receptor antagonists, somatostatin agonists, mast cell degranulation inhibitors, alpha-2 adrenoceptor antagonists, thromboxane A2 mimetics, protein kinase inhibitors, prostaglandin F derivatives, prostaglandin-2 alpha antagonists and muscarinic agents. It should be understood that while each of the units 8 may comprise the same active agent(s), it is possible to provide different active agents in individual units 8, as desired.
Of particular interest are pharmaceutical active agents that are known to treat an ocular disease or disorder including, but not limited to, a posterior-segment disease or disorder. In certain embodiments, such diseases or disorders typically include diabetic retinopathy, diabetic macular edema, cystoid macular edema, age-related macular degeneration (including the wet and dry forms), optic neuritis, retinitis, chorioretinitis, intermediate and posterior uveitis, and choroidal neovascularization.
Glaucoma is a group of diseases that are characterized by the death of retinal ganglion cells (“RGCs”), specific visual field loss, and optic nerve atrophy. Glaucoma is the third leading cause of blindness worldwide. An intraocular pressure (“IOP”) that is high compared to the population mean is a risk factor for the development of glaucoma. However, many individuals with high IOP do not have glaucomatous loss of vision. Conversely, there are glaucoma patients with normal IOP. Therefore, continued efforts have been devoted to elucidate the pathogenic mechanisms of glaucomatous optic nerve degeneration.
It has been postulated that optic nerve fibers are compressed by high IOP, leading to an effective physiological axotomy and problems with axonal transport. High IOP also results in compression of blood vessels supplying the optic nerve heads ("ONHs"), leading to the progressive death of RGCs. See, e.g., M. Rudzinski and H. U. Saragovi, Curr. Med. Chem.—Central Nervous System Agents, Vol. 5, 43 (2005).
Pharmaceutical active agents that are prescribed by a physician for the treatment of glaucoma, and that may be formulated and disposed in the compartmentalized lens case for delivery to a contact lens and subsequently to the eye of a patient, include travoprost, brimonidine, levobunolol, epinephrine, bimatoprost, dipivefrin, carteolol and metipranolol.
In one embodiment, the anti-glaucoma pharmaceutical agent is of general formula II
wherein A and Q are independently selected from the group consisting of aryl and heteroaryl groups substituted with at least a halogen atom, cyano group, hydroxy group, or C1-C10 alkoxy group; R1, R2, and R3 are independently selected from the group consisting of unsubstituted and substituted C1-C5 alkyl groups; B is a C1-C5 alkylene group; D is the —NH— or —NR′— group, wherein R′ is a C1-C5 alkyl group; and E is the hydroxy group.
Exemplary pharmaceutical agents of general formula II include A as a dihydrobenzofuranyl group substituted with a fluorine atom; Q as a quinolinyl or isoquinolinyl group substituted with a methyl group; R1 and R2 independently selected from the group consisting of unsubstituted and substituted C1-C5 alkyl groups; B as a C1-C3 alkylene group; D as the —NH— group; E as a hydroxy group; and R3 as a trifluoromethyl group.
Exemplary compounds include a glucocorticoid receptor agonist having Formulae III or IV, as disclosed in US Patent Application Publication 2006/0116396.
wherein R4 and R5 are independently selected from the group consisting of hydrogen, halogen, cyano, hydroxy, C1-C10 (alternatively, C1-C5 or C1-C3) alkoxy groups, unsubstituted C1-C10 (alternatively, C1-C5 or C1-C3) linear or branched alkyl groups, substituted C1-C10 (alternatively, C1-C5 or C1-C3) linear or branched alkyl groups, unsubstituted C3-C10 (alternatively, C3-C6 or C3-C5) cyclic alkyl groups, and substituted C3-C10 (alternatively, C3-C6 or C3-C5) cyclic alkyl groups.
Compositions of the invention also include ocular formulations prescribed by or recommended by a physician, or a health care provider, to treat ocular allergic conditions. Allergy is characterized by a local or systemic inflammatory response to allergens. Allergic conjunctivitis is a disorder that is characterized by the clinical signs and symptoms of eye itching, redness, tearing, and swelling. An estimated 20% of the population in the United States suffer from inflammation of the eye. The signs and symptoms of allergic conjunctivitis can significantly impact the quality of life of patients, from social interactions, productivity at work and school, to the ability to perform visual tasks such as working on a computer or reading.
Currently, available pharmaceutical treatments for inflammation of the eye or symptoms of inflammation of the eye include (1) antihistamines, (2) drugs that block the release of histamine and other substances from a mast cell (e.g., mast cell stabilizers), (3) drugs with multiple modes of action (e.g. antihistamine/mast cell stabilizing agents), and (4) drugs that can actively constrict blood vessels thus reducing redness and swelling (e.g., vasoconstrictors). Additionally, artificial tears have been used to wash the eye of allergens.
The desirability of a particular treatment for inflammation of the eye can be measured against the following factors: (1) efficacy at onset of action, (2) duration of action, (3) efficacy at controlling signs and symptoms of allergic conjunctivitis, and (4) comfort of the drop when instilled in the eye.
Pharmaceutical active agents that are prescribed by a physician for the treatment of an ocular allergic condition, and that may be formulated and disposed in the compartmentalized lens case for delivery to a contact lens and subsequently to the eye of a patient, include olopatadine, nedocromil, and loteprednol.
In one embodiment, the pharmaceutical active agent is ketotifen or a salt thereof. Ketotifen or any ophthalmically acceptable ketotifen salt may be used in the compartmentalized lens case herein described, although ketotifen fumarate is preferred. Ketotifen fumarate is represented by the following formula:
In another embodiment, the pharmaceutical active agent is an anti-redness agent, which may relieve redness in the eye. The preferred anti-redness agent is naphazoline or an ophthalmically acceptable salt thereof such as, for example, naphazoline hydrochloride. Other anti-redness agents that may be used include, but are not limited to, tetrahydrozoline, ephedrine, phenylephrine, oxymetazoline, xylometazoline, pseudoephedrine, tramazoline, other vasoconstrictors, combinations thereof, as well as ophthalmically acceptable salts thereof (e.g., tetrahydrozoline hydrochloride).
Naphazoline hydrochloride is represented by the following formula:
Naphazoline or a naphazoline salt may be present in a concentration from about 0.001% to about 0.2% (or alternatively, from about 0.001% to about 0.1%). In one embodiment, naphazoline or a naphazoline salt is present in a composition at a concentration from about 0.01% to about 0.1%; preferably, from about 0.01% to about 0.07%; more preferably, from about 0.02% to about 0.06%. In some embodiments, the method provides stability to compositions comprising naphazoline or a naphazoline salt in a concentration such that the concentration of naphazoline in the composition is about 0.02% to about 0.05%. Concentrations of a naphazoline salt yielding such concentrations of naphazoline base may be readily calculated; for example, using naphazoline hydrochloride in a concentration of about 0.025% in the composition provides a concentration of naphazoline base in the composition of 0.021%.
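The salt-to-base conversion quoted above follows directly from the ratio of the molecular weight of naphazoline to that of its hydrochloride salt. A minimal sketch (molecular weights taken from standard references; the function name is illustrative, not from the source):

```python
# Convert a salt concentration (% w/v) to the equivalent free-base
# concentration using the ratio of molecular weights.
MW_NAPHAZOLINE = 210.28      # g/mol, naphazoline free base
MW_NAPHAZOLINE_HCL = 246.73  # g/mol, naphazoline hydrochloride

def base_concentration(salt_pct: float, mw_base: float, mw_salt: float) -> float:
    """Free-base concentration contributed by a given salt concentration."""
    return salt_pct * mw_base / mw_salt

# About 0.025% naphazoline HCl corresponds to roughly 0.021% naphazoline
# base, matching the figure quoted in the text.
print(round(base_concentration(0.025, MW_NAPHAZOLINE, MW_NAPHAZOLINE_HCL), 3))
```

The same ratio scales any of the concentration ranges given in the paragraph above.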
Additional information on formulations containing ketotifen, naphazoline or a corresponding pharmaceutically salt of each thereof can be found in U.S. patent application Ser. No. 10/972,571 filed Oct. 25, 2004.
Pharmaceutical active agents that are prescribed by a physician for the treatment of an ocular infection, and that may be formulated and disposed in the compartmentalized lens case for delivery to a contact lens and subsequently to the eye of a patient, include antimicrobial agents, antibiotic agents and antifungal agents. The antimicrobial agents are selected from the group consisting of ciprofloxacin, sulfacetamide, trimethoprim, polymyxin B and norfloxacin. The antibiotic agents are selected from the group consisting of natamycin, tobramycin, gentamicin, gatifloxacin and ofloxacin. One of the more preferred antifungal agents is cromolyn.
Pharmaceutical active agents that are prescribed by a physician for the treatment of ocular inflammation, and which are formulated and disposed in the compartmentalized lens case for delivery to a contact lens and subsequently to the eye of a patient, include steroidal anti-inflammatory agents such as dexamethasone, prednisolone, fluorometholone, medrysone, flurbiprofen and loteprednol. Alternatively, a non-steroidal anti-inflammatory agent such as ketorolac can be used with the compartmentalized lens case.
In yet another embodiment, cyclosporine can be formulated into stable emulsions and disposed in the compartmentalized lens case herein described. Cyclosporine is an immunosuppressive agent that is prescribed to patients with an ocular infection associated with keratoconjunctivitis sicca. The cyclosporine is believed to act as a partial immunomodulator and enhances tear production.
In yet another embodiment, pharmaceutical active agents of the FK506 class can be formulated into stable emulsions and disposed in the compartmentalized lens case herein described. Emulsions, since they contain an aqueous phase, are much less occlusive than oil-based compositions and hence are better tolerated in many situations. Accordingly, in one embodiment a formulation, in the form of an emulsion, comprises a compound of the FK506 class, and a physiologically acceptable alkanediol, ether diol or diether alcohol containing up to 8 carbon atoms as solvent. A compound of the "FK506 class" is a compound which has the same basic structure as FK506 and which has at least one of the biological properties of FK506 (e.g., immunosuppressant properties). The compound may be in free base form or pharmaceutically acceptable, acid addition, salt form. A preferred compound of the FK506 class is disclosed in EP 427 680, e.g. Example 66a (also called 33-epi-chloro-33-desoxyascomycin).
In other embodiments, the agent(s) in treatment units may include non-pharmaceutical ocular agents. For example, the units may comprise agents such as lens rewetting agents, lubricating agents, moisturizing agents, alginate, HA [?], comfort agents, etc. The units may also include disinfectant powders or tablets with rapid dissolution.
The foregoing disclosure provides illustrative embodiments and is not intended to be limiting. It should be understood that modifications of the disclosed embodiments are possible within the spirit and scope of the invention, and the invention should be construed to encompass such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a perspective view of a contact lens case according to one embodiment of the invention;
FIG. 2 is an exploded side cross-sectional view of the contact lens case of FIG. 1;
FIG. 3 is a top view of the contact lens case of FIG. 1;
FIG. 4 is a side cross-sectional view of a lid assembly of the contact lens case of FIG. 1;
FIGS. 5, 6 and 7 are top, side cross-sectional and bottom views, respectively, of a dispenser pack of the lid assembly of FIG. 4;
FIG. 8 is a partially transparent perspective view of a contact lens case according to another embodiment of the invention;
FIG. 9 is a perspective view of a lid assembly for a contact lens case according to another embodiment of the invention;
FIG. 10 is a side cross-sectional view of the lid assembly of FIG. 9.
FIG. 11 is a perspective view of a lid for a contact lens case according to another embodiment of the invention;
FIG. 12 is a side cross-sectional view of the lid of FIG. 11;
FIG. 13 shows a bottom side of the lid of FIG. 11;
FIG. 14 is an enlarged partial view of a dispenser pack for a lid assembly according to another embodiment of the invention; and
FIG. 15 is an enlarged partial view of a lid for a contact lens case according to yet another embodiment of the invention.
FIG. 16 is a side view showing a dispenser pack disposed on a contact lens case, according to another embodiment.
FIG. 17 is a side view showing a dispenser pack disposed on a contact lens case, according to yet another embodiment.
FIG. 18 is a side view of a "vertical" contact lens case according to another embodiment.
FIG. 19 is a side cross-sectional view of a base of the contact lens case of FIG. 18.
“The farthest west is but the farthest east.” Henry Thoreau
At the beginning of the second half of the twentieth century the writing was on the wall for all to see. While the destructive force of war and the emergence of new political alliances, with even greater potential for annihilation, highlighted the need for a new spirit of the age, it was to be an age marked by uncertainty.
Throughout history humanity has faced similar situations. Whole civilisations have come and gone, often leaving barely a trace behind. But what is different about our modern age is the sense of helplessness that we all feel in the face of rapid change on a global scale. Our dilemma is that while we are highly conscious of what is happening, we seem to be powerless to do anything about it. As we get smarter, we are not getting any wiser.
We are incredibly well informed about the problems that we all face, but this does little to assuage the collective and individual anxiety that we all experience. Often we describe our problems in objective terms as something that can be managed: ‘the environment’, ‘pollution’, ‘economic situation’, or even indeed, the ‘pace of change’ itself. What we all too often fail to recognise is the source of the problem – ourselves.
Buddhism advocates a radical approach that is built upon three pillars. From an ostensibly static beginning – a seated position – one learns to regulate and develop one’s breathing, posture and faculty of attention so that unity of mind and body becomes habitual. From this simple beginning one can learn the right way to live.
In the Aikido that Morihei Ueshiba (1883-1969) developed, the principles of unification and harmony are contained in the kata (forms) derived from both armed and unarmed arts (Budo), and applied in a dynamic situation as a way of training. While it is true that the Founder of Aikido was not formally a Buddhist, he inherited a form of Shinto that was itself the result of an earlier synthesis of Buddhist theology and Shinto beliefs. There are many elements in Aikido that are common to both belief systems, particularly in relation to the insubstantial nature of the self.
The aim in Aikido is not to develop fighters, but to develop a fully rounded individual that can make a productive contribution to society. In the societal realm, this contributes to social and interpersonal harmony (wa).
In the personal realm, development can continue as a way of spiritual training or misogi, a way of purification. How far one wishes to move in this direction is a matter of individual choice and responsibility.
Morihei Ueshiba’s Aikido, which continued to evolve throughout his lifetime, was a vehicle for his own personal development. While others could follow, this did not guarantee that they would find anything. For many people, however, it has proved to be a worthwhile starting point. For the founder of Aikido, the challenge facing humanity in the twentieth century was essentially spiritual. The responsibility for personal transformation, however, rests solely with the individual. The starting point in both Aikido and Zen is one’s self.
The spread of Aikido (a Japanese form of Budo or martial art) and Zen Buddhism to the West in the mid-1950s, first to the US and then to Europe, was the result of a combination of social, economic and cultural factors that heralded the beginning of a new era. As nation states set about the business of post-war restoration and modernisation, it was a task framed against the backdrop of a world that was deeply divided.
Morihei Ueshiba regarded the spirit of Aiki (love and harmony) as an essential ingredient for the reunification of the Japanese people in the post-war years. He spoke of Aiki as “a golden bridge uniting the Japanese people.”
In later years, after the successful introduction of Aikido to the West, he was to talk of Aiki as a “silver bridge uniting the people of the world.” But it was to be some time before his vision of Aikido as a new form of ‘true Budo’ could be realised in his country of birth.
In Japan, a heroic reconstruction was underway, fueled by massive US investment and the need to provide a bulwark in the Pacific against Sino-Soviet interests. It was a recovery that was to bring the Japanese nation from near extinction to become a major economic superpower, dominating the economy of the Pacific Basin for many years.
Immediately after the war, however, there was widespread disaffection among the Japanese population with all things connected to its militaristic past. In the eyes of the Japanese people, the leadership and values of the past were viewed with some misgivings. This naturally included the activities and beliefs of traditional Budo and Zen Buddhism, whose ideals had been appropriated to support an ill-fated policy of Japanese nationalist expansionism. Budo was also a proscribed activity by the MacArthur led allied occupation, and it was to be some years before restrictions were relaxed.
In any event, the mass of the Japanese people were too preoccupied with day-to-day survival to be concerned about Zen or Budo. Hombu Dojo in Tokyo, the home of Aikido, served more as a hostel for the displaced and homeless in the years immediately following the war, than as a martial arts training establishment.
Many years of hardship were to follow, and it was not until the late 1950s that the hard work of the Aikido founder’s son and successor, Kisshomaru Ueshiba (1921-1999), and other committed Aikido teachers started to pay off as Aikido’s popularity began to grow in Japan. In the interim, Aikido instructors had already been sent to the US and Europe by the forward thinking Japanese.
Japan now looked to the West – to the US in particular – for economic support and direction. In the West there had been considerable interest in Japanese culture prior to the war, which began to resurface when hostilities ended. This provided Japan with an opportunity to show that the Japanese people had a gentler, more humane aspect, and they were keen to share this with the world. By the early 1950s, a period of cultural exchange was underway.
Japonisme was back in fashion, an echo of its seminal influence on the Impressionist, Cubist and Art Nouveau movements before the war. Interest re-emerged in the wood block prints of Bertha Lum (1869-1954), an American artist who had learned carving and printing techniques from Japanese master craftsmen as early as 1907.
The philosophical writings of Daisetsu Teitaro Suzuki (1870-1966) and Karlfried Graf Dürckheim (1896-1988), who was imprisoned by the Americans in Japan just after the war, helped explain many of the ideas underpinning Japanese culture and spiritual belief.
Translations of Haiku poetry from Japanese into English by the British academic R.H. Blyth (1898-1964), helped popularise Japanese literary forms in the early 1950s, and influenced the poetic writing of a whole generation in the US and Europe.
The Italian academic, writer, photographer, ethnologist and mountaineer, Fosco Maraini (1912-2004), interned by the Japanese authorities for the last two years of the war, revisited his adopted Japanese home with obvious affection in a book published in 1955, “Meeting with Japan.” In his book Maraini provides one of the most well informed, insightful and sympathetic accounts of Japanese life and culture to be written by a European.
In 2002, Maraini was honoured by an award from the Japanese Photographic Society recognising his achievements and contribution to fine-art photography, the ethnology of the Ainu of Hokkaido, and his efforts to strengthen ties between Japan and Italy spanning some sixty years. Italy still retains strong cultural links with Japan to this day and has a thriving Aikido community. By the mid-fifties, Japanese teachers of Budo and Zen began arriving in America and Europe. And in Japan, Western students turned up on the doorsteps of Zen monasteries and martial arts’ Dojos.
Early encounters were awkward and fumbling at first as neither side really understood the other, and imitation often took the place of understanding. But this was to change as the enthusiasm and passion of foreign students became evident to their Japanese teachers. Japanese Aikido teachers were already instructing in Europe as early as 1951. In France, Minoru Mochizuki introduced Aikido to French Judo students, and began laying the groundwork for Tadashi Abe who was to succeed him in 1952.
In 1955, the eclectic Budo Master Kenshiro Abe took up an instructor’s post at the invitation of the London Judo Society. He taught many Budo arts, including Aikido, and was to remain in Britain until 1966, when – apart from a brief visit – he returned to live permanently in Japan. In May 1959, Shunryu Suzuki (1904-1971), a Soto Zen teacher, arrived in San Francisco. Zen had come to the West to take up permanent residence.
During this time Europe was also recovering from the effects of war and undergoing a period of rapid change and development. The economy of Europe began to improve, and by the mid- to late-1950’s, particularly in Britain, there were signs of increased prosperity – the “never had it so good” era of Harold Macmillan.
This was not just political rhetoric. The mass of the British population had never been healthier. Education, housing, health care, employment and income had all improved throughout the 1950s and into the 1960s. Slums were cleared, national service was mandatory, and people had increased leisure time and access to more financial resources than ever before.
A youth culture began to emerge in Britain for the first time. Traditional values buckled under the pressure of new and powerful social forces. Demobilised men from the services finding their way in civilian life, rebellious ‘angry young men’, street gangs, counter-culture groups and protest movements on both sides of the Atlantic all attested to the state of flux and uncertainty of the times.
Intellectuals rediscovered romantic naturalism and mysticism in the writings of Blake, Henry Thoreau, R.W. Emerson and Thomas Merton. The modernist poetry of Pound and Eliot, though influenced by East Asian religious thought, inspired a reaction by the poets and writers of the ‘Beat Generation’ against their objectivist tendencies.
Freedom and spontaneity were the watchwords of the new subjectivism of the Beat movement. ‘Beat Zen’ became a fashionable, though transient phenomenon in the Bohemian circles of Paris, London and New York. The literary gurus of the day included the American writers Allen Ginsberg, Jack Kerouac and William S Burroughs. For them, personal freedom came first – the ‘me’ generation was born.
They were a highly influential group and inspired the ‘confessional’ writing of Robert Lowell, Sylvia Plath, Anne Sexton and many others. But there was a self-destructive element to the movement that led some commentators to describe them as the ‘Lost Generation’. Many had been inspired by the work of Wilhelm, Jung and R.H. Blyth, but their connection with Eastern religion and philosophy was a tenuous one at best.
Alan Watts (1915-1973), a writer and philosopher who had connections with both the Beat Generation and the academic establishment, did a great deal to promote Eastern thought and Zen Buddhism. But, like so many of his contemporaries, the attraction of Zen appeared to lie in the message of liberation rather than the practice.
Towards the end of his short life, however, Watts placed less emphasis on the significance of self-realisation and saw the tumultuous changes that were sweeping through society as evidence of a cosmological Zen principle – the dynamic of change itself was Zen in action. Accordingly, his interests centred on the psychology of man’s alienation from nature and its effects on man’s social and environmental quality of life.
The rise of secularism and the decline of religious faith, taken together with powerful forces of social change, set against the backdrop of the cold war and the omnipresent threat of a nuclear holocaust, brought everything into question. The future of mankind was itself in question.
In Paris, arguably the intellectual and cultural capital of Europe in the sixties, the existentialists gloomily pronounced that life was devoid of meaning and that the only true morality was action. A spiritual chasm opened that intellectual materialism was unable to bridge, and which traditional forms of Christian religious belief were unable to counter. Church congregations dwindled in numbers and young people began to look elsewhere for spiritual alternatives.
Many found relief in socialist inspired movements, which saw unparalleled growth during the 1950s and 60s; others looked to alternative religions and messianic prophecies promising a new age; and yet others explored alternative lifestyles and experimented with various ‘mind altering’ substances. The times were ‘a-changin’.
Aikido and Zen proved to be especially popular in France. While the existential materialism of Sartre offered a doctrine of individual freedom through choice and action, it had a hard edged quality to it that reinforced feelings of emptiness and isolation. Zen used similar language and talked of emptiness, freedom and action as well, but it cut off the head of the isolated self at a stroke. Self was an illusion. According to Zen Buddhism mankind is not other than nature, but part of an interconnected, dynamic whole.
The association between the French and the Japanese has always been closer than that of other occidental nations. Fosco Maraini, writing in 1955, commented on the special nature of that connection:
“I should say, in fact, that French and Japanese approach each other with the fewest mental reservations, the most open mutual humanity; that is why they achieve understanding.”
It is not by accident that there are more people practicing Aikido in France today than in any other country, including Japan.
In the philosophy of Zen and Aikido, the relationship between emptiness and form represents a vibrant, creative principle in which emptiness and fullness are not mutually exclusive concepts – full is empty, and empty is full. For the Zen Buddhist being and non-being are equally illusory.
Within the extremes of existence and non-existence human beings are free to live in a spirited way that is both meaningful and rewarding. Both Zen and Aikido provide a practical, rather than an intellectual way to understand this:
“As soon as you see something, you already start to intellectualise it. As soon as you intellectualise something, it is no longer what you saw.” (Shunryu Suzuki)
When Japanese teachers of Budo and Zen arrived in the west, they found fertile soil for what they had to teach. What they had to offer, in terms of spiritual training, struck a chord for those individuals whose search for meaning in life needed something that was both practical and spiritual. They offered a way of life that addressed mankind’s fundamental need for balance in the inner and outer dimensions of existence.
Those teachers came, not with definitions and philosophical complexities, but with a heartfelt desire to spread the ‘Way’ (Way of the Universe) in whatever discipline they happened to be teaching. They taught a way of harmony.
They didn’t come to replace or contest a system of belief or faith, but to provide an antidote to the worst excesses of a rationalist world view that had succeeded in reducing mankind to an endangered species.
They came from a country that had come perilously close to the edge of extinction, not to teach us to fight with one another, but how to take better care of ourselves. They came to teach us how to build a bridge between heaven and earth. | http://blog.aikidojournal.com/2011/07/18/building-a-bridge-between-heaven-and-earth-by-alister-gillies/ |
At SIGNARAMA of Huntington, we understand that there are many factors that impact your choice of signage solutions including business location, distance from road, exposure to sun and wind, ADA requirements, and town and city regulations, to name a few.
Our staff is dedicated to identifying your needs and to providing your organization with the most effective sign solution to accommodate your specific requirements. As part of this process, SIGNARAMA of Huntington can visit your location, provide an on-site survey and analysis, and recommend the best sign possible.
Signage should be a lasting investment, so it’s important to select the right materials, determine the optimal sign location, and choose the best design for your needs. All of this is just part of the professional service we provide at SIGNARAMA of Huntington. Call us to schedule an on-site survey and analysis, and we’ll help you get it right the first time. | https://www.ssar.com/products_services/services.html/title/on-site-survey-analysis |
Brainwaves are the patterns of electric pulses made by brain cells (neurons) when they communicate with each other. These pulses are the vibrations made by particles bumping into each other, creating a disturbance and transferring energy. All forms of energy, such as light and sound, are transferred in this way. One pulse is measured as a single positive-negative vibration called a cycle. The number of cycles within one second is the frequency, measured in Hertz (Hz). The wave pattern of positive-negative-neutral is represented as a line with peaks and valleys of different heights. The height is called amplitude and represents the strength of the wave. Therefore, when you see a brainwave plotted on a screen, you are seeing the frequency and amplitude of brainwaves. Brainwave types are categorized by frequency and associated with different types of activity, as described below.
Multiple frequencies can be present in the same brain region at the same time. Think of a large sound speaker at a concert. When you look at it closely, or touch it, it is vibrating intensely. Is it producing one sound? No, of course not. There are several sounds coming through the same medium at the same time. Your brain works in the same way: multiple brainwaves are present at the same location at the same time. With neurofeedback, we are training your brain to amplify different brainwave frequencies relative to others to create a dominant frequency and type of brain activity.
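To make the frequency-and-amplitude picture concrete, here is a minimal sketch that decomposes a synthetic signal into the conventional frequency bands and reports the dominant one. The band cutoffs below are common conventions (exact boundaries vary between sources), and the signal is artificial, not real EEG.

```python
import numpy as np

def band_power(signal, fs, bands):
    """Return total spectral power per named frequency band.

    signal: 1-D array of samples; fs: sampling rate in Hz;
    bands: dict mapping band name -> (low_hz, high_hz).
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

# Synthetic 2-second "recording": a strong 10 Hz (alpha-range) sine
# plus a weaker 20 Hz (beta-range) sine, both present at once.
fs = 256
t = np.arange(0, 2, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
powers = band_power(eeg, fs, bands)
dominant = max(powers, key=powers.get)
print(dominant)  # alpha
```

Both components coexist in the same signal, just as multiple brainwaves coexist at one electrode site; "amplifying one band relative to others" corresponds to shifting which band holds the most power.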
The ability to train to specific frequencies is heavily reliant on the quality of the signal that the electroencephalogram (EEG) can detect. There are many factors that impact quality including muscle movement, electrode placement, and digital signal processing algorithms. So, a device can produce a graph or number and label it a brainwave measurement without the quality being good enough for training purposes. You can spend your time training “noise” and get sub-optimal results. Sens.ai has gone to great lengths to address these and other factors. This includes a signal quality test at the beginning of every session to ensure your experience is optimal.
Having too much or too little power in each waveband can be a problem, depending on the situation and task. You want lots of delta while in deep sleep, but not that much delta while awake and tasking. So the ability of the brain to be flexible and produce more or less of each band depending on the requirements of the moment is key. That is why each band is associated with both negative and positive characteristics. If you are stuck in high theta, that can be great during creative problem solving or therapy, but not so great for getting something done. | https://sens.ai/science/brainwaves/ |
A RESTful Web Services API was added to the product with XPP 9.3.
It is available, and is distributed for use with any new installation or upgrade as of the XPP 9.3 release.
Replace Web Services with a RESTful API that is URL addressable for XPP functionality.
Low-level "micro" services (toxsf, compose, print, citi, etc.) supported by XPP, which can be built upon for more high-level customized functionality.
Continuing the discussion of sync/async: from a direct-socket perspective, the new REST API would keep the socket open, so timeouts could be a problem (regardless of whether the client is using a callback to be notified when the data is ready). This is not a problem for the 'short term' functions you might want to run (like setting the active style in the job ticket, or checking whether a division exists).
But for long term processes (like compose, print), I can see the need for a launch without waiting. The "nowait" would start the process in the background, return an ID and provide a way to check on the status of the command based on ID. If the command finished, you would get the status code and the stdout/stderr messages.
So minimally, a "nowait" option should be provided for compose, print, and "usercommand".
This type of connection cannot be tied to a callback (unless the client writes some kind of timer), because in this scenario the XPP REST server cannot send data back to the client via HTTP once the connection is gone. It has to be a query using the ID to get the status.
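The client side of that "nowait" pattern can be sketched as a poll loop over the returned job ID. The status-function shape and the response fields (`state`, `exit_code`, `stdout`, `stderr`) are assumptions for illustration, not the actual XPP API; in a real client, `get_status` would wrap an HTTP GET against whatever job-status endpoint the server exposes.

```python
import time

def poll_job(get_status, job_id, interval=1.0, timeout=300.0):
    """Poll a background 'nowait' job by ID until it reports completion.

    get_status(job_id) is expected to return a dict such as
    {"state": "running"} or {"state": "done", "exit_code": 0},
    optionally with "stdout"/"stderr" messages. These field names
    are illustrative assumptions, not the real XPP response format.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status.get("state") == "done":
            return status  # carries the status code and any output
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")

# Simulated server for demonstration: reports "running" twice, then "done".
_states = iter(["running", "running", "done"])

def fake_status(job_id):
    state = next(_states)
    return {"state": state, "exit_code": 0} if state == "done" else {"state": state}

result = poll_job(fake_status, "compose-42", interval=0)
print(result["exit_code"])  # 0
```

Because each poll is an independent short-lived request, no socket stays open for the duration of a long compose or print run, which is exactly what makes the launch-and-query approach robust against timeouts.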
MOSCOW, January 10. /TASS/. Trilateral talks of Russian President Vladimir Putin, his Azerbaijani counterpart Ilham Aliyev, and Armenian Prime Minister Nikol Pashinyan will be held on Monday, January 11, in Moscow at the initiative of the Russian head of state. The sides plan to review the implementation of the November 9, 2020 statement of the three leaders on Nagorno-Karabakh and discuss steps to resolve the regional issues, the Kremlin press service reported on Sunday.
"At the initiative of the president of the Russian Federation Vladimir Putin, on January 11, 2021, in Moscow, trilateral talks will be held of the President of the Russian Federation, the President of the Republic of Azerbaijan Ilham Aliyev, and the Prime Minister of the Republic of Armenia Nikol Pashinyan. It is planned to review the implementation of the November 9, 2020 statement of leaders of Azerbaijan, Armenia and Russia on Nagorno-Karabakh and discuss further steps on resolving the issues present in the region," the statement said.
According to the press service, a particular attention will be paid to the issues of aid to residents of districts affected by the military action as well as of unblocking and developing trade and economic and transport connections.
Additionally, separate conversations of Vladimir Putin with Ilham Aliyev and Nikol Pashinyan are planned.
Renewed clashes between Azerbaijan and Armenia erupted on September 27, with intense battles raging in the disputed region of Nagorno-Karabakh. The conflict over Nagorno-Karabakh, a disputed territory that had been part of Azerbaijan before the break-up of the Soviet Union but is primarily populated by ethnic Armenians, broke out in February 1988 after the Nagorno-Karabakh Autonomous Region announced its withdrawal from the Azerbaijan Soviet Socialist Republic. In 1992-1994, tensions boiled over and exploded into large-scale military action for control over the enclave and seven adjacent territories after Azerbaijan lost control of them.
On November 9, Russian President Vladimir Putin, Azerbaijani President Ilham Aliyev and Armenian Prime Minister Nikol Pashinyan signed a joint statement on a complete ceasefire in Nagorno-Karabakh starting from November 10. The Russian leader said the Azerbaijani and Armenian sides would maintain the positions that they had held and Russian peacekeepers would be deployed to the region. | https://tass.com/politics/1243143 |
I used to work in a call center where you had to dial 9 for an outside line (like most places). The IT department had a team that would load new batches of phone numbers for the salespeople to call. One of those batches somehow started with the prefix 11x, and, well, you can imagine what happened next.
At my office you can use either a 9 or an 8 to get an outside line. We had an issue with people dialing 9 for an outside line, then dialing 1 for a long-distance US number and accidentally hitting another 1, which dialed 9-1-1. Cops showed up a few times... | https://community.spiceworks.com/topic/2207278-when-your-boss-extension-reaches-farther-than-expected?page=4 |
Every course offered through the Civil Engineering program will have an official course description webpage in UVic's calendar. The site provides general information about the course and its administration. Each course may also have its own home page with more detailed information. Its address will be given on the course outline, and its content, including observation of property rights, is the responsibility of the instructor.
The tables below detail the Civil Engineering program by term. Consult the University Calendar and the Civil Engineering Program Planning Worksheet for the latest information.
Term schedule
| Year | Fall | Spring | Summer |
|---|---|---|---|
| Year 1 | 1A | 1B | Co-op |
| Year 2 | 2A | Co-op | 2B |
| Year 3 | Co-op | 3A | Co-op |
| Year 4 | 3B | Co-op | 4A |
| Year 5 | Co-op | 4B | |
Term 1
| Term 1A | Term 1B |
|---|---|
| Fundamentals of programming with engineering applications | Engineering chemistry |
| Design and communication I | Design and communication II |
| Introduction to professional practice | Engineering mechanics |
| Calculus I | Calculus II |
| Matrix algebra for engineers | Introductory physics II |
| Introductory physics I | |
Term 2
| Term 2A | Term 2B |
|---|---|
| CIVE 200 Engineering drawing | CIVE 210 Sustainable development in civil engineering |
| CIVE 299 Geomatics engineering | CIVE 285 Civil engineering materials |
| GEOG 103 Introduction to physical geography | MATH 204 Calculus IV |
| MATH 200 Calculus III | CIVE 220 Mechanics of solids I |
| STAT 254 Probability and statistics for engineers | CIVE 242 Dynamics |
| | CIVE 295 Building science fundamentals |
Term 3
| Term 3A | Term 3B |
|---|---|
| CSC 349A Numerical analysis | CIVE 340 Sustainable water resources |
| CIVE 310 Environmental engineering | CIVE 351 Sustainable design of steel and timber structures |
| CIVE 315 Environmental policy | CIVE 352 Reinforced concrete structural design |
| CIVE 345 Fluid mechanics | CIVE 360 Sustainable transportation systems |
| CIVE 350 Structural analysis | CIVE 370 Construction and project management |
| CIVE 385 Geotechnical engineering | |
Term 4
| Term 4A | Term 4B |
|---|---|
| LIST OF ELECTIVES | CIVE 400 Cross-disciplinary capstone design project |
| CIVE 410 Solid waste, air and water pollution | ENGR 498 Engineering law |
| CIVE 421 Advanced Structural Analysis | LIST OF ELECTIVES |
| CIVE 440 Hydrology and hydraulics | CIVE 411 Resilient smart cities |
| CIVE 450 Building and district energy simulation | CIVE 444 Water & sanitation for developing countries |
| CIVE 480A Special Topics: Energy Systems Decarbonization | CIVE 445 Groundwater hydrology |
| CIVE 480B Special Topics: Drinking Water Contaminants - Chemistry, Toxicology and Greener Interventions | CIVE 452 Engineering for earthquakes and extreme events |
| CIVE 480C Special Topics: Case Studies in Construction Management | CIVE 480F Special Topics: Timber Structures |
| CIVE 499 Research Project | CIVE 485 Foundation Engineering |
| CS ELECTIVE: 2 Complementary studies electives * | CIVE 499 Research Project |
Current Timetable
Students must complete four Co-op work terms (ENGR 001, 002, 003, 004) as per the Faculty of Engineering Academic and Work/Other Terms Schedule in the Undergraduate Academic Calendar.
*A Complementary Studies Elective course deals with central issues in humanities or social sciences. The Faculty of Engineering must approve the chosen courses, prior to registration. Consult the Faculty website for a current list of approved courses. Not all technical electives listed may be available. CS electives can be taken in any term. | https://www.uvic.ca/engineering/civil/undergrad-students/home/courses/index.php |
Movement of a substance through a membrane, often blood.
Excretion
The removal of waste products from the body
Epithelial Tissues are classified according to their function as
Membranous or Glandular
Layers of the skin
Epidermis
Dermis
Layers of the epidermis
Stratum Corneum
Stratum Lucidum
Stratum Granulosum
Stratum spinosum
Stratum basale
Simple Squamous Epithelum
alveoli of lungs, lining of blood and lymphatic vessels, surface of pleura, pericardium, and peritoneum
Simple Cuboidal Epithelium
Many types of glands and their ducts, ducts and tubules of other organs such as kidneys
Simple Columnar Epithelium
Surface of mucous membrane that lines stomach, intestine, uterus, uterine tubes, and parts of respiratory tract. Goblet cells, cilia, and microvilli are modifications frequently seen in this tissue. | https://freezingblue.com/flashcards/267635/preview/tissues |
International machine learning conference en route to Sydney
An international machine learning conference is set to land in Australia in 2021 following a successful bid by CSIRO’s Data61 and tendering partner BESydney.
In its 35th year, the Annual Conference on Neural Information Processing Systems (NeurIPS) will trek to Sydney for its first event outside the Northern Hemisphere, according to CSIRO.
CSIRO’s Data61 Group Leader, Machine Learning, Dr Richard Nock, said the conference will “bring together thousands of machine learning specialists to share the latest research and discuss key issues including the ethical design and deployment of machine learning, and it is a great opportunity for Australia to be at the centre of this conversation”.
“This past year, our researchers have applied AI and machine learning to assist in diagnosing complex mental health disorders, detect disease outbreaks and ‘vaccinate’ algorithms against adversarial attacks,” Nock added.
The conference is expected to draw attendance from big tech companies such as Tesla, Google, Microsoft and Facebook, as well as representatives from international universities, and to bring opportunities for trade, investment and talent attraction, BESydney CEO Lyn Lewis Smith and NSW Minister for Jobs, Investment and Tourism Stuart Ayres said.
Dr Terrence Sejnowski, President of the NeurIPS Foundation, which organises the event, said the conference was “part of our ongoing mission to bolster the global community of AI and machine learning researchers and create opportunities for them to continue to connect in new ways and new places, especially locations like Australia where there is growing interest and investment in this important field of technology.”
This will be the third time the conference has taken place outside North America, after Granada and Barcelona hosted NeurIPS in 2011 and 2016, respectively, CSIRO said.
The conference will take place at the International Convention Centre Sydney in November 2021.
| https://www.technologydecisions.com.au/content/it-management/news/international-machine-learning-conference-en-route-to-sydney-1360894933
Genetic Advances in Post-traumatic Stress Disorder.
Post-traumatic stress disorder, or PTSD, is a condition that affects a subgroup of individuals who have suffered a previous traumatic event capable of generating changes at a psychological and behavioural level. These changes affect the personal, family, and social environment of those who suffer from the condition. Different genes have been identified as risk markers for the development of this disorder. The heterogeneity of populations and the individual (genetic and environmental) differences of each subject have made it difficult to identify valid markers in previous studies. For this reason, Gene x Environment (G×E) studies have gained importance in the last two decades, with the aim of identifying the phenotypes of a particular disease. These studies have included genes such as SLC64A, FKBP5, and ADCYAP1R1, among others. Little is known about the interaction between the genes, pathways, and the molecular and neural circuitry that underlie PTSD. However, their identification and association with the stimuli and specific environments that promote the development of PTSD make them a focus of interest for identifying genomic variations in this disorder. In turn, the epigenetic modifications that regulate the expression of genes involved in the hypothalamic-pituitary-adrenal (HPA) axis and the amygdala-hippocampal-medial prefrontal cortex circuits play a role in the identification of biomarkers and endophenotypes in PTSD. This review presents the advances in genetics and epigenetics that have occurred in the genomic era of PTSD.
| |
Boulevard Animal Hospital is an Animal Hospital facility at 2310 Spring Forest Road in Raleigh, NC.
Services Boulevard Animal Hospital practices at 2310 Spring Forest Road, Raleigh, NC 27606.
To learn more, or to make an appointment with Boulevard Animal Hospital in Raleigh, NC, please call (919) 781-5145. | https://www.wellness.com/dir/3215684/animal-hospital/nc/raleigh/boulevard-animal-hospital
[Bibliometric study of the journal Nutrición Hospitalaria for the period 2001--2005: Part 2, consumption analysis; the bibliographic references].
To describe and assess, by means of bibliometric analysis, the consumption of the information consulted and cited in the articles published in the journal Nutrición Hospitalaria during the period 2001-2005. Cross-sectional descriptive analysis of the results obtained from the lists of bibliographic references (BR) of the articles published in Nutrición Hospitalaria. We studied the most cited journals, the signatures index, the type of document cited, the publication language, the distribution of geographical origin, and the obsolescence and readiness indices. All types of documents were taken into account, with the exception of communications to congresses. A total of 345 articles were published in Nutr Hosp, containing 8,113 bibliographic references, with a median of 18, a maximum of 136, and a minimum of 0 BR per article. The mean number of references per published article during the specified period is 23.52 (95% CI 20.93-26.10), and the 5% trimmed mean is 20.66 per article. The 25th and 75th percentiles are 6 and 32, respectively, giving an interquartile interval of 26 BR per document. The Burton-Kebler half-life is 7 years and the Price index is 38.18%. The bibliographic references, and hence the consumption of information, of the articles published in Nutrición Hospitalaria show parameters similar to those of other health science journals. However, good obsolescence figures are observed, revealing the continuing validity of most of the references studied.
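The two obsolescence measures quoted in the abstract (the Price index and the Burton-Kebler half-life) are simple to compute from the ages of a paper's cited references. A minimal sketch on invented reference ages, not the journal's actual data:

```python
# Sketch of the two obsolescence measures, on made-up data.
# ref_ages: age in years of each cited reference at publication time.
from statistics import median

ref_ages = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20]

# Price index: percentage of references that are at most 5 years old.
price_index = 100 * sum(age <= 5 for age in ref_ages) / len(ref_ages)

# Burton-Kebler half-life (semi-period): the median age of the references.
half_life = median(ref_ages)

print(f"Price index: {price_index:.2f}%")
print(f"Half-life: {half_life} years")
```

A high Price index and a short half-life both indicate that a literature relies on recent sources, i.e. that it ages quickly.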
| |
Statistical cluster analysis is an exploratory data analysis technique which groups heterogeneous objects (M.D.) into homogeneous groups. We will learn the basics of cluster analysis in a mathematical way.
In hierarchical cluster analysis (HCA), the observation vectors (cases) are grouped together on the basis of their mutual distances.
An HCA is usually visualised through a hierarchical tree called a dendrogram. This hierarchical tree is a nested set of partitions represented by a tree diagram.
If two groups are chosen from different partitions, then either the groups are disjoint or one group is totally contained within the other.
A numerical value is associated with each partition of the tree where branches join together. This value is a measure of distance or dissimilarity between two merged clusters.
Begin with n clusters, each containing a single case.
At each stage, merge the two most similar groups to form a new cluster, thus reducing the number of clusters by one.
The divisive method operates by the successive splitting of groups.
Search the distance matrix (D) for the nearest (most similar) pair of objects. Let the distance between the most similar pair of clusters, say U and V, be denoted by d[UV].
Merge clusters U and V, delete the rows and columns of D corresponding to U and V, and add a row and a column giving the distances between the newly formed cluster (U, V) and the remaining clusters. Repeat until a single cluster remains.
Construct the dendrogram from the record of the mergers and the levels at which they occurred.
Median linkage: the distance between the medians of the two clusters.
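The agglomerative procedure above can be sketched with SciPy, which implements the same merge-and-update loop. The data points and the cut threshold here are invented for illustration:

```python
# Agglomerative hierarchical clustering with SciPy, mirroring the steps above.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform

points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9], [9.0, 0.5]])

# The distance matrix D that the algorithm searches for the nearest pair.
D = squareform(pdist(points))  # 5 x 5 symmetric matrix, zeros on the diagonal

# Merge the two nearest clusters, update distances, repeat n - 1 times.
# method="median" uses the median linkage described above.
Z = linkage(points, method="median")

# Each row of Z records one merger: the ids of the two merged clusters,
# the level (dissimilarity) at which their branches join, and the new size.
for left, right, level, size in Z:
    print(f"merge {int(left)} + {int(right)} at level {level:.3f} (size {int(size)})")

# Cutting the tree at a chosen level turns the nested partitions into a flat one.
labels = fcluster(Z, t=2.0, criterion="distance")
print(labels)
```

Plotting `scipy.cluster.hierarchy.dendrogram(Z)` draws the tree itself, with the merger levels on the vertical axis.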
I am a Data Science practitioner with extensive experience in solving problems using analytical approaches across different domains, and I am passionate about helping those interested in Data Science. Most importantly, I am obsessed with the idea of collaborative learning and strongly believe that "learning is the eye of the mind". Feel free to contact me at [email protected]. I am always looking forward to scaling up your skills through the quality content of my blogs/articles. | https://stepupanalytics.com/beginners-guide-to-statistical-cluster-analysis-in-detail-part-1/