In the distribution of closed-ended questions
You are analyzing the distribution of your closed-ended questions and you can't explain a 1% anomaly? If you group all categories of respondents, you get 101% or 99% of respondents instead of 100%?
This slight discrepancy can be explained simply by the use of whole numbers. To make it easier to read and use the results, all percentages are rounded to the nearest whole number.
For example, in the breakdown below, when all categories are added together, the result is 101% of respondents:
| Respondent category | Actual percentages | Rounded results |
|---|---|---|
| Positive | 5.61% | 6% |
| Neutral | 39.2312% | 39% |
| Negative | 48.51283% | 49% |
| Strongly negative | 6.64597% | 7% |
Why this choice?
Rounding avoids displaying percentages with several decimal places, such as "33.56789%" which in the interface appears as "34%". This way you can see the main trends at a glance.
Don't worry: with accuracy to within 1%, all Supermood results remain extremely reliable. These variations can be surprising at first, but they are negligible!
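The rounding effect described above is easy to reproduce. The sketch below (plain Python, using the percentages from the example table) shows the actual values summing to 100% while their whole-number roundings sum to 101%:

```python
# Per-category percentages from the example table above.
actual = {
    "Positive": 5.61,
    "Neutral": 39.2312,
    "Negative": 48.51283,
    "Strongly negative": 6.64597,
}

# Round each category to the nearest whole number, as the interface does.
rounded = {category: round(pct) for category, pct in actual.items()}

print(round(sum(actual.values()), 5))  # 100.0
print(sum(rounded.values()))           # 101 -> the "anomaly" is just rounding
```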
When analyzing multiple-choice questions
If employees can check only one answer: the explanation is the same as above, it's a question of rounding!
On the other hand, if employees are allowed to check more than one answer for the same question, the total number of answers very quickly exceeds the number of respondents. When you add up the percentages, you end up with a total that is well over 100% - and that's normal.
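As a sketch of the multi-select case (the counts below are hypothetical, not Supermood data): with 100 respondents who may each check several answers, each option's percentage is still computed against the respondent count, so the percentages legitimately sum to well over 100%:

```python
respondents = 100

# Hypothetical multi-select tallies: each respondent may have checked
# more than one option, so total checks exceed the respondent count.
checks = {"Option A": 70, "Option B": 60, "Option C": 50}

# Each option's share is measured against respondents, not total answers.
percentages = {opt: 100 * n / respondents for opt, n in checks.items()}

print(percentages)
print(sum(percentages.values()))  # 180.0 -> over 100%, and that's normal
```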
This format mainly allows you to analyze trends. If you prefer results that allow you to categorize populations, you can turn to single-choice MCQs. Please see our article Composing a Multiple Choice Question (MCQ) for more details. | https://help.supermood.com/hc/en-us/articles/4408843682322-Why-do-the-percentages-not-add-up-to-100- |
Mars has shallow deposits of water-ice that astronauts can reach with a shovel: NASA
We may not have a spaceship to carry our astronauts to Mars yet, but NASA sure knows where to land it.
The spaceship that could ferry the first humans to explore Mars is still far from complete. But potential landing sites for the first human missions might come our way a lot sooner.
Mars geologists at the American space agency NASA have cited a new research study mapping locations of rich water-ice deposits on the red planet in never-before-seen detail. In some of these regions, the water-ice deposits are mere inches from the surface — an exciting prospect for future resource-seeking missions to the Red Planet.
"You wouldn't need a backhoe to dig up this ice. You could use a shovel," Sylvain Piqueux, the study's lead author from NASA's Jet Propulsion Laboratory, said in a press release. "We're continuing to collect data on buried ice on Mars, zeroing in on the best places for astronauts to land."
Map of underground water-ice on Mars. Image: NASA
The geological map of Mars compiled in the study shows an abundance of water-ice at the Martian poles and in its mid-latitudes. One specific stretch in the northern mid-latitudes is a "treasure map" of water-ice that is as close as an inch from the surface, based on satellite estimates.
This resource is a prime target for NASA's larger plan of "in situ resource utilization" on the Moon and Mars: finding and using resources that occur naturally on a planet or moon to enable human colonies to survive there. Considering NASA is among the key players vying to operate the first research base on Mars, the research going into finding hotspots like these will shape the pursuit of Mars research for decades to come. Satellites orbiting Mars (including ISRO's Mars Orbiter, Mangalyaan) are helping scientists zero in on the "best places" to build the first Martian research station.
The study NASA points to was recently published in Geophysical Research Letters and will help the agency map water-ice locations on the Red Planet. Data from two spacecraft, NASA's Mars Reconnaissance Orbiter (MRO) and the Mars Odyssey orbiter, went into mapping water-ice on Mars that could potentially be within reach of astronauts.
"These regions near the poles have been studied by NASA's Phoenix lander, which scraped up ice, and the MRO, which has taken many images from space of meteor impacts that have excavated this ice," NASA writes in a blog post.
The study also proposes Arcadia Planitia, a region on Mars shaped by ancient lava flows, as a good spot to land a human Mars mission. It reportedly has an abundance of water-ice that astronauts can scoop up when the time comes. That said, the groundwork is far from complete. Researchers at NASA hope to extend the study and examine underground water-ice deposits on Mars to see how their levels fluctuate across seasons. | |
The UN Human Rights Council should condemn serious, ongoing human rights violations by militias in Libya, Human Rights Watch said today. The council should appoint an independent expert to document the abuses and monitor the government’s response.
The Human Rights Council is discussing Libya during its current session, with a resolution expected the week of March 18, 2012.
Despite commitments by Libya’s transitional government to stem abuses, Human Rights Watch has documented ongoing killings, torture, and forced displacement by militias. (…) The government has proven incapable of reining in these militias or holding to account those responsible for abuses. (…)
(…) A draft Human Rights Council resolution proposed by the transitional Libyan government is woefully weak, Human Rights Watch said. It only “takes note” of the Commission of Inquiry report and “encourages” the government to investigate human rights violations. Negotiations on the draft will continue until the voting, on March 22 or 23.
The resolution should include the appointment of an independent expert to monitor human rights violations and report back to the Council, Human Rights Watch said. At minimum, the Council should mandate the high commissioner for human rights to report on the human rights situation in the country publicly and regularly, Human Rights Watch said.
Libya’s friends, especially those that supported the NATO intervention there, should approach Libya at the highest levels of government and insist on continued monitoring and involvement by the Human Rights Council, Human Rights Watch said. (…)
(…) The Human Rights Council established the Commission of Inquiry in February 2011, with a mandate to investigate all alleged violations of international human rights law in Libya and to make recommendations. Libya’s membership at the council was suspended in March 2011, because of serious abuses by the Gaddafi government. Libya rejoined the council in November.
The Commission of Inquiry’s March 2 report, its second, found that Gaddafi forces had committed war crimes and crimes against humanity. It also found that anti-Gaddafi forces had “committed serious violations, including war crimes and breaches of international human rights law, the latter continuing at the time of the present report.” (…) The difference between past and present abuses, the report said, is that “those responsible for abuses now are not part of a system of brutality sanctioned by the central government.”
The UN report highlighted the plight of the people from Tawergha, perceived as Gaddafi supporters, who have been killed, arbitrarily arrested and tortured by anti-Gaddafi fighters from Misrata. The widespread and systematic nature of these abuses indicates that crimes against humanity have been committed, the report said.
The report called on the Human Rights Council to “establish a mechanism to ensure the implementation of the recommendations in [the] report.” (…)
To read the full article, see here. | http://responsibilitytoprotect.org/index.php/crises/190-crisis-in-libya/4042-human-rights-watch-libya-human-rights-council-monitoring-needed |
1) Please remove ALL Metal Ornaments/Jewelry, Watches, Glasses, Pager and Cellphone (and any other devices generating electromagnetic radiations), before doing the following procedure.
2) Please wear Clothes made from NATURAL Fabric(s).
3) Please be SURE that you've read and that you understand your BioTec2000 Frequency Instrument - Operating Instructions, before proceeding with Operating Steps 1 - 17.
4) Please preprogram your BioTec2000 Frequency Instrument with the REQUIRED Specific Frequencies and Time Periods, before proceeding with Operating Steps 1 - 17.
5) Please ELIMINATE Operating Steps 1 - 7 (see below), once they are initially COMPLETED, if you're going to use your BIOENERAY XL3™ Plasma Tube Unit (version 2) on a REGULAR basis.
6) Please pay SPECIAL attention to the RED NOTE in Step 13 (see below) and the NOTES, which are right after Operating Steps 1, 2, 4, 6 & 12.
7) Please place your Power Supply #1 (blue/white metal cabinet) within 1 ft. of the rear of your Glass Plasma Tube & Stand/Holder, but at LEAST 6 ft. away from your Power Supply #2 (cream-colored metal cabinet) and at least 12 ft. away from your BioTec2000 Frequency Instrument.
8) Please place your Power Supply #2 (cream-colored metal cabinet) at LEAST 3 ft. away from your Body, during your EXPERIMENTAL Treatment Session.
NOTE: Your Power Supply #2 (cream-colored metal cabinet) emits a 60 Hz. Electromagnetic Field, which is a Frequency, that can ADVERSELY affect your Health, especially when you're SERIOUSLY ILL.
9) Please place your BioTec2000 Frequency Instrument on either a Plastic or a Wood Table and your BIOENERAY XL3™ Plasma Tube Unit (version 2) on various shelves of a Book Shelf, so that you can use your BIOENERAY XL3™ Plasma Tube Unit (version 2) for the top to the bottom of your Body, if NEEDED.
NOTE: Whatever you use, there should be FEW, if ANY, Metal Parts involved.
10) Please clean your Glass Plasma Tube, if it has Film/Haze on it, with an ammonia-based Glass Cleaner, when it has COOLED DOWN and is NOT being used.
11) Please hold onto the Jack &/or the Plug, rather than holding onto the Cable, when you are connecting or disconnecting Cables, pertaining to the Units, which are involved with your BIOENERAY XL3™ Plasma Tube Unit (version 2) and your BioTec2000 Frequency Instrument.
12) Please lift your Power Supply #1 (blue/white metal cabinet), when moving it to another location, because sliding your Power Supply #1 (blue/white metal cabinet) could DISPLACE its Rubber Feet.
13) Please REPLACE a Metal Amplitude/Intensity Knob, with a Plastic Knob (or wear a glove), if applicable.
NOTE: The High Voltage Electric Field, being generated, can flow from your Body into a Metal Knob and then into your BioTec2000 Frequency Instrument, causing Digital Readout PROBLEMS and possible internal DAMAGE!
14) Please do NOT operate your BIOENERAY XL3™ Plasma Tube Unit (version 2) CONTINUOUSLY for MORE than 3 hours, because the Power Transistor and Ignition Coils in your Power Supply #1 (blue/white metal cabinet) could OVERHEAT EXCESSIVELY!
NOTE: If you allow your Power Supply #1 (blue/white metal cabinet) to COOL DOWN for 15 minutes, you may then use it again CONTINUOUSLY for another 3 hours.
15) Please do NOT operate below 20 Hz. or use the "PULSE" Function below 100 Hz. on your BioTec2000 Frequency Instrument.
16a) Please follow Step 16b below, if you're SERIOUSLY-ILL, instead of this Step 16a. Otherwise, please do your EXPERIMENTAL Treatment Session once per day (morning or evening) for a TOTAL EXPERIMENTAL Treatment Time per Session of 10 mins.. Perform your EXPERIMENTAL Treatments for 6 consecutive days, and then NONE on the 7th day. REPEAT the SAME Cycle for 2 MORE weeks and then DISCONTINUE for 1 week. If the DESIRED Results have NOT been achieved, during the 1 week break, then REPEAT your EXPERIMENTAL Treatments, and CONTINUE them until the DESIRED Results are achieved.
NOTE: If you're ABLE, you should INCREASE your TOTAL EXPERIMENTAL Treatment Time on the 2nd Week to 20 mins. and on the 3rd week to 30 mins..
16b) Please do your EXPERIMENTAL Treatment Session, 2 times per day (morning and evening), if you're SERIOUSLY-ILL, starting out with 1-3 mins. per Frequency, and NOT EXCEEDING 10 minutes of TOTAL EXPERIMENTAL Treatment Time each Session. Do your EXPERIMENTAL Treatment Session EVERY 3 days (i.e. - Day Nos. 1, 4, 7, 10, 13, 16, 19 & 22) and then DISCONTINUE for 7 days. If you have NOT achieved the DESIRED Results, during the 1 week BREAK, then REPEAT your EXPERIMENTAL Treatments, and CONTINUE them, until the DESIRED Results are ACHIEVED.
NOTE: If you're ABLE, you should INCREASE your TOTAL EXPERIMENTAL Treatment Time per day on the 2nd Week to 30 mins..
17) Your EXPERIMENTAL Treatment Sessions should ONLY be STOPPED TEMPORARILY, if UNDUE DISCOMFORT is experienced during the Detoxification Process. If DETOXIFICATION does become TOO UNCOMFORTABLE or ANY ADVERSE Reactions occur, then please DISCONTINUE &/or MODIFY your EXPERIMENTAL Treatment Sessions, and if possible, please try to continue your EXPERIMENTAL Treatment Sessions. Please drink 8 oz. of Activated Charcoal Slurry 6-8 times/day to ADSORB DEAD Pathogens and TOXIC Organic &/or Inorganic Chemicals, that are being DETOXED from your Body. Please see Final Comments (below) also.
18) Please see Setup Diagram and Setup Diagram 2, before proceeding with Operating Steps 1 - 12.
NOTE: Even though these Setup Diagrams pertain to the BIOENERAY XL2™ Plasma Tube Unit (version 2), you can use them to basically set up your BIOENERAY XL3™ Plasma Tube Unit (version 2).
1) Plug your Power Supply #2 (cream-colored metal cabinet) into a HIGH-Quality Surge Suppressor Power Strip (i.e. - TrippLite - 800+ joules), thus ensuring its PROTECTION.
NOTE: Your BioTec2000 Frequency Instrument (and NO other Electrical/Electronic Device) should also be plugged into this HIGH Quality Surge Suppressor Power Strip. In fact, you should NOT plug ANY other Electrical/Electronic Device into the SAME A.C. Wall Outlet into which this Surge Suppressor Power Strip has been plugged.
2) Push each of the 2 - Brass Ignition Coil Connectors (+ rubber boots), which are connected to ends of the 7mm Ignition Coil Wires (thick & black), that have their other ends connected to the ends of your Glass Plasma Tube, into the High Voltage Towers of each of the Ignition Coils, which protrude through holes in the Rear Panel of your Power Supply #1 (blue/white metal cabinet), turning each of the 2 - Brass Ignition Coil Connectors clockwise and counterclockwise a few times until each of them are SOLIDLY seated.
NOTE: You MUST SOLIDLY connect the 2 - Ignition Coils in your Power Supply #1 (blue/white metal cabinet) to your Glass Plasma Tube, because ARCING, which you may be ABLE to hear, will be generated! ARCING radiates an Interference Signal, which can play HAVOC with and possibly DESTROY Digital Displays and Chips, which are within 3-5 feet of your Power Supply #1 (blue/white metal cabinet)! And so it would be a GOOD Idea anyhow to keep ANY Devices with Digital Displays and Chips at least 6 ft. away from your Power Supply #1 (blue/white metal cabinet).
You MUST connect your Glass Plasma Tube to your Power Supply #1 (blue/white metal cabinet), whenever you're operating them or EXCESSIVE High Voltages will be generated, which will ARC-OVER to Ground, causing Components to BURN-OUT, including the Power Transistor and the 2 - SPECIAL Ignition Coils in your Power Supply #1 (blue/white metal cabinet)!
You MUST NOT probe around the Ignition Coils or your Glass Plasma Tube with a Metal Object, such as a Screwdriver, whenever your BIOENERAY XL3™ Plasma Tube Unit (version 2) is operating, because you will receive a SEVERE SHOCK and a BURN!!
3) Place the front length of your Glass Plasma Tube, so that it is PARALLEL with your Body and 1-3 ft. away from the Area of your Body to be treated. Please be seated, so that your Glass Plasma Tube, being pulsed at the SPECIFIC Pulse Frequency or Frequencies, will radiate a Pulsed High Voltage Electrical Field towards and through your Body.
NOTE: Please see Setup Diagram 2. You don't have to be EXACTLY pointed towards the Area, because the Energy Output from your Glass Plasma Tube RADIATES towards your Body. Children and Animals should be 3-6 ft. away from your Glass Plasma Tube.
4) Flip the Toggle Switch to the "A-C" Position, if you want to use the Anti-Parallel Connected Dual-Ignition Coil Circuit to obtain HIGH Electro-Magnetic Fields for Pathogen Destruction in the Body. Or flip the Toggle Switch to the "S-C" Position, if you want to use the UNIQUE Series-Connected Dual-Ignition Coil Circuit to obtain HIGH Electric (electrostatic) Fields for Energy Level Amplification and/or Detoxification in the Body.
NOTE: DO NOT flip this Toggle Switch, whenever your Power Supply #2 (cream-colored metal cabinet) is "ON", because the Power Transistor can BURN-OUT! Rock the Power Switch to the "OFF" Position FIRST and then after you've flipped the Toggle Switch, you can rock the Power Switch to the "ON" Position again.
Please observe, that whenever you're operating in the "S-C" Mode, the Plasma Glow, which can be seen inside of the Glass Plasma Tube, will NOT be as BRIGHT as what can be seen, whenever you're operating in the "A-C" Mode.
5) Connect the BNC Plug, which is connected to one end of the 6 ft. Cable #1, which has the 2 - flexible Banana Plugs (or BNC Plug) connected to the other end, to the BNC Jack, which is labeled "PULSE IN", and is mounted on the Front Panel (right side) of your Power Supply #1 (blue/white metal cabinet).
NOTE: Match the 2 - Slots on the BNC Plug with the 2 - Pins on the BNC Jack and push the BNC Plug 'IN' and then turn the BNC Plug clockwise 1/4 turn.
6) Push the Black Banana Plug, which is connected to the other end of the 6 ft. Cable #1 into the Left Blue Banana Jack, which is on the Front Panel of your BioTec2000 Frequency Instrument. Then push the Red Banana Plug into the Right Blue Banana Jack, which is also on the Front Panel of your BioTec2000 Frequency Instrument.
NOTE: DO NOT plug these Banana Plugs (Red & Black) into the Red & Black Banana Jacks on your Power Supply #2 (cream-colored metal cabinet), because the 12 Volts will BURN-OUT the Power Transistor in your Power Supply #1 (blue/white metal cabinet)!!
7) Push the 2 - stacking-type Gold Banana Plugs (Red & Black) into the corresponding color (Red & Black) Banana Jacks on the Front Panel (left side) of your Power Supply #1 (blue/white metal cabinet).
8) Flip the POWER Switch on your BioTec2000 Frequency Instrument to the "ON" Position.
9) Adjust the Amplitude/Intensity on your BioTec2000 Frequency Instrument to the setting of '0', if it isn't ALREADY at that particular setting.
10) Program your BioTec2000 Frequency Instrument to generate the desired SPECIFIC Frequencies, after reading your BioTec2000 Frequency Instrument - Operating Instructions and START the Program on your BioTec2000 Frequency Instrument.
NOTE: You may also want and/or NEED to use each of the following Frequencies for 3-5 mins.: 64 (oxygen/ozone), 66.5 Hz. (universal pathogen), 528 Hz. followed by 15 Hz. (DNA repair), 727.5 Hz. (universal Rife) and 5,000 Hz. + 10,000 Hz. (cell regeneration).
11) Rock the Power Switch (bottom left side) on your Power Supply #2 (cream-colored metal cabinet) to turn its Power "ON". Turn the Coarse and/or Fine Voltage Control Knob until the Digital Display reads 15.0 (+/-.2) Volts. Touch and hold the 2 - stacking-type Gold Banana Plugs (Red & Black) together, which haven't yet been connected, and turn the Coarse and/or Fine Current Control Knob until the Digital Display reads 2.0 (+/-.1) Amps. in the S-C Mode (or 3.0 Amps. in the A-P Mode) and then STOP touching these 2 - stacking-type Gold Banana Plugs (Red & Black) together.
12) Push the other 2 - stacking-type Gold Banana Plugs (Red & Black), which are connected to one end of the 6 ft. Cable #2, that has 2 - stacking-type Banana Plugs connected to the other end, into the corresponding color (Red & Black) Banana Jacks, which are labeled "D.C. IN", and are mounted on the Front Panel (bottom left side) of your Power Supply #2 (cream-colored metal cabinet).
NOTE: DO NOT plug these 2 - stacking-type Gold Banana Plugs (Red & Black) into the 2 - Blue Banana Jacks on your BioTec2000 Frequency Instrument, because the 12 Volts will BURN-OUT its Output Transistors!!
Please observe that the 4" Cooling Fan on the top of your Power Supply #1 (blue/white metal cabinet) is running and that your Glass Plasma Tube is NOT yet glowing. If your Glass Plasma Tube is GLOWING and the Amplitude Control Knob is set at '0' on your BioTec2000 Frequency Instrument, then you've got a SHORTED Power Transistor in your Power Supply #1 (blue/white metal cabinet) and it will NEED to be replaced! Please email and/or call Tom Harrelson (614/237-8708).
13) Turn the Amplitude/Intensity Knob on your BioTec2000 Frequency Instrument SLOWLY clockwise from the Setting/Position of '0', until you reach a Setting/Position where you no longer perceptibly see a CHANGE in the Brightness of the Plasma Glow when you turn the Knob SLIGHTLY HIGHER.
NOTE: Whenever you operate your BioTec2000 Frequency Instrument at a DIFFERENT Frequency or Frequencies, you may want to CHANGE this Setting/Position SLIGHTLY, particularly whenever you observe NO or VERY LOW Plasma Glow, and especially whenever you're operating in the "S-C" Mode below 100 Hz.
When you've COMPLETED your EXPERIMENTAL Pulsed High Voltage Electrical Field Treatment, then please follow Operating Steps 14 - 17 sequentially and then please see "My Final Comments".
14) Rock the Power Switch on your Power Supply #2 (cream-colored metal cabinet) again to turn the Power "OFF".
15) Turn the Amplitude/Intensity Knob on your BioTec2000 Frequency Instrument counter-clockwise to a Setting of 0.
16) Flip the Power Switch on your BioTec2000 Frequency Instrument to the "OFF" Position.
17) Rock the Power Switch on the Power Strip to turn the Power "OFF" COMPLETELY to your BioTec2000 Frequency Instrument and to the Units, which are involved with your BIOENERAY XL3™ Plasma Tube Unit (version 2).
NOTE: Activated Charcoal Powder is AVAILABLE from Tom Harrelson/TOTAL HEALTH Associates/P.O. Box 9872/Columbus, OH 43209-00872/614-237-8708. | http://healingtools.tripod.com/BT_betxl3v2oper.html |
With its over 30 million collection items and its international reputation in research, the Museum für Naturkunde Berlin is one of the world’s most significant research museums in evolutionary and biodiversity research.
However, damage to the building during the Second World War and decades of operating on a shoestring led to a renovation backlog. As a consequence, many parts of the collections had to be housed in cramped spaces or unsuitable parts of the building. Utility systems were outdated, preservation conditions and fire safety inadequate.
The renovation of the museum building was conceived as a long-term project, divided into a sequence of steps to be completed, while research and exhibitions were ongoing.
Renovation measures began with the partial renovation of some exhibition areas in 2004-2007, followed by a first rebuilding phase, the reconstruction of the East Wing that had been destroyed in World War II. Its completion prepared the ground for the second building phase, which will be completed in the summer of 2018.
The opening of the new collection, work and exhibition facilities will take the Museum für Naturkunde a step closer to its aim to become an open, integrative research museum that stands up for nature.
This was a building assignment where – perhaps for the first time in a listed historical building of this scale – the latest insights into building and conservation research had to be reconciled with contemporary ideas of a “green museum”.
At the heart of the building project are optimum storage for the dry collections, improved logistics for collection management and the establishment of guest researcher work places. All of this will ensure the long-term preservation of the collections as research infrastructure, which will be available to our staff and about 600 guest researchers every year.
Another focus of the building project is a further opening-up of research collections. The public will be able to see or even access selective parts of the collections. Thus, spaces are created to encourage participation and dialogue between research, collection and society, and crossing boundaries between exhibition and collection. The area that houses exhibitions and visitor facilities will be enlarged to manage increasing visitor numbers, even at peak times and during events.
However, the completion of the second building phase does not mean that the whole concept has been completed, as only 38 percent of the overall building has been renovated to fulfil modern requirements. Planning for the third building phase is already underway.
Architectural history: A building concept developed 140 years ago setting the course for a natural history museum in the 21st century. | https://www.museumfuernaturkunde.berlin/en/uber-uns/building |
These people can act as deterrents to an organization's growth and development. Those who hold back to wait and see, pretend to comply, or disengage completely are in the non-active state. This too can have harmful effects on advancing your strategy. As a leader, it is vital that you understand the concept of the Choice Design; then you can assess where workers are and help them proactively commit their energy to the company and themselves.
Individuals choose whether or not to be engaged. Before you can really help, it's important to understand your employees' perspective in a situation of great change.
By working through the 4 Levels of Leadership, you can begin the process of increasing your own leadership effectiveness and create a culture that works best for your organization. Everything starts with you. You need to lead yourself before you can lead others. It is about being clear on your own sense of purpose and why you chose to be a leader.
Many leaders concentrate on improving their one-to-one and one-to-group skills. Today's leader needs to understand what it takes to create a culture that enables the full engagement of all employees. Leading a work culture is about leaders understanding their responsibility to engage others to commit energy to the organization.
To start, here are three actions you can take today to become the leader you want to be: Reflect on your own leadership purpose and values. Be a role model for the beliefs, practices, customs, and behaviors you want all employees to show in their interactions with one another and in their daily work.
The culture of the organization will take shape whether you influence it or not. And as you move your organization through these new, exciting times, are you willing to run the risk that your employees' behavior is less than what you need it to be? Are your leaders actively involved in developing a culture of engagement? If not, they need to be, with you setting the example.
On the other hand, research by Towers Perrin shows that companies with engaged workers have 6% higher net profit margins. Given this figure, it is no longer a matter of choice for a business. Companies that want to grow and survive need to pay attention to digital engagement. Employees today want a collaborative, flexible, positive, and inclusive workplace.
Every employee should be able to give opinions and take part in essential functions, as this increases engagement. According to a survey conducted by Jane McConnell of 300 managers across 27 countries, people feel more connected and actively engage when their opinions matter. Enable others - Employees should be enabled to perform at their best by devoting time and attention to their development.
Foster innovation and agility - Creative and innovative ways to solve challenges should always be welcome. Employees should be given opportunities regardless of their hierarchy level, because great ideas can come from anywhere. Leaders should work together with their teams toward a shared vision and ensure engagement by encouraging and motivating their employees on the job.
Employees today have higher expectations of a technology-inspired work environment, and the use of digital tools would make their work simpler and more interesting and keep them more engaged. Leaders should actively engage in helping employees succeed and ensure that they are able to perform their roles and duties in alignment with those of the organization.
At the most basic level, it's generally agreed that employee engagement is vital to business success. But many companies forget that engagement actually lies with the leaders in the business, and that those leaders need to be helped to genuinely understand how to get their people inspired and energized to achieve common goals.
These companies might feel they are giving their people every chance. - Dale Carnegie. Employee engagement means different things to different people, but ultimately it's about the relationship between the individual and the organization they work for.
As a result, they typically put more effort in, go above and beyond what's expected of them, and truly care about the success of the business. They are prepared to put in discretionary effort to achieve the goals of the organization. At the end of the day, most employees will be led solely by their direct managers, not by the executive board or high-level managers.
As the saying goes, people leave managers, not companies. Organizations have a responsibility to make sure their leaders know what skills they need to get their employees engaged, and to give them the tools and knowledge to make it happen. The way to keep employees engaged is to lead them through a shared purpose and vision: a shared way of doing things.
Engaged employees want to come to work and consistently give 110% effort, so attendance is high, they are rarely off sick, and they produce above-average standards of performance. Sometimes this happens by itself, which is a dream, and you know when you are there, because everyone understands it is special while it's happening.
Imagine if you will: "You are a leader. You take your team to the top of a high building, a skyscraper. It has a flat roof, it is dark, there is no barrier round the edge of the roof, and the team members have roller skates on. You ask them to skate around, but they huddle together in the center, not daring to go far; it is very frightening for them.
- Derek Biddle. If you fail to shine the light (which is your vision), fail to put up the right railings (which are your boundaries), or fail to spot when some members of your team are skating exactly the way you want without encouraging them, that's when things go wrong.
If you put a fence round the area close to you, or even quite a way away, they have a sense of boundary and security. If you tell them they can play anywhere within the fence, they will use all the available space and may even try to climb over the fence, just to see what happens and test the boundary. | https://develop.leadershipequip.com/page/improve-employee-engagement-strong-leadership-st-george-ut-HqcLtAAj8Pdo |
Tucked away in the heart of Hamilton, St Cecilia's small class sizes and extensive outdoor play areas provide children with a unique learning environment that encourages creativity and exploration and rewards curiosity.
Located in the heritage-listed St Cecilia's school building, our families enjoy the best of both new facilities and old world charm. Our high ceilings, French doors and polished timber floors provide a comfortable, home-like space where children feel relaxed and at ease. Our spacious wrap-around verandas, with natural light and fresh air, are popular with children looking for a tranquil setting to read, paint, draw or just 'be'. Our big backyard, with plenty of open space, creates a sense of freedom and adventure which naturally encourages your child to explore and create something new every day.
With capacity for just 49 children, our staff can really get to know your child and what is important to them so they can adjust programs based on how they socialise and learn, and how we can best help them grow.
At St Cecilia's we care for children from six weeks and offer a Queensland Government approved kindergarten program for children in the year before prep. We are conveniently located opposite Hamilton State School with easy access to Kingsford Smith Drive for families who commute between the CBD and the inner north suburbs of Hamilton, Ascot, Hendra, Nundah, Eagle Farm and the Portside Wharf and Northshore Hamilton precincts. We supply all meals, wipes, sunscreen and cot linen. All you need to bring is nappies, a hat and a spare change of clothes. We offer nutritious, age-specific meals for morning tea, lunch and afternoon tea. We are supportive of special dietary requirements and encourage families to discuss their child's needs with us. | https://brisbanecatholic.org.au/support/centacare/st-cecilias-long-day-care-kindergarten-hamilton/ |
New Sanctuary reflects congregation’s appreciation of the arts and an expansive vision for the future.
Johnson Air worked with Quiring General to provide design build HVAC for College Community Church Mennonite Brethren located in Clovis, CA. The project included a new 8,300 square foot sanctuary building, and remodel of existing administration facilities. The updated facility contains sanctuary seating for 255, polished concrete floors, large windows on each side of the sanctuary to allow plenty of natural light to fill the space, a nursery, a narthex, and office space.
The new facility features windows and abundant natural light for a sense of transparency and openness, high ceilings for an expansive feel, and welcoming areas like the new courtyard. In many ways, the new space contrasts with the feel of the old space, which was small, round and windowless and had a low ceiling.
Machines Like Me takes place in an alternative 1980s London. Charlie, drifting through life and dodging full-time employment, is in love with Miranda, a bright student who lives with a terrible secret. When Charlie comes into money, he buys Adam, one of the first synthetic humans and—with Miranda's help—he designs Adam's personality. The near-perfect human that emerges is beautiful, strong, and clever. Before long, a love triangle forms, and these three beings confront a profound moral dilemma.
In his subversive new novel, Ian McEwan asks whether a machine can understand the human heart—or whether we are the ones who lack understanding.
About the Author
IAN McEWAN is the bestselling author of seventeen books, including the novels Nutshell; The Children Act; Sweet Tooth; Solar, winner of the Bollinger Everyman Wodehouse Prize; On Chesil Beach; Saturday; Atonement, winner of the National Book Critics Circle Award and the W. H. Smith Literary Award; The Comfort of Strangers and Black Dogs, both short-listed for the Booker Prize; Amsterdam, winner of the Booker Prize; and The Child in Time, winner of the Whitbread Award; as well as the story collections First Love, Last Rites, winner of the Somerset Maugham Award, and In Between the Sheets.
Praise For…
“A sharply intelligent novel of ideas. McEwan’s writing about the creation of a robot’s personality allows him to speculate on the nature of personality, and thus humanity, in general . . . Beguiling.”
—Dwight Garner, The New York Times
"[A] sharp, unsettling read . . . about love, family, jealousy and deceit. Ultimately, it asks a surprisingly mournful question: If we built a machine that could look into our hearts, could we really expect it to like what it sees?"
—Jeff Giles, The New York Times Book Review
“[McEwan] is not only one of the most elegant writers alive, he is one of the most astute at crafting moral dilemmas within the drama of everyday life. Half a century ago, Philip K. Dick asked, ‘Do Androids Dream of Electric Sheep?,’ and now McEwan is sure those androids are pulling the wool over our eyes. McEwan’s special contribution is not to articulate the challenge of robots but to cleverly embed that challenge in the lives of two people trying to find a way to exist with purpose. That human drama makes Machines Like Me strikingly relevant even though it’s set in a world that never happened almost 40 years ago.”
—Ron Charles, The Washington Post
“Witty and humane . . . a retrofuturist family drama that doubles as a cautionary fable about artificial intelligence, consent, and justice.”
—Julian Lucas, The New Yorker
“[A] densely allusive, mind-bending novel of ideas that plays to our acute sense of foreboding about where technology is leading us. In Machines Like Me, British literary fiction master Ian McEwan posits an alternative history . . . [it has] the feel of an intricate literary machine situated squarely on the fault lines of contemporary debates about technology.”
—LA Times
“A thought-provoking, well-oiled literary machine . . . [It] manages to flesh out—literally and grippingly—questions about what constitutes a person, and the troubling future of humans if the smart machines we create can overtake us."
—Heller McAlpin, NPR
“A searching, sharply intelligent, and often deeply discomfiting pass through the Black Mirror looking glass—and all the promise and peril of machine dreams.”
—Leah Greenblatt, Entertainment Weekly
“A ruminative mix of science fiction, romance and alternate history set in 1980s London . . . thought-provoking . . . [A] cautionary tale based on McEwan’s sharp observations of our flawed human nature.”
—Denver Post
“Enormous fun . . . McEwan has engaged with science before [and] his world of artificial intelligence is chilly, clever and utterly credible. This bold and brilliant novel tells a consistently compelling tale but it also provides regular food for thought regarding who we are, what we feel, what we construct, and what we might become.”
–Minneapolis Star Tribune
“Reminds you of [McEwan’s] mastery of the underrated craft of storytelling. The narrative is propulsive, thanks to our uncertainties about the characters’ motives, the turning points that suddenly reconfigure our understanding of the plot, and the figure of Adam, whose ambiguous energy is both mysteriously human and mysteriously not . . . Morally complex and very disturbing, animated by a spirit of sinister and intelligent mischief that feels unique to its author.”
—Marcel Theroux, The Guardian
"Thought provoking . . . consistently surprising . . . an intriguing novel about humans, machines, and what constitutes a self."
—Publishers Weekly
"McEwan brings humor and considerable ethical rumination to a cautionary tale about artificial intelligence." | https://www.elmstreetbooks.com/book/9780593152812 |
Selecting a point of view for your stories is the first step in finding your “voice” in writing.
When you begin to write a story, whether a short story or a novel, you first need to know from which point of view (POV) the story will be told. You can always change this once the story is written or just doesn’t work out the way you had intended, but it’s best to plan from the beginning.
You cannot successfully write a story unless you’ve chosen your point of view.
1st Person POV – The story is told through the mind of one character. 1st Person is also used when the author is telling a story or nonfiction experience from his or her own POV. When writing this way, what unfolds in the telling can only be what the point of view character perceives. The author cannot provide a point of view from another character’s mind.
2nd Person POV – The writer speaks directly to another character using “you.” 2nd Person is the least favored and most difficult point of view to use in fiction. The reader then becomes the protagonist, the hero or heroine. Joyce Carol Oates writes in 2nd Person.
3rd Person POV – Stories are usually written through the main character’s POV. Use 3rd Person to escape the tightness of 1st and 2nd Person in a story. 3rd Person can be broken down into varying styles of points of view. Here are three:
• 3rd Person Limited – This means that the entire story is written from the main character’s POV and everything is told in past tense. The reader gets to know only what the main POV character knows. I find this stimulating because it can hide the obvious and keep the climax a secret till the riveting ending. This is the POV that is easiest to read and is readily accepted by publishers.
• 3rd Person Omniscient – The narrator takes an all-encompassing view of the story action. Many points of view can be utilized. This can be an intricate way to write because so much detail needs to be managed that it may over-complicate the story. A poorly written omniscient story may inadvertently give away the ending, thereby deflating a reader’s enjoyment. A well-written story in this POV is And Then There Were None by Agatha Christie.
• 3rd Person Multiple – The story is told from several characters’ points of view. This can heighten drama and action if you succeed at writing from multiple characters’ points of view. Tony Hillerman’s Coyote Waits is a perfect example here.
No set rule for points of view applies when writing. A writer usually sticks to the POV that feels comfortable.
If you are a beginning writer, try writing several paragraphs, including dialogue, from each POV. You will know immediately what feels right for your way of storytelling.
I suggest you stick with one character’s POV to begin with. Even successful writers risk giving readers whiplash when pinging back and forth between points of view.
Nora Roberts head-hops but does it with such skill the reader barely notices the jumps.
Once you have established your favored POV, get busy writing your story. Your “voice” will develop as you write. “Voice” is your storytelling ability; it identifies your style.
Please visit Mary Deal’s website for more wonderful articles like this one: Write Any Genre. | http://mikeangley.com/2010/08/choosing-a-point-of-view-important-advice-from-mary-deal/ |
Another point which may strengthen the case against the “professional managerial elite” is that a fancy university degree does not buy a person a brain or a fully functional mind. In most cases, a fancy university degree may condition a person to become servile and accustomed to taking orders rather than thinking critically and independently. And despite the reverence for fancy university degrees, when one compares China’s seven-member “Standing Committee” of its Politburo to the principal members of America’s “National Security Council” (NSC), the difference in brainpower and knowledge between the two groups is essentially the difference between night and day.
In a sense, the physical and mental health problems in America can serve as a mirror for an even deeper social reality, which is that most Americans are sick and tired and are fed up with the fearmongering and terror which emanates from both Trump and the establishment. The main difference between Trump and the establishment is that the latter exports the fearmongering and terror overseas, whereas the former brings it all home. Which is worse is hard to determine, given that external policy is the direct and primary cause of internal affairs. Serendipitously and in a synchronistic way, I sat next to a Hispanic man who was accompanying his mother during a flight to Miami this past February. Out of the blue, he proceeded to chat with me, and he gave me very basic but important advice. His advice was to always “keep things simple” in life and that the fact of the matter is that people are “sick and tired of war.”
And as mentioned before, a convergence and synchronicity of interests and viewpoints might be occurring in the global public sphere as a result of the “cloud” and internet annexing the physical space and physical world to a large extent. The mood and sentiment expressed by the man sitting next to me during a plane ride in Miami is now universal and widespread because a mood and sentiment is the most credible and legitimate indicator of broader social reality. In turn, general situations and trends are largely deciphered and inferred through samples and individual experiences. And as the Korean philosopher and scholar Byung-Chul Han wrote: “A mood is not a subjective state that rubs off on the objective world. It is the world.”
In turn, our collective and conscious mood and sentiments are a byproduct or result of subconscious factors which control and shape conscious reality. And arguably, everything which manifests in our conscious reality has long existed in the subconscious mind. Hence, all of us are unconsciously and subconsciously manifesting a pre-existing conscious reality which has long been situated in our subconscious mind. Experience and ‘Eidetic Memory’ are perhaps the predominant factors which determine the contents that are collected by the subconscious mind through the course of time and are then manifested into conscious reality. Hence, as Carl Jung wrote:
“My thesis…is as follows: In addition to our immediate consciousness, which is of a thoroughly personal nature and which we believe to be the only empirical psyche (even if we tack on the personal unconscious as an appendix), there exists a second psychic system of a collective, universal, and impersonal nature which is identical in all individuals. This collective unconscious does not develop individually but is inherited. It consists of pre-existent forms, the archetypes, which can only become conscious secondarily and which give definite form to certain psychic contents.”
Hence, the ontological turbulence of a postmodern world which is exacerbated by the influx of information and viewpoints as a result of the advancements and evolutions in globalization and technology is part of a collective ‘individuation’ process which in turn will determine our collective social reality in due time. | https://adam-azim.com/2022/09/03/the-case-against-the-professional-managerial-elite-part-four/ |
INTRODUCTION: Proximal femoral fractures are common in frail institutionalised older patients. No convincing evidence exists regarding the optimal treatment strategy for those with a limited pre-fracture life expectancy, underpinning the importance of shared decision-making (SDM). This study investigated healthcare providers' barriers to and facilitators of the implementation of SDM. METHODS: Dutch healthcare providers completed an adapted version of the Measurement Instrument for Determinants of Innovations questionnaire to identify barriers and facilitators. If ≥20% of participants responded with 'totally disagree/disagree', items were considered barriers and, if ≥80% responded with 'agree/totally agree', items were considered facilitators. RESULTS: A total of 271 healthcare providers participated. Five barriers and 23 facilitators were identified. Barriers included the time required to both prepare for and hold SDM conversations, in addition to the reflective period required to allow patients/relatives to make their final decision, and the number of parties required to ensure optimal SDM. Facilitators were related to patients' values, wishes and satisfaction, the importance of SDM for patients/relatives and the fact that SDM is not considered complex by healthcare providers, is considered to be part of routine care and is believed to be associated with positive patient outcomes. CONCLUSION: Awareness of identified facilitators and barriers is an important step in expanding the use of SDM. Implementation strategies should be aimed at managing time constraints. High-quality evidence on outcomes of non-operative and operative management can enhance implementation of SDM to address current concerns around the outcomes.
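The barrier/facilitator thresholds described in the methods amount to a simple classification rule. A minimal sketch of that rule is below; the function name and the exact Likert labels are illustrative assumptions, not taken from the paper:

```python
def classify_item(responses):
    """Classify a questionnaire item as 'barrier', 'facilitator' or neither.

    Rule from the abstract: >=20% 'totally disagree/disagree' -> barrier;
    >=80% 'agree/totally agree' -> facilitator.
    `responses` maps Likert labels (assumed names) to response counts.
    """
    total = sum(responses.values())
    disagree = responses.get("totally disagree", 0) + responses.get("disagree", 0)
    agree = responses.get("agree", 0) + responses.get("totally agree", 0)
    if disagree / total >= 0.20:
        return "barrier"
    if agree / total >= 0.80:
        return "facilitator"
    return "neither"
```

Note that the two thresholds cannot both be met for the same item, since the two response groups cannot jointly exceed 100% of respondents.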
Original language: English
Article number: afac174
Journal: Age and Ageing
Volume: 51
Issue number: 8
DOIs:
Publication status: Published - Aug 2022
Bibliographical note: Declaration of Sources of Funding: The Netherlands Organization for Health Research and Development (ZonMw; ref. no. 843,004,120) and Osteosynthesis and Trauma Care Foundation (ref. no. 2019-PJKP) funded this study but did not play a role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
IEEE Standards Association (IEEE SA) participated in the annual United Nation’s Multi-Stakeholder Forum on Science Technology and Innovation (STI) for the Sustainable Development Goals (SDGs) on 14-15 May 2019. The STI Forum provides a venue for facilitating interaction, matchmaking, and the establishment of networks between relevant stakeholders and multi-stakeholder partnerships in order to identify and examine technology needs and gaps with regard to scientific cooperation, innovation and capacity-building, and also in order to help facilitate development, transfer and dissemination of relevant technologies for the Sustainable Development Goals.
The STI Forum discussed science, technology, and innovation cooperation around thematic areas for the implementation of the sustainable development goals, congregating all relevant stakeholders to actively contribute in their area of expertise. The theme of the STI Forum 2019 was “STI for ensuring inclusiveness and equality, with a special focus on SDGs 4, 8, 10, 13 and 16”.
At the STI Forum, IEEE SA participated as a collaborator of a Special Event presented by the Permanent Mission of Finland to the United Nations, titled “Data Matters: Bridging the Digital Divide for Inclusive and Sustainable Development” focusing on fair and effective use of data as a key driver behind the implementation of each of the Sustainable Development Goals (SDGs). John C. Havens, the Executive Director of The IEEE Global Initiative on Ethics in Autonomous and Intelligent Systems, provided a keynote address where he addressed the three pillars of Ethically Aligned Design, First Edition: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. These pillars are:
In addition, the IEEE Power & Energy Society, along with ComEd, National Grid US, Quanta Technology, Southern California Edison and VELCO, hosted an event at the STI Forum titled “SDG 13 and Success Factors for Sustainable Electrical Energy Delivery” focusing on integrating renewable energy resources and energy storage, together with electrification of transportation and innovative approaches to electrifying off grid and near grid communities to assist in setting a path towards decarbonization to address climate change.
The interactive session, moderated by Damir Novosel, Quanta Technologies, brought together renowned experts from the power engineering community to discuss success factors for sustainable electrical energy delivery in the context of climate change that is negatively impacting nations and their citizens. The panelists included Shay Bahramirad, ComEd; Bill Chiu, Southern California Energy; Babak Enayati, National Grid US; and Chris Root, VELCO.
Learn more about IEEE SA Global Engagement program today.
Beyond Standards is dedicated to promoting technology standards and celebrating the contributions of the individuals and organizations across the globe who drive technology development. Beyond Standards is brought to you by the IEEE Standards Association, a leading consensus building organization within IEEE that nurtures, develops and advances global technologies. IEEE standards drive the functionality, capabilities and interoperability of a wide range of products and services that transform the way people live, work and communicate. With collaborative thought leaders in more than 160 countries, we promote innovation, enable the creation and expansion of international markets and help protect health and public safety. | https://beyondstandards.ieee.org/event/ieee-united-nations-science-technology-and-innovation-forum/ |
The prevalence of posttraumatic stress disorder and associated mental health problems among institutionalized orphans in Dar es salaam, Tanzania
Myovela, B.
URI:
http://hdl.handle.net/123456789/575
Date:
2012
Abstract:
Background: Orphanhood is becoming a more common experience for children in Tanzania, in part as a consequence of the AIDS pandemic, trauma and poverty. The number of orphans and the risk of psychopathology have been steadily increasing even in regions where the AIDS epidemic has stabilized. Institutional care for orphaned children is uncommon in sub-Saharan Africa and is seen as a last resort, primarily because orphanages are often seen as a source of unhealthy psychological development, and orphans' ability to survive and thrive as adults is significantly threatened if they are raised in an orphanage. Research in this area is minimal in Tanzania. The magnitude of PTSD and associated mental health problems among orphans in Dar es Salaam is unknown. Objectives: The aims of the study were to determine the prevalence of posttraumatic stress disorder (PTSD), child abuse, depressive symptoms and suicidal tendency among orphans in Dar es Salaam, and the associations between PTSD and socio-demographic characteristics, child abuse, depression and suicidality among orphans in Dar es Salaam. Methodology: A cross-sectional study was conducted among orphans aged 7 to 17 years from 15 orphanages in Dar es Salaam. Ethical clearance was sought from MUHAS and convenience sampling was applied to reach 350 eligible participants. A self-administered structured questionnaire was used for data collection on socio-demographic characteristics and child abuse events. Three standardized scales were used to collect measures for PTSD, depression and suicidality. Data were cleaned and analyzed with SPSS version 15 for Windows. Univariate and multivariate statistical analyses with significance level set at p < 0.005 were used. Results: Eighty-four participants (24%) met DSM-IV criteria for PTSD, 65.7% reported child abuse events, 78% depressive symptoms and 25.7% suicidal tendency. Findings also showed a strong association between PTSD symptoms, sexual abuse and suicidality.
Analysis indicated that being out of school (p < 0.002) and being a single orphan (p < 0.0009) were significantly associated with the risk of developing PTSD. Multinomial regression analysis revealed the predictors of PTSD among orphans to be sexual child abuse (AOR = 2.5, 95% CI 1.2 - 2.5, p < 0.012), suicidal tendency (AOR = 3.7, 95% CI 1.6 - 5.0, p < 0.001) and, marginally, physical child abuse (AOR = 1.81, 95% CI 0.8 - 60.8, p < 0.07). Conclusion: Orphanhood brings a host of mental health vulnerabilities, including PTSD. A cultural recognition of PTSD and its long-term negative consequences needs to be developed, along with interventions to address the vulnerabilities and risks for mental health problems among institutionalized orphans. Recommendation: Caregivers should be trained to recognize PTSD symptoms, child abuse, depressive symptoms and suicidal tendencies among orphans and refer them for early intervention. School enrolment should be considered compulsory for all institutionalized orphans.
The CGA-IGC's clinical and research focus is hereditary gastrointestinal cancer syndromes, including but not limited to:
- Familial Adenomatous Polyposis (FAP)
- MUTYH-Associated Polyposis (MAP)
- Polymerase Proofreading-Associated Polyposis (PPAP)
- Peutz-Jeghers Syndrome
- Juvenile Polyposis Syndrome
- PTEN Hamartoma Tumor Syndrome
- Hereditary Mixed Polyposis Syndrome
- Hereditary Non-Polyposis Colorectal Cancer (HNPCC)
- Lynch Syndrome
- Familial Colorectal Cancer Type X
- Hyperplastic Polyposis/Serrated Polyposis
ABOUT US
The Collaborative Group of the Americas on Inherited Colorectal Cancer (CGA) was established in 1995 to improve understanding of the basic science of inherited colorectal cancer and the clinical management of affected families. In 2018, the CGA-ICC moved to change their name to the Collaborative Group of the Americas on Inherited Gastrointestinal Cancer (CGA-IGC), to be more inclusive of inherited gastrointestinal cancers as a whole.
VISION STATEMENT
The vision of the CGA-IGC is to eliminate morbidity and early mortality of hereditary gastrointestinal cancers.
MISSION STATEMENT
The mission of the CGA-IGC is to advance science and clinical care of inherited gastrointestinal cancers through research and education as the leading authority in the Americas. Through this mission, the CGA-IGC offers the following: | https://www.cgaigc.com/about |
1. Background of the invention
1.1 Field of the invention
1.2 Description of Related Art
2. Summary of the invention
3. Brief description of the drawings
4. Detailed description of preferred embodiments
4.1 Transmitter principles
4.2 Receiver principles
4.3 A fast transform for time-frequency coding
4.3.1 A Generalized Fast Hadamard Transform
4.3.2 A Derivative Expression
4.4 Loop convergence analysis
4.5 Channel Estimation and Equalization
4.6 Performance evaluation
4.7 Digital implementation
4.7.1 Transmitter implementation
4.7.2 Receiver implementation
5. Conclusions
This invention relates to data communications, and more particularly is concerned with a new and improved method for transmitting and receiving complex symbols, based on multicode time and frequency spreading, as well as on a transmitter and a receiver for carrying out the method. The transmitter and receiver are suitable for use in the direct link of a CDMA architecture, for point-to-multi-point transmissions and in a point-to-point communication system.
The field of CDMA (code-division multiple-access) communications is concerned with multiplexing different symbol streams through the same channel by modulating each stream by a coded waveform whose bandwidth is higher than the symbol rate. The ratio of code rate to symbol rate of a stream is referred to as spreading factor (SF). For a given SF, variable-rate communications are obtained either by devoting a number of codes to the same symbol stream or by varying the informational content per symbol. In order to extract a symbol stream, the receiver has to be aware not only of which codes are used by the transmitter but also of the symbol timing and frequency offsets of its internal synchronization sources with respect to the received signal. At the receiver, these physical parameters have to be initially extracted from the observed signal (acquisition) and their estimates have to be continuously updated (tracking) in order to counteract residual jitters of symbol time and frequency. Many approaches are possible to the synchronization problem, depending on assumptions concerning transmitted signal, channel characteristics and noise/interference structure. In order to help the receiver in this task, the transmitter may forego a limited amount of its capacity by multiplexing data with pilot signals whose only purpose is to aid the receiver in acquiring/tracking the physical parameters. Actually, the MLE (Maximum Likelihood Estimator) of time and frequency offset in an AWGN (Additive White Gaussian Noise) channel for a known signal consists in maximizing the mean square absolute value of its time-frequency cross-correlation with the received signal. In this case the local accuracy of delay offset estimation increases with the bandwidth occupied by the processed signal, while the local accuracy in frequency offset estimation depends on the duration of the processed signal. Global accuracy (i.e. time-frequency gross errors related to local maxima) depends on the considered signal structure, or, more specifically, on the shape of its ambiguity function (refer to H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part III - Radar/Sonar Signal Processing and Gaussian Signals in Noise, Chap. 10, Wiley, New York, 1971).
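To make the MLE described above concrete, here is a minimal grid-search sketch in Python. The function name, the discrete delay/frequency grids and the noise-free test setup are illustrative assumptions, not the patent's implementation; the sketch simply maximizes the squared magnitude of the time-frequency cross-correlation of a known pilot with the received signal.

```python
import numpy as np

def ml_delay_freq(rx, pilot, delays, freqs):
    """Joint ML delay/frequency-offset estimate for a known signal in AWGN:
    maximize |<pilot * exp(j*2*pi*f*n), rx[d:d+N]>|^2 over a delay/frequency grid."""
    n = np.arange(len(pilot))
    best_d, best_f, best_m = 0, 0.0, -1.0
    for d in delays:
        seg = rx[d:d + len(pilot)]                     # candidate alignment
        for f in freqs:
            ref = pilot * np.exp(2j * np.pi * f * n)   # frequency-shifted replica
            m = abs(np.vdot(ref, seg)) ** 2            # squared cross-correlation
            if m > best_m:
                best_d, best_f, best_m = d, f, m
    return best_d, best_f

# Noise-free sanity check: a random QPSK pilot, delayed by 5 samples and
# shifted by 0.02 cycles/sample, is recovered exactly on the grid.
rng = np.random.default_rng(1)
pilot = np.exp(1j * np.pi / 2 * rng.integers(0, 4, 64))
rx = np.zeros(160, dtype=complex)
rx[5:69] = pilot * np.exp(2j * np.pi * 0.02 * np.arange(64))
d_hat, f_hat = ml_delay_freq(rx, pilot, range(32), [-0.04, -0.02, 0.0, 0.02, 0.04])
```

As the surrounding text notes, finer delay grids (wider bandwidth) sharpen the delay estimate, while longer pilots sharpen the frequency estimate.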
In the case of SS (Spread Spectrum) radio communications, the channel is usually modelled as a random, linear, frequency-selective and time-invariant with respect to symbol frame duration. Actually, because of non-idealities in the up/down conversion chains and of Doppler shifts, transmitted signals incur a time-variant channel with non-null Doppler band, and receivers require continuous frequency correction and tracking (usually at frame time). In a multipath channel the aforementioned time-frequency MLE (optimal for SS signals in AWGN channels) is no longer accurate as echoes can affect both global and local accuracy (echoes introduce new local maxima in the cross-correlation while their side-lobes alter the main-lobe shape and the global maximum position). As regards symbol detection, matched filter is no longer optimal, as echoes introduce self-interference and inter-symbol interference (ISI), thus degrading receiver noise margins. But, if simple SS communications can tolerate these effects, CDMA systems, and likewise multicode transmissions, may suffer severe performance degradation because of the increase of interference among signals spread by different codes. Actually, even in AWGN channels, multicode transmissions using an embedded pilot code suffer from performance degradation of time-frequency estimation caused by data code interference on time-frequency cross-correlations of the pilot because of non-perfect synchronization. Such inter-code interference (also affecting symbol detection) can be controlled, for a given choice of the code structure, by limiting the number of active codes. Multipath increases such interference and results in an additional reduction in the number of allowable codes for a given quality of the transmission (i.e. lower transmission efficiency).
It is a main object of the present invention to provide a method for efficiently spreading and despreading complex QAM (Quadrature Amplitude Modulation) symbols in multicode SS communications. The method provides codes for spreading complex symbols both in time and frequency and falls in the broad class of multi-carrier CDMA architectures (see, e.g., Z. Wang, G. B. Giannakis, "Wireless Multicarrier Communications," IEEE Signal Processing Magazine, pgs. 29-48, May 2000, and S. Hara, R. Prasad, "Overview of Multicarrier CDMA," IEEE Communication Magazine, pgs. 126-133, December 1997).
It is also an object of the invention to provide a transmitter architecture for effectively spreading in time and frequency complex QAM symbols.
It is also an object of the invention to provide a corresponding receiver architecture, which does not require pilot codes for time-frequency tracking and hence allows maximal transmission efficiency and which can guarantee effective time-frequency tracking also in frequency selective fading channels.
It is a further object of the invention to provide a low complexity implementation of the quasi-optimal MLE for time-frequency tracking, which, exploiting the code structure, requires the processing of only a reduced number of despread data.
It is another object of the invention to provide a receiver architecture which also takes advantage from the equalization introduced to restore orthogonality among data codes for further improvement of the time-frequency tracking.
The invention achieves the above and other objects and advantages, such as will appear from the following disclosure, by a method for spreading and despreading complex QAM symbols on time-frequency codes having the features set out in claim 1.
The invention also provides a transmitter having the features recited in claim 6 and a receiver having the features recited in claim 10.
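Section 4.3.1 refers to a Generalized Fast Hadamard Transform. As background, the plain radix-2 fast Walsh-Hadamard transform already achieves the O(n log₂ n) butterfly structure that such generalizations build on; the sketch below is the textbook version, offered as an illustration and not as the patent's generalized transform.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform via O(n log2 n) butterflies.

    Input length must be a power of two. Each stage combines pairs
    (a, b) into (a + b, a - b) at doubling strides, exactly as an
    FFT would, but with +/-1 twiddles only.
    """
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):          # blocks of size 2h
            for j in range(i, i + h):         # butterflies within a block
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x
```

Since the Hadamard matrix satisfies H Hᵀ = nI, applying `fwht` twice returns the input scaled by n, which makes despreading as cheap as spreading.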
The present invention is a novel and improved method and architecture for transmitting and receiving complex symbols in a multicode CDMA system. A double spreading scheme in frequency and time is provided, where the signal is synthesized as a linear combination of K modulated orthogonal pulses (chips) in M adjacent time intervals. The spreading codes are represented as K × M matrices where the coefficient of place (k,l) modulates the pulse at frequency k and position l. A set {C_{u,v}} of KM orthogonal codes is presented, where the codes are in product form, i.e. C_{u,v} can be expressed as f_u g_v^T, where {f_u}, u ∈ {0,...,K-1}, and {g_v}, v ∈ {0,...,M-1}, are orthonormal frequency codes and orthonormal time codes, respectively. The system allows multicode communications where multiple codes modulating different complex symbols are jointly generated and detected. Both time and frequency codes are suitable generalizations of Walsh-Hadamard codes and are generated by an O(n log2 n) algorithm. The signal in the time domain is obtained by applying an inverse fast Fourier transform (IFFT) to the frequency dimension of the matrix sum of the modulated codes. Cyclic symbol extension and time windowing counteract multipath effects and reduce the spectral bandwidth, respectively. Multicode spreading and despreading have the same complexity. The digital receiver tracks symbol timing and frequency offset resorting to a DLL-FLL tuned to the multicode received signal, under the assumption of random data. The receiver employs, as loop control, the gradient of the log-likelihood (quasi-optimal MLE in AWGN). The FLL corrects the estimated frequency offset by modulating the received signal in the time domain. In the DLL, sub-sample time errors are compensated in the frequency domain, while coarse time errors are corrected by delaying the received signal by an integer number of samples. Owing to the code structure, the loop control signals can be implemented from the sole despread data, thus incurring only a mild additional computational cost.
This latter turns out to be proportional to MK log2 MK operations, where, noticeably, the proportionality factor decreases for increasing transmission efficiency.
Because of the time-frequency spreading, the DLL and FLL are robust against external interference and multipath; nevertheless, in order to guarantee good detection performance with higher order constellations and full load multicode transmissions, the proposed receiver provides an adaptive equalizer able to counteract the effects of frequency selective channels by improving both the accuracy of the time-frequency tracking and the orthogonality among data codes. Coherent detection is supported by recovering the reference phase and amplitude from a pilot symbol on a block-by-block basis. The superior time and frequency accuracy of the receiver allows higher order symbol constellations to be used. Even if the modulation format experimented is a ¾ 16 TCM, higher order constellations can be envisaged without problems.
Figure 1 is a diagrammatical representation of time-frequency codes used with the invention;
Figure 2 shows a DSP SW architecture implementing a transmitter according to the invention;
Figure 3 shows a DSP SW architecture implementing a receiver according to the invention;
Figure 4 is a diagram illustrating the working principles of the receiver loop;
Figure 5 is a diagram showing a linearized model of the receiver;
Figure 6 shows the envelope of the transmitted chip symbols.
In section 4.1, time-frequency spreading and multicode modulation are described. In section 4.2, the basic structure of a receiver is disclosed and the inventive code structure is derived. In section 4.3, a multicode fast spreading/despreading is introduced. In section 4.4, the DLL-FLL loop of the invention is analyzed in frequency-selective fading. In section 4.5, the channel estimation algorithm is depicted and analyzed. In section 4.6, the impact of residual time and frequency errors on SNR is analyzed, while in section 4.7 the digital processing for transmitter and receiver is illustrated detailing its complexity and computational load.
The low-pass complex envelope of the transmitted signals (and the useful part of the received signals, as will become apparent in the following) is expressed as a linear combination of the modulated pulses (or chips): where Δf_c is the frequency spacing and T > 1/Δf_c is the pulse repetition time. With reference to Figure 6, the normalized raised cosine window p(t) is given by: with roll-off β_ro, useful cyclic prefix T_pr, cyclic postfix T_po, and T_pr + T_po = β_ro T, i.e. β_ro = 1 - T^{-1} Δf_c^{-1}.
It is easy to check that, since p(t) comprises a rectangular window of length 1/Δf_c, if dot products are restricted to the intervals [lT, lT + 1/Δf_c], the set (1) is orthonormal. In the following we shall consider dot products and norms as above. Codes are modulated on K adjacent frequencies and spread on blocks of M successive pulses (time-frequency spreading), i.e. each block is expressed on the KM functions (1) given by k ∈ {0, ..., K-1} and l ∈ {0, ..., M-1}.
With these definitions, the low-pass equivalent of the multicode signal at the transmitter is synthesized as: where: and we denoted by [X]_{l,k} the element (l,k) of the matrix X.
With reference to Figure 1, the matrix C_{u,v} represents a time-frequency spreading code identified by the couple of indexes (u,v). The system provides up to KM orthogonal codes and the transmission is multicode, i.e. a set of codes modulating different complex symbols is transmitted during the same signaling time MT. In other words, each matrix C_{u,v} transports one complex symbol belonging to a different data stream. In (3), Ω represents the set of indexes of the codes selected for transmission. For the sake of simplicity we shall denote by b_{u,v}(n) the data symbol modulating the code C_{u,v} at the n-th signaling interval. Each signaling interval comprises a block of |Ω| complex symbols. From (3) we observe that the SF of the signal is KM/|Ω| and varies according to the number of complex symbols (i.e. time-frequency codes) transmitted per signaling interval. On the other hand, transmission efficiency, defined as the number of bits transmitted per signal dimension, is proportional to |Ω|/KM, where the proportionality factor depends on the number of bits carried per complex symbol. We want to be free to trade spreading for efficiency depending on the channel conditions; maximum transmission efficiency thus occurs when maximizing |Ω|/KM.
The discrete representation of s(t) in the n-th signaling interval in terms of the coefficients of the orthonormal set (1), using that C_{u,v} = f_u g_v^T (see Figure 1), can be cast in the form: S(n) = F B(n) G^T, where S(n) is a K × M coefficient matrix, F ≡ [f_0, ..., f_{K-1}] and G ≡ [g_0, ..., g_{M-1}] are the code matrices of sizes K × K and M × M, respectively, and B(n) is a K × M (sparse) matrix whose |Ω| non-zero elements are the complex symbols b_{u,v}(n). The orthonormality of the codes implies that F = {f_{i,j}} and G = {g_{i,j}} are unitary, i.e. F^H F = F F^H = I and G^H G = G G^H = I.
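As a quick numerical sanity check (an illustrative sketch, not the patented implementation), the product-form spreading and its despreading can be exercised with any unitary stand-ins for the code matrices F and G; here normalized DFT matrices play that role, and the sizes and active code set are arbitrary choices:

```python
import numpy as np

K, M = 8, 4  # frequency codes x time codes (illustrative sizes)

# Any unitary matrices can stand in for the generalized Walsh-Hadamard
# code matrices F (K x K) and G (M x M); normalized DFT matrices here.
F = np.fft.fft(np.eye(K)) / np.sqrt(K)
G = np.fft.fft(np.eye(M)) / np.sqrt(M)

# Sparse symbol matrix B(n): |Omega| non-zero QPSK symbols.
rng = np.random.default_rng(0)
B = np.zeros((K, M), dtype=complex)
omega = [(0, 1), (2, 3), (5, 0)]           # indexes (u, v) of active codes
for (u, v) in omega:
    B[u, v] = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

# Multicode spreading in product form: S = F B G^T.
S = F @ B @ G.T

# Despreading: unitarity gives back B exactly, B = F^H S G^*.
B_hat = F.conj().T @ S @ G.conj()
assert np.allclose(B_hat, B)
```

The despreading has the same complexity as the spreading, since both are a pair of matrix products by unitary (fast-transformable) matrices.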
The multicode received signal from the n-th symbol block can be expressed in the form: where h(t) is the low-pass equivalent channel impulse response (CIR), the * operator denotes convolution, f_0 is the frequency offset between the up/down conversion chains of transmitter and receiver, τ_0 is the first ray delay and w(t) is an additive complex thermal noise, white in the band of interest and of PSD 2N_0.
The processing principles of the loop for delay and frequency tracking and symbol detection are illustrated below, while details of the digital implementation are deferred to the next sections. The time-frequency tracking is joint, i.e. the DLL and FLL cooperate in order to locate the best frequency and delay estimate, provided that their starting points are close enough (in a sense that will be specified hereinafter) to the true values. The loop turns out to be a non-linear vector-state system whose state is the couple of the current estimates of frequency and delay.
The proposed receiver consists of three main digital sections: the first performs time and frequency corrections based on the current time-frequency estimate; the second performs channel estimation and equalization; the third consists of a symbol detector and a time-frequency discriminator aimed at producing the update of the current state. The channel equalizer and the DLL-FLL do not adapt their parameters at the same time but alternately.
In order to fix the principle behind the definition of the code structure, we focus our analysis on frequency non-selective channels, relying, if needed, on the channel equalization module. So eq. (5) can be expressed as: where h represents flat fading.
In accordance with the ML criterion, the time-frequency estimates maximize the log-likelihood function of the received signal. Assuming the transmitted symbols statistically independent and (approximately) gaussian, owing to the orthogonality of the codes, the log-likelihood is expressed as the sum of the absolute squared values of the time-frequency cross-correlations between the received signal and each transmitted code. We have thus: where τ̂ and f̂ represent the current estimates of delay and frequency respectively, Δτ = τ_0 - τ̂ and Δf = f_0 - f̂, Ω is the index set of the transmitted codes and where: χ_{u,v}(Δf, Δτ) = ∫_{Ω_n} r(t + τ̂) s*_{u,v}(t - nMT) e^{-j2πf̂t} dt.
In order to exploit orthogonality among base functions (1), we restrict the correlations to the multi-interval (strict optimality is traded off for a significant reduction in computational cost, time guard and orthogonality in the use of FFT at the receiver):
The maximization of eq. (6) can be pursued by a loop controlled by the gradient of the goal function. The loop equations can be cast as: Δτ(n+1) = Δτ(n) + α ∂λ(Δf(n), Δτ(n))/∂τ, Δf(n+1) = Δf(n) + β ∂λ(Δf(n), Δτ(n))/∂f.
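The loop updates above are a plain gradient ascent on the goal function; a minimal sketch on a toy concave surrogate (an assumption standing in for the true log-likelihood λ, whose gradient the receiver derives from the despread data) shows the DLL-FLL state converging to the true offsets:

```python
import numpy as np

# Toy goal function standing in for lambda(f, tau): a smooth peak at the
# true offsets (f0, tau0). The real receiver uses the cross-correlation
# energies of the log-likelihood; this is only an illustration.
f0, tau0 = 0.3, -0.2

def grad(f, tau):
    # Analytic gradient of -(f - f0)^2 - (tau - tau0)^2.
    return -2.0 * (f - f0), -2.0 * (tau - tau0)

alpha, beta = 0.1, 0.1       # loop gains (cf. the stability bounds derived later)
f_hat, tau_hat = 0.0, 0.0    # starting point close enough to the true values
for _ in range(200):
    gf, gt = grad(f_hat, tau_hat)
    tau_hat += alpha * gt    # DLL update
    f_hat += beta * gf       # FLL update

assert abs(f_hat - f0) < 1e-6 and abs(tau_hat - tau0) < 1e-6
```

With these gains each error contracts by the factor (1 - 2α) per iteration, illustrating the speed-of-convergence/variance trade-off discussed later.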
Assuming |Δτ| < min {T_pr, T_po} and |Δf| << Δf_c, it is easy to verify that:
Under these conditions, by substituting the expressions of r(t) and s(t) in (6), we get: |χ_{u,v}(Δf, Δτ)|^2 = |f_u^H R(n) g_v*|^2, where [R(n)]_{k,l} represents the projection of r(t + τ̂) e^{-j2πf̂t} on ψ_{k,l}(t - nMT). By direct substitution, it is easy to verify that: where W(n) is a K × M noise matrix of i.i.d. complex gaussian random variables with zero mean and variance 2N_0.
Through simple passages, the partial derivative of the goal function can be rearranged as: where we defined: and
The ensemble-average operator in (6) has been omitted in (9) as a time average is implicitly performed by (8) (stochastic gradient approach).
It is worth analyzing more closely the right-hand side of (9) when all the MK codes are transmitted. Under this specific condition, resorting to the sole orthogonality of F and G and, after some algebra, the derivatives can be rearranged as: where tr{·} denotes the trace of its argument.
Both R R^H and R^H R have real (non-negative) eigenvalues, while the two diagonal matrices in (10) are purely imaginary. The trace of their products, as a sum of imaginary eigenvalues, results imaginary as well, and the gradient identically null irrespective of (Δf, Δτ).
As a first consequence, in order for the receiver to be able to estimate time and frequency from (9), at least one code must be left unused in the transmission. It is worth noticing that, even if one does not resort to a pilot code, the maximum channel efficiency results limited to (MK - 1)/MK.
A second implication of (10) concerns the gradient implementation. Let us denote by Ω̄ the complement set of Ω, i.e. the index set of all codes not used in the transmission. As (10) is zero irrespective of R, for any number of used codes we have that:
As a result, we find that (9) can be also expressed as the opposite of the gradient of the log-likelihood computed with respect to the codes not transmitted. As a consequence, the receiver can freely choose which side of (11) to implement. In particular, when |Ω| < MK/2 the left-hand side is computationally more convenient, while, if |Ω| > MK/2, the right-hand side requires fewer operations.
In order to implement each term of the sum in (9), we recognize that the quantities f_u^H R g_v* represent b̂_{u,v}, the ML estimates of the data b_{u,v}. As the vector ḟ_u is generated by {f_k}, k ∈ U_u, we can express each term of (9) as:
Dually for time codes, if the vector ġ_v is expressed on {g_l}, l ∈ V_v, we have:
The smaller the sets U_u and V_v, the smaller the computational load required to implement the addenda of (9) from the estimated data. Herein we present a class of codes where ḟ_u and ġ_v can be expressed on 1 + log2 K and 1 + log2 M vectors respectively. This property, together with (11), substantially lessens the computational load required to generate the loop control signal in the proposed architecture.
F and G have to fulfill the same requirement, i.e. ḟ_u and ġ_v must depend only on a small number of codes different from f_u and g_v respectively. In the following we shall focus only on the definition of F, using the same class of codes both in time and frequency. Moreover, we still require a structure for F such that matrix-vector products can be computed with a low computational cost. Herein, we introduce a class of orthonormal codes generable by low complexity transforms (i.e. O(K log2 K)) and such that any ḟ_u is linearly dependent on only log2 K vectors different from f_u.
Hadamard matrices of size K are symmetric and, if suitably scaled, unitary. Due to their particular structure, it is possible to define a fast Hadamard transform (FHT) able to perform matrix-vector products at the cost of K log2 K additions (see e.g. Chen Anshi, Li Di, A research on fast Hadamard transform (FHT) digital systems, Computer, Communication, Control and Power Engineering, 1993. TENCON '93, pgs. 541-545, October 1993).
By generalizing Hadamard codes, we define a new class of orthogonal matrices provided with fast orthonormal transforms of the same order of computational complexity as the FHT. The degrees of freedom resulting in the code definition can be used in order to discriminate different transmissions in the same region or for other desired applications.
Let H_m = {h^m_{l,u}} denote a K × K orthonormal matrix of generalized codes, with K = 2^m, and let h^m_i be its i-th column.
The construction of H_m for any K (power of 2) is defined by the following recursive relations: with the additional clause
It can be shown that eq. (14) implies: where and where {u_i} and {l_i} represent the digits of the binary representations of u and l, i.e. we have and
Definition (14) also implies that H_m is orthonormal when, for any i ∈ {1,...,m}, we have:
In accordance with (14), the definition of H_m requires four complex parameters at each step of the recursion, but, because of (15), only four degrees of freedom remain available.
Let y_m = H_m x_m be the direct transform of x_m = [x^{0T}_{m-1}, x^{1T}_{m-1}]^T, and y^i_{m-1} = H_{m-1} x^i_{m-1} the direct transform of x^i_{m-1}.
By (14) we obtain the following recursive equation:
Equation (16) is the elementary step of the generalized fast Hadamard transform. We observe that the computation of a K = 2^m point transform is broken into the computation of two transforms on K/2 = 2^{m-1} points. If we call S_m the number of sums required for a transform, by (16) we get S_m = 2 S_{m-1} + 2^m, which after m iterations leads to S_m = 2^m log2 2^m. Similarly, if P_m is the number of (complex) products, as P_m = 2 P_{m-1} + 2·2^m, we have P_m = 2·2^m log2 2^m. As each step of the recursion introduces four degrees of freedom in the definition of the transform, the total number of parameters that specifies H_m results 4 log2 K.
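A minimal sketch of one divide-and-conquer recursion consistent with this description (the block pattern H_m = [[α H_{m-1}, β H_{m-1}], [γ H_{m-1}, -δ H_{m-1}]] is an assumption on the lost recursion (14); such a matrix is unitary whenever |α|² + |γ|² = |β|² + |δ|² = 1 and α*β = γ*δ, which both special cases below satisfy):

```python
import numpy as np

def gfht(x, params):
    """Generalized FHT sketch: O(K log2 K) butterflies,
    one coefficient quadruple (a, b, g, d) per recursion level."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    a, b, g, d = params[0]
    y0 = gfht(x[: n // 2], params[1:])   # two half-size transforms
    y1 = gfht(x[n // 2:], params[1:])
    return np.concatenate([a * y0 + b * y1, g * y0 - d * y1])

m = 3
K = 2 ** m
s = 1 / np.sqrt(2)

for params in ([(s, s, s, s)] * m,               # scaled Hadamard case
               [(s, s, 1j * s, 1j * s)] * m):    # the +/- j case (re/im swap)
    # Build the transform matrix column by column and check unitarity.
    H = np.column_stack([gfht(e, params) for e in np.eye(K)])
    assert np.allclose(H.conj().T @ H, np.eye(K))
```

Each of the m levels performs O(2^m) butterfly operations, matching the S_m = 2 S_{m-1} + 2^m count above.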
It is easy to extend those results to the inverse transform x_m = H^H_m y_m, using the recursive equation: where x^i_{m-1} = H^H_{m-1} y^i_{m-1} and y_m = [y^{0T}_{m-1}, y^{1T}_{m-1}]^T.
It is worth focusing on two special cases. When α_m = β_m = γ_m = δ_m = 1/√2, (14) defines the normalized Hadamard code, while (16) returns the elementary step of the (scaled) fast Hadamard transform. As the scaled FHT does not require complex products but only real sums and scalings, another non-trivial case worthy of attention is when α_m = β_m = 1/√2 and γ_m = δ_m = j/√2. Exploiting the fact that the product by ±j implies essentially only a swap between the real and imaginary parts of the data, the transform requires exactly the same computational load as the conventional (scaled) FHT.
It is easy to prove that ḣ^m_u depends only on a small number of h^m_v, v ≠ u. In particular, the number of codes different from h^m_u and linearly dependent on ḣ^m_u turns out to be exactly m = log2 K. A simple proof, herein omitted for brevity, leads to the following expression for ḣ^m_u: where and we defined
Property (19) provides a first order expression for the amount of linear dependence among different frequency-codes subject to timing errors, and, dually, the amount of linear dependence among different time-codes subject to frequency errors.
This section provides a simplified analysis of the tracking loop for the outlined receiver and criteria for the choice of its feedback parameters. We assume that the channel effects are completely cancelled by the equalization module. With these assumptions the system is equivalent to the non-linear loop in Figure 4, where the feedback LPFs have been omitted in order to simplify the analysis.
In case of frequency selective fading, the block of received signal prior to the channel equalizer, when |Δτ| < min {T_pr, T_po} and |Δf| << Δf_c, can be expressed as: where h_k is the projection of h(t) on ψ_k(t) and W(n) is a K × M noise matrix of i.i.d. complex gaussian random variables with zero mean and variance 2N_0. We note that h = [h_0, ..., h_{K-1}] is the channel frequency response (CFR) obtained as the normalized discrete Fourier transform (DFT) on K bins of the CIR h_t = [h_{t,0}, ..., h_{t,L-1}, 0, ..., 0], with h_{t,n} = h(t_n), t_n = n/(K Δf_c), n ∈ {0,..., L - 1} and L < K.
In case of perfect channel estimation, the ORC (Orthogonality Restoring Combiner) module performs the processing: where: and Z̃(n) = {z̃_{k,l}(n)} results in a K × M noise matrix of i.i.d. complex gaussian random variables with zero mean and autocorrelation E{z̃_{k,l}(n) z̃*_{k',l'}(n)} = 2N_0/|h_k|^2 δ(k - k') δ(l - l').
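A toy numerical sketch of the ORC principle (with the CFR assumed perfectly known and the noise omitted; DFT matrices again stand in for the code matrices): dividing each carrier by its CFR coefficient restores the orthogonality of the codes exactly, per the block model R(n) = diag{h} F B(n) G^T + W(n):

```python
import numpy as np

K, M = 8, 4
rng = np.random.default_rng(1)

F = np.fft.fft(np.eye(K)) / np.sqrt(K)    # unitary stand-in code matrices
G = np.fft.fft(np.eye(M)) / np.sqrt(M)

B = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))

# One complex CFR gain per carrier, kept away from zero for stability.
h = (0.5 + rng.random(K)) * np.exp(2j * np.pi * rng.random(K))

# Received block (noise omitted): R = diag{h} F B G^T.
R = np.diag(h) @ F @ B @ G.T

# ORC: per-carrier division by the CFR, then ordinary despreading.
Z = R / h[:, None]
B_hat = F.conj().T @ Z @ G.conj()
assert np.allclose(B_hat, B)
```

With noise included, the same division scales the noise on carrier k by 1/|h_k|, which is the noise-enhancement effect reflected in the 2N_0/|h_k|² autocorrelation above.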
We can now express the non-linear loop of the receiver in terms of the discrete signal quantities obtained by the channel equalizer.
By posing x(n) = [Δτ(n), Δf(n)]^T, (8) can be expressed in vectorial form as: x(n+1) = x(n) + D Φ(x(n)), where we defined D ≡ diag{α, β} and where Φ(x) is the gradient (9). The latter, expressed in terms of the equalized signal (22), becomes:
Denoting the useful signal in (22) by u, we can split Φ(x) into a term accounting for the data and a term comprising both signal and noise. By direct substitution, we find: where u can be expressed as:
With respect to the signal part of the gradient, resorting to the Maclaurin expansion of V for small values of the time-frequency errors, it can be arranged as: where and where we note that a_{2,1} = a_{1,2} (as mixed partial derivatives of the goal function).
With respect to the noise term, {V(n)} turns out to be almost a sequence of i.i.d. random variables. It can be shown that E{V(n)} = 0 and E{V(n) u} = 0. When Ω = K_1 × K_2, the variances of its components can be expressed in accordance with the closed form: where, for the sake of simplicity, we neglected the second term of (26), as dominated by the first at high SNR, and we assumed N_1 ≈ N_2. Moreover, we posed: where P_C is the orthogonal projector on the subspace spanned by the vectors whose indexes are in C, P_C̄ is the orthogonal projector on the complementary subspace, and I_N is the N × N identity matrix.
During convergence of the non-linear loop, in a neighborhood of the true time and frequency, equation (23) can be locally approximated by disregarding terms o(Δτ) + o(Δf). In this case, we can evaluate the system performance by analyzing the equation:
We note that linearizing the gradient with respect to Δf and Δτ separately does not lead, in general, to linear equations because of the presence of the quadratic terms in [a_1, a_2]^T.
First, we focus on conditions to have estimators asymptotically unbiased.
By averaging eq. (31), we get: E{x(n + 1)} = E{x(n)} + D diag{E{a_{1,1}}, E{a_{2,2}}} E{x(n)}, where we used the fact that E{a_{1,2}} = E{a_{2,1}} = E{a_1} = E{a_2} = 0 because of data symmetry and code structure. (Note that x(n) and (28) are statistically independent, as the former depends on past data, the latter on the current data.) By the same algebra, we have that: when the conditions: 0 < α < -2/E{a_{1,1}}, 0 < β < -2/E{a_{2,2}} are fulfilled, where
Hence, conditions (34) guarantee asymptotically unbiased estimators.
Second, we focus on the computation of the asymptotic variance of our estimators. We can proceed by squaring equations (31) and averaging. Using that E{a_{1,1} a_{1,2}} = E{a_{2,2} a_{2,1}} = 0 and that E{a_p u_p} = E{a_{p,q} u_p} = 0 because of data symmetry and code structure, we get:
As appears from (31) (and consequently from (36)), in the general case the transfer equations result coupled and, thus, the time and frequency estimators dependent. This implies that errors in time affect the frequency estimation and vice-versa. Actually, the condition Ω = K_1 × K_2 is sufficient to guarantee uncoupling and independence of the estimators; in fact, we have that Ω = K_1 × K_2 implies a_{1,2} = a_1 = a_2 = 0 deterministically. Under this assumption, the sums in the expressions of a_{1,2} and a_1 in (28) can be split. It is easy to recognize that both a_{1,2} and a_1 share the term:
Recalling that we can arrange the argument of the right-hand side as: that, by trivial index renaming, results opposite to its hermitian. Thus, equation (37) results identically null, as the hermitian part of an anti-hermitian matrix. Dual passages hold for a_2, which can be arranged as:
Now, by simple algebra, when n → ∞ in (36), it is easy to see that, when: the asymptotic estimator variances result:
As expected, comparing (33) with (39), we find that the asymptotic variance of estimators and speed of convergence are inversely proportional. In fact, high values of α and β imply fast convergence of (33) but also high asymptotic variances, and vice versa.
We observe that conditions (38) are in general stricter than (34). Interestingly, we note by (39) that the best asymptotic performance is achieved when a_{i,i} results almost deterministic, e.g. when a sole code is employed with PSK data. After squaring and averaging (28), it can be proved that:
Both the asymptotic performance and the bounds on α and β depend on the ratios of (40) and (41) with (35). Focusing on time codes by analyzing (41), we note that, for a given number of vectors, a lower mean squared value is achieved by a code set which maximizes the energy of the projection on its own span of each of its derivatives.
As a matter of fact, given that vector g_{v_0} is transmitted, it is convenient to choose as second code the vector g_{v_1} that maximizes |ġ^H_{v_0} g_{v_1}|^2, as third the vector g_{v_2} that maximizes |ġ^H_{v_0} g_{v_2}|^2 + |ġ^H_{v_1} g_{v_2}|^2, as fourth the vector g_{v_3} that maximizes |ġ^H_{v_0} g_{v_3}|^2 + |ġ^H_{v_1} g_{v_3}|^2 + |ġ^H_{v_2} g_{v_3}|^2, and so on. Similar considerations apply to the choice of the frequency codes.
In case of AWGN channel, assuming h_k = h for any k, by substituting (40), (41), (35), (29) and (30) in (39), it is obtained:
We note that, for very small values of α and β (with respect to the upper limits of (38)), (42) and (43) approximately compute to α N_0/|h|^2 and β N_0/|h|^2, respectively.
This subsection details the channel estimation module and its processing. In case of perfect timing and frequency recovery, under the assumption that the CIR is stationary within a block and that its delay spread is smaller than T_pr, the channel effects on the received signal can be expressed as: R(n) = diag{h} F B(n) G^T + W(n), where h is the K-dimensional vector of projections of h(t) on the set {ψ_{k,0}(t)} and W(n) is a K × M noise matrix of i.i.d. complex gaussian variates with zero mean and variance 2N_0 (see (21)).
As in the detector the MCI has a dominant weight with respect to the thermal noise, the selected equalization strategy is the ORC, which consists in left multiplying the received signal by the inverse of diag{ĥ}, i.e.:
We observe that the estimation of h from R(n) can be easily reduced to a conventional estimation problem. The data symbols can be assumed known (during training) or can be estimated from the received signal. In this case, we apply a QAM mapping to the output of a QAM demapper (blind approach). As the CIR is stationary in the block time MT, in accordance with the maximal-ratio combining (MRC) approach, we can coherently sum the energies of all the time codes transmitted on each carrier.
Performing time despreading, the received signal can be expressed as:
Assuming {b_{u,v}(n)} known, the K × M matrix can be computed at each step of the iteration. It is easy to see that the vector combines on each carrier the energies of all the useful signals transmitted and represents a sufficient statistic for estimating the CFR when the MSE criterion is applied. It is in fact straightforward to prove that: where ∥·∥_F is the Frobenius norm, is equivalent, assuming random zero-mean i.i.d. data, to:
We note that, carrier by carrier, the noise samples are independent, while the h_k's can be considered independent when T_pr Δf_c ≈ 1. In this case, the minimization of (47) can be reduced to K scalar parallel problems, each aimed at determining a different ĥ_k. The expected SNR in estimating ĥ_k on the k-th carrier results:
It is worth noting that the condition T_pr Δf_c ≈ 1 implies an inefficient channel use, as only half of the transmitted power is available for processing. When ┌T_pr Δf_c K┐ = L << K, where ┌·┐ rounds its argument to the nearest integer towards ∞ (ceiling function), we can exploit the fact that h = T h_t, where T is the normalized K × L DFT matrix transform. In this case the direct estimation of the CIR becomes: where r_t is the normalized IDFT (inverse DFT) of r and [r_t]_k is computed as the k-th element of T^H r. If the code is structured such that |f_{k,u}| is independent of k (as for Hadamard codes), and all codes have the same energy, criterion (49) can be further simplified as: where ε is the total mean energy of a data block. Because only L time taps are estimated, it is possible to prove that the resulting SNR on ĥ (computed as DFT of ĥ_t) results improved by a factor K/L with respect to (48).
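The K/L gain can be illustrated with a hedged least-squares sketch (assumptions: white noise per carrier, and T taken as the first L columns of the normalized K-point DFT matrix, so that T^H T = I_L):

```python
import numpy as np

K, L = 64, 4
rng = np.random.default_rng(2)

# True CIR with L taps; CFR h = T h_t through the K x L DFT block.
h_t = rng.standard_normal(L) + 1j * rng.standard_normal(L)
T = np.fft.fft(np.eye(K))[:, :L] / np.sqrt(K)
h = T @ h_t

# Noisy per-carrier observation of the CFR.
sigma = 0.1
noise = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
y = h + sigma * noise

# Per-carrier estimate (K unknowns) vs. L-tap estimate: projecting on the
# L-dimensional tap subspace keeps only L of the K noise dimensions.
h_hat_carrier = y
h_hat_tap = T @ (T.conj().T @ y)

err_carrier = np.mean(np.abs(h_hat_carrier - h) ** 2)
err_tap = np.mean(np.abs(h_hat_tap - h) ** 2)
assert err_tap < err_carrier / 4   # roughly a factor K/L = 16 improvement
```

The projection T T^H y keeps L of K noise dimensions, so the residual error energy drops by about L/K, matching the stated SNR improvement.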
As, now, any standard approach (e.g. Least Mean Squares - LMS) can be pursued, further developments are left to the implementer. Finally, we observe that U(n) is a by-product of data despreading together with the estimated data b̂_{u,v}, and that computing Q from U costs about MK log2 MK operations, while r and ĥ cost about K and L log2 K additional operations respectively.
Finally, we note that other equalization strategies different from the ORC can be easily implemented at almost the same computational cost. For example MRC would have led to: where σ^2 is the noise variance. Naturally, the loop convergence analysis of the previous section remains accurate also for MRC when no deep fades occur.
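A minimal sketch contrasting the two per-carrier weightings (the regularized MRC/MMSE-style weight shown is an assumption standing in for the elided formula): zero-forcing fully inverts a deep fade and hence amplifies its noise, while the regularized weight keeps the gain bounded:

```python
import numpy as np

def orc_weights(h):
    # Orthogonality Restoring Combiner: per-carrier zero-forcing.
    return 1.0 / h

def mrc_weights(h, sigma2):
    # Regularized MRC-style weight (assumed form): de-weights deep fades
    # instead of inverting them, avoiding noise enhancement.
    return h.conj() / (np.abs(h) ** 2 + sigma2)

h = np.array([1.0 + 0j, 0.05 + 0j])   # second carrier deeply faded
sigma2 = 0.01

w_orc = orc_weights(h)
w_mrc = mrc_weights(h, sigma2)

assert np.isclose(np.abs(w_orc[1]), 20.0)   # ZF noise gain 1/|h| on the fade
assert np.isclose(np.abs(w_mrc[1]), 4.0)    # bounded gain on the same carrier
```

This is why the convergence analysis remains valid for the MRC-style combiner as long as no deep fades occur: away from fades the two weightings nearly coincide.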
This section presents an approximate analysis of the effects of the time and frequency errors on the SNR per symbol at the demodulator under the condition of perfect channel estimation. For this purpose it is worth focusing on two possible working conditions of the receiver, i.e. when no explicit phase recovery is performed, and when external phase tracking is provided. In principle, phase recovery could be left up to the equalizer, as its convergence time has to be substantially shorter than the period of the residual frequency error in order for the whole loop to converge and be stable. As a matter of fact, a small lag, corresponding to a common phase rotation of the received block, always exists.
When no explicit phase recovery is performed, the system resorts to a pilot symbol, say b_{0,0}, as phase reference for the data in the block. The use of the pilot code as a reference allows a full phase correction on a block-by-block basis, at the cost of a small SNR degradation, and does not rely on the speed of convergence of the channel estimator. In this case the TCM decoder is fed with b̂_{u,v}(n) b̂*_{0,0}(n), where b̂_{u,v}(n) = f_u^H Z(n) g_v*, and Z(n) is given by (22). When external phase recovery is performed (e.g. before TCM decoding) the SNR degradation due to the use of a noisy phase reference disappears, at the cost of this extra processing.
After some algebra, assuming the loop to converge, at SNR's of practical interest, it is possible to express the SNR per symbol as: where we neglected terms due to sole noise with respect to terms due to signal and noise.
In case of perfect phase recovery, after similar algebra and approximations, the SNR per symbol results instead: Hence, assuming equal energy for pilot and data, the SNR loss amounts up to 3 dB.
Figures 2 and 3 illustrate the structure of the transmitter and receiver. Both processes have been realized in fixed point, in C and assembly, in order to work in parallel on the same DSP. The FFT, generalized FHT, TCM coder and Viterbi decoder have been moved from the DSP to a small FPGA, in order to reduce the computational load required of the former. In the following, the processing of each block is detailed according to the implementation, without regard to its physical arrangement.
This section details the processing in the transmitter depicted in Figure 2.
Module T1 performs TCM encoding and forms, from a packet of bits, the K × M sparse matrix B(n) of complex symbols that corresponds to a single time-frequency frame (data block). The number and the positions of the non-zero elements determine which time-frequency codes are allotted for that transmission. The phases of the data symbols are differentially encoded with respect to b_{0,0}(n), which is reserved for pilot purposes and carries no user data.
Module T2 performs the M-dimensional time spreading using the generalized FHT described in subsection 4.3. The module performs K times the M-dimensional fast matrix-vector products: Y ≡ G B^T = [G b^T_{0,•}, G b^T_{1,•}, ..., G b^T_{K-1,•}], where b_{i,•} is the i-th line of the sparse matrix B.
Module T3 performs M times the K-dimensional frequency spreading resorting to the same generalized FHT according to: S ≡ F Y^T = F B G^T = [F y^T_{0,•}, F y^T_{1,•}, ..., F y^T_{M-1,•}], where y_{i,•} is the i-th line of the matrix Y.
Module T4 allocates each K-dimensional column of S into the appropriate carriers synthesized by the following IFFT module. In order to relax the requirements of the D/A section and to ease the reduction of the out-of-band spectral leakage, the number of synthesized carriers, N, is larger than K. The signal elements are symmetrically allocated into the first and last IFFT bins, thus defining a low-pass complex signal, while the central IFFT bins corresponding to high frequencies are zero padded. In contrast to (1), but without affecting the analysis in the previous sections, the 0 frequency is left unused in order to avoid DC biases. Each N-dimensional vector resulting from the arrangement of module T4 is time-converted by IFFT in module T5 according to the standard radix-4 algorithm.
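A sketch of the T4/T5 bin mapping (the exact split of the K signal bins between the low positive and negative frequencies is an assumption for illustration):

```python
import numpy as np

K, N = 8, 16   # K signal bins mapped into an N-point IFFT, N > K

s = (np.arange(K) + 1).astype(complex)   # one column of S (illustrative)

# Symmetric low-pass allocation: positive frequencies in bins 1..K/2,
# negative frequencies in the last K/2 bins; bin 0 (DC) left unused,
# central high-frequency bins zero padded.
X = np.zeros(N, dtype=complex)
X[1: K // 2 + 1] = s[: K // 2]
X[-(K // 2):] = s[K // 2:]

x = np.fft.ifft(X)   # time-domain chip (module T5)

assert X[0] == 0                       # no DC bias
assert np.count_nonzero(X) == K        # only K carriers carry signal
assert np.allclose(np.fft.fft(x), X)   # carriers recoverable by FFT
```

At the receiver the same FFT undoes this mapping, which is why only the K allocated bins per chip need further processing.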
Module T6 cyclically extends the N sample chip to N_tot = N + N_pr + N_po + N_ro samples, where N = F_s/Δf_c is the useful part of the pulse and F_s is the sampling frequency, and N_pr = T_pr F_s, N_po = T_po F_s and N_ro = β_ro T F_s. The first and last N_ro samples are shaped according to the window (2). The last N_ro samples of the current pulse are added to the first N_ro samples of the next pulse according to the conventional OFDM time windowing processing.
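A sketch of the T6 processing (the even split of the N_ro ramp samples between head and tail, and the half-sample-offset raised-cosine ramp, are assumptions for illustration):

```python
import numpy as np

N, N_pr, N_po, N_ro = 16, 2, 2, 4            # illustrative sizes
N_tot = N + N_pr + N_po + N_ro

def shape_chip(x):
    """Cyclically extend one N-sample chip to N_tot samples and shape the
    first and last N_ro samples with raised-cosine ramps."""
    ext = np.concatenate([x[-(N_pr + N_ro // 2):], x,
                          x[:N_po + N_ro - N_ro // 2]])
    ramp = 0.5 * (1 - np.cos(np.pi * (np.arange(N_ro) + 0.5) / N_ro))
    ext[:N_ro] *= ramp            # rising edge
    ext[-N_ro:] *= ramp[::-1]     # falling edge
    return ext

assert len(shape_chip(np.ones(N, dtype=complex))) == N_tot

# Overlap-add: the falling ramp of one pulse overlaps the rising ramp of
# the next, and the two ramps sum to one sample by sample.
ramp = 0.5 * (1 - np.cos(np.pi * (np.arange(N_ro) + 0.5) / N_ro))
assert np.allclose(ramp + ramp[::-1], 1.0)
```

The complementary ramps are what make the overlap-add transparent for constant content while still smoothing the spectral transitions between chips.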
This section details the processing in the receiver depicted in Figure 3. After AGC and A/D conversion of the I&Q components, the signal is buffered in bulks of N_tot·M complex samples that are jointly processed.
It is worth dwelling upon the structure of the presented frequency correction processing. For the purpose of reducing the computational load, the processing is split in an intra-chip processing (module R2) and in an inter-chip processing (module R6).
The intra-chip processing is pursued in the time domain and consists in modulating the useful part of each chip by an N-sample segment of a suitable complex tone (prefix and postfix are skipped). The same tone segment is employed for all chips in the same bulk of samples, thus introducing an inter-chip phase error. As concerns each chip, the inter-chip phase error acts as a complex gain and, thus, can be compensated after the FFT. In this way only K out of N samples per chip need to be re-phased.
The intra-chip correction module generates a suitable N-sample segment of complex tone to compensate phase errors. To guarantee phase continuity between adjacent data blocks (required by channel estimation module R9), block by block, the phase of the last sample is stored and used to generate the initial phase of the next tone segment.

With respect to the delay correction procedure, the processing is split in coarse and fine correction. The former consists in moving the boundaries of the data block by an integer number of samples in the receiver buffer (module R1). The latter performs sub-sample time corrections by interpolating samples in the frequency domain (module R5).
In the following, we go into the details of each module of the receiver in Figure 3, pointing out, whenever relevant, the number of (complex) operations (per data block) required by its processing.
Module R1 performs in the time domain the function of delay correction at sample-time resolution (coarse correction). The boundaries of the sample bulk to be processed are moved sample by sample in the memory buffer according with └τ(n)·F_s┘, where └·┘ rounds its argument to the nearest integer towards −∞ (floor function). It does not affect the computational cost.
Module R2 performs the intra-chip function of time-domain frequency correction. At the beginning of each M-chip data block, the tone segment:

e_sp(n) = [1, e^{−j2π·f(n)/F_s}, …, e^{−j2π·f(n)·(N−1)/F_s}]^T

is computed. Each chip in the data block is modulated by e_sp(n). Only the N useful samples of each chip are processed, skipping prefixes and postfixes. R2 requires N complex products to build e_sp(n) from e^{−j2π·f(n)/F_s} and N·M complex products for re-modulating samples.
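A numerical sketch of the intra-chip correction of module R2: one tone segment e_sp(n) is built and reused for all M chips (the values of N, M, F_s and the offset are toy assumptions):

```python
import numpy as np

def intra_chip_correct(chips, f_hat, Fs):
    # chips: M x N array of useful samples (prefix/postfix already stripped)
    N = chips.shape[1]
    e_sp = np.exp(-2j * np.pi * f_hat * np.arange(N) / Fs)  # N products to build
    return chips * e_sp                                     # N*M re-modulations

Fs, N, M, f_off = 1.0e4, 64, 8, 37.0
n = np.arange(N)
clean = np.ones((M, N), dtype=complex)
received = clean * np.exp(2j * np.pi * f_off * n / Fs)      # same offset in each chip
fixed = intra_chip_correct(received, f_off, Fs)
```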
Module R3 performs the FFT: the N useful samples from each chip are transposed to the frequency domain by applying M times an N-dimensional FFT. The resulting cost is about N·M·log2 N.
Module R4 collects the frequency bins corresponding to useful signal, reducing the data to a K × M complex matrix spread in time and frequency. No cost is associated to this operation.
Module R5 performs the fine delay correction in the frequency domain. The vectors

e_r(n) = [e^{j2π·ν(n)/N}, e^{j2π·2ν(n)/N}, …, e^{j2π·(K/2)·ν(n)/N}]^T

and e_l(n) = reverse{e_r(n)*} are computed once per data block. We denoted by ν(n) the fractional part of τ(n)·F_s, such that τ(n)·F_s = └τ(n)·F_s┘ + ν(n), and by reverse{·} the function that reverses the positions of the elements of its vector argument. Vectors e_l(n) and e_r(n) multiply their corresponding K/2 code bins in each chip of the block. The cost for computing e_l(n) and e_r(n) from e^{j2π·ν(n)/N} consists in K/2 products, while the time correction requires K·M operations.
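The phase ramps of module R5 can be sketched as below; the split of the K bins into a positive-frequency half (multiplied by e_r) and a negative-frequency half (multiplied by the reversed conjugate e_l) follows the symmetric carrier allocation assumed for module T4:

```python
import numpy as np

def fine_delay_correct(bins, nu, N):
    # bins: K code bins of one chip; nu: fractional part of tau*Fs
    K = len(bins)
    k = np.arange(1, K // 2 + 1)
    e_r = np.exp(2j * np.pi * k * nu / N)    # ramp for positive frequencies
    e_l = e_r.conj()[::-1]                   # reverse{e_r*} for negative frequencies
    out = np.asarray(bins, dtype=complex).copy()
    out[:K // 2] *= e_r
    out[K // 2:] *= e_l
    return out

out = fine_delay_correct(np.ones(8), nu=0.3, N=16)
```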
Module R6 performs the inter-chip function of frequency correction. The M-dimensional vector:

e_sy(n) = e^{−jθ(n)}·[1, e^{−j2π·f(n)·T}, …, e^{−j2π·f(n)·(M−1)·T}]^T

is computed and stored at the beginning of each data block. Each chip is then multiplied by its corresponding phase. The phase term θ(n), required to guarantee inter-block phase continuity, is updated in accordance with

θ(n) = mod{θ(n − 1) + 2π·f(n − 1)·M·T, 2π}.
The computational cost consists of M products for e_sy(n) and of K·M operations for re-phasing data.
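A sketch of the inter-chip re-phasing and of the θ(n) update that keeps the phase continuous from one block to the next (toy frequency estimate and chip period):

```python
import numpy as np

def inter_chip_phases(f_hat, T, M, theta_prev):
    # one complex gain per chip; theta_prev carries the phase of the previous block
    ph = theta_prev + 2 * np.pi * f_hat * T * np.arange(M)
    e_sy = np.exp(-1j * ph)
    theta_next = np.mod(theta_prev + 2 * np.pi * f_hat * T * M, 2 * np.pi)
    return e_sy, theta_next

e1, th = inter_chip_phases(f_hat=3.0, T=0.01, M=4, theta_prev=0.0)
e2, _ = inter_chip_phases(f_hat=3.0, T=0.01, M=4, theta_prev=th)
```

The first gain of the second block continues exactly where the first block left off, which is what channel estimation module R9 requires.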
Module R7 performs the M-dimensional time despreading resorting K times to the inverse generalized FHT:

U ≡ R·G* = [G^H·r_{0,•}^T, G^H·r_{1,•}^T, …, G^H·r_{K−1,•}^T]^T,

where r_{l,•} is the l-th row of R. The computational cost consists in K·M·log2 M operations.
Module R8 performs the ORC processing using the CFR estimated by module R9. Each column of the data block is equalized according to (44), thus leading to the equalized matrix Z. The computational cost per data block consists in K·M operations.
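Equation (44) is not reproduced in this excerpt; as a hedged sketch, a per-bin orthogonality restoring combining (ORC) step amounts to dividing each frequency bin by the estimated CFR, i.e. K·M complex divisions per block:

```python
import numpy as np

def orc_equalize(R, H):
    # R: K x M frequency-domain block; H: length-K channel frequency response estimate
    return R / H[:, None]                # K*M divisions per data block

K, M = 4, 8
rng = np.random.default_rng(1)
Z0 = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
H = np.array([1.0, 0.5 + 0.5j, 2.0, 1.0 - 1.0j])   # toy CFR
Z = orc_equalize(Z0 * H[:, None], H)                # distort, then restore
```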
Module R9 performs the channel estimation processing using the detected data (from module R10) and the time-despread received signal (from module R7). Any approach to implement minimization (50) or (47) (e.g. component by component) can be pursued. In both cases the matrix

Q(n) = F·B = [F·b_{•,0}, F·b_{•,1}, …, F·b_{•,M−1}]

needs to be computed from training or detected data, applying M times a K-dimensional frequency spreading. When the blind approach is used, we assume B ≅ Map_QAM[Demap_QAM[B̂]]. In case of minimization of (47), in order to compute r from (45), about M·K operations are required, while in case of minimization of (50) additional M·K·log2 K operations are needed to get r_L. The final computational cost per data block may vary depending on the selected strategy.
Module R10 performs M times the K-dimensional frequency despreading resorting to the inverse generalized FHT:

B̂ = F^H·Z = [F^H·z_{•,0}, F^H·z_{•,1}, …, F^H·z_{•,M−1}],

where z_{•,i} denotes the i-th column of the matrix Z from ORC module R8. The cost, thus, results in K·M·log2 K operations.
Modules R11 and R13 compute the control signals in (9). Owing to the code properties, by simple passages, the control signals in (9) result in expression (54) when |I| < K·M/2 or, when |I| ≥ K·M/2, in expression (55).
The coefficients u and v of developments (54) and (55) and the sets U and V are given by equation (19). The total number of operations per data block, for both time and frequency control signals, thus, results in about min{|I|, K·M}·log2(M·K) operations.
From eq. (54) and (55) it can be appreciated that module R11 controls modules R1 and R5 to decrease or increase the delay of the received signal, whilst module R13 controls modules R2 and R6 to change the reference frequency. It also follows from the same equations that the energy content of the matrix of reconstructed QAM complex symbols, B̂, is maximized when the output of modules R11 and R13 is zero.
Modules R12 and R14 are low-pass loop filters working at data block rate, thus, their computational cost can be considered negligible.
Module R15 is fed with the symbols extracted from each received block. Its tasks are to scale and re-phase symbols on the basis of the pilot symbol b_{0,0}, and to decode the TCM symbols by resorting to the Viterbi algorithm. It can comprise a phase estimation module when the channel estimation module is not active. As TCM decoding and phase restoration are standard subjects, they fall outside the scope of the invention and will not be further discussed.
In conclusion, assuming N = 2·K and disregarding channel estimation and TCM decoding, the worst case cost (i.e. when |I| = M·K/2) per data block turns out to be ≈ K·M·[3·log2 K + 5·log2 M + 6] + 2·M·K + 2·K operations.
We presented a new strategy of time-frequency spreading based on orthogonal codes and fast transforms. The main feature of the proposed signal structure is that the receiver can effectively evaluate and compensate the symbol timing and frequency errors of the multicode transmission by exploiting signal structure in time for frequency tracking and signal structure in frequency for time tracking. The receiver implements a quasi-optimal time-frequency MLE.
A class of ad hoc codes is introduced. Both spreading and despreading require K·M·log2(K·M) operations, while the control signals of the tracking loop require an additional computational cost proportional to log2(K·M). The proportionality factor depends on the transmission load and is minimized when all allowable codes are transmitted.
The channel equalizer integrated in the receiver loop is able to compensate the channel distortion due to multipath in order to improve both symbol detection and time-frequency estimation. The channel estimation problem from time-frequency spread data is reduced to a standard vectorial estimation problem, hence allowing the use of conventional strategies in frequency or in time.
Loop convergence analysis and closed form SNR are presented in case of perfect channel estimation (clairvoyant approach) in order to evaluate system performance and to dimension its free parameters.
While preferred embodiments of the invention have been disclosed, several details may be changed within the scope of the invention by substituting equivalent arrangements or by dispensing with unessential functions. By way of example, the frequency correction feedback loops comprising blocks R2-R6-R13-R14 in the receiver might be replaced with an analog loop installed upstream of the digital processor. The channel estimation loop comprising block R9 could be dispensed with in special cases. Also, although the invention finds an especially advantageous application with TCM encoding, it can operate with any type of communication using constellations of QAM complex symbols generally. Other modifications will occur to persons skilled in the art within the scope of the attached claims.
I had the opportunity last March to speak to a group of business executives at Standard Chartered Bank in Singapore. I don't often have the chance to address such an audience and I really enjoyed it. They were all obviously intelligent, talented people; they were very attentive and asked really good questions at the end.
The talk was structured around a teasing out of the several possible meanings of SCB’s motto: “Here for good.” In this segment, I explore one connotation of the phrase. We’re all here to try and live the good life – a life that is fulfilling and satisfying.
The “good life” in this sense includes what I call the Big Five: 1) having enough money and material possessions; 2) having a rewarding job; 3) having good and meaningful personal relationships; 4) having health and vitality; and 5) having entertaining and relaxing experiences that restore us.
These are five aspects of what constitutes the “good life” and it is desirable to have some version of each of them. The problem is thinking that any one or a combination of them will bring us what we’re really looking for in life. They are necessary, but not sufficient.
So what is it that we really want when it comes to the "good life" in this sense? When we desire enough money, a good job, nice relationships, physical health, and recreation, isn't what we really desire a sense of satisfaction and the end of perpetual desire? Isn't what we really want just contentment, which is the real meaning of "happiness"?
Have a listen and see if you agree.
I wrote yesterday about the morning session of this EFT training day in the post "Emotion-focused therapy workshop series (sixth post): a method for understanding puzzling reactions". In the afternoon we explored "Working with self-criticism/depressive splits". As Greenberg & Angus write in their book "Working with narrative in emotion-focused therapy" (p.9-10) ... "A hallmark of EFT is that therapists' interventions are sensitive to the in-session context of the therapeutic interaction, and particular client states are viewed as opportunities for facilitating specific types of client emotional processes. To date, six major types of marker-guided interventions have been identified and studied in EFT". The six problem-markers and accompanying interventions are: "Problematic reactions" which are viewed as opportunities to use "systematic evocative unfolding" (as described in yesterday's post about this seminar's morning session); "Unclear felt sense" typically explored using "focusing"; "Conflict splits" which I already blogged about last month in "EFT workshop series (fifth post): two chair conflict dialogues" and which I'm going to discuss further in today's post; "Self-interruptive splits" with potential accompanying "two chair enactment"; "Unfinished business" markers with possible "empty chair interventions"; and "Vulnerability" with associated therapist "empathic validation". As an aside, Greenberg & Angus go on to propose a further four kinds of problem marker that can be helpful when integrating EFT and narrative therapy ... "same old stories, empty stories, unstoried emotions, and broken stories" ... but this is a potential future blog subject.
This afternoon's seminar focused particularly on self-criticism and depressive splits. Robert Elliott, our course trainer, said something that I found very interesting here. He pointed out that the results of the first research trial comparing EFT with standard person-centered counselling (PCT) were pretty under-whelming with EFT not really being much more helpful than PCT. So the relevant research study (York I) - "Experiential therapy of depression: Differential effects of client-centered relationship conditions and process experiential interventions" - reported "This study compared the effectiveness of process-experiential psychotherapy with one of its components, client-centered psychotherapy, in the treatment of (34) adults suffering from major depression. The client-centered treatment emphasized the establishment and maintenance of the Rogerian relationship conditions and empathic responding. The experiential treatment consisted of the client-centered conditions, plus the use of specific process-directive gestalt and experiential interventions at client markers indicating particular cognitive-affective problems. Treatments showed no difference in reducing depressive symptomatology at termination and six month follow-up. The experiential treatment, however, had superior effects at mid-treatment on depression and at termination on the total level of symptoms, self-esteem, and reduction of interpersonal problems. The addition, to the relational conditions, of specific active interventions at appropriate points in the treatment of depression appeared to hasten and enhance improvement." These aren't results that would make many therapists want to rush out to learn EFT. 
Robert however went on to say that the outcomes in York II - "The effects of adding emotion-focused interventions to the client-centered relationship conditions in the treatment of depression" - were a good deal more supportive of EFT and that he thought a key reason for this was improved understanding of how to best work with client "collapse" (agreement with the internal critic) - more on this in the next post.
Having just sat here at my desk reading through the York II research paper, I can understand that EFT supporters would be heartened by the reported client outcomes. Actually the outcomes for both EFT and for PCT on its own are excellent and stand up very well when compared to results with somewhat similar populations treated with CBT or interpersonal psychotherapy (IPT). Both York I and York II involved quite small numbers of patients (34 and 38 depression sufferers respectively). Combining the data from the two studies gives the research more statistical power and, as reported in Greenberg & Watson's fine book "Emotion-focused therapy for depression" (p.12) "Statistically significant differences among treatments were found on all indices of change for the combined sample, with differences maintained at 6- and 18-month follow-up ... In addition, and of great importance, 18-month follow-up showed that the process experiential group (EFT) were doing distinctly better at follow-up. Survival curves showed that 70% of process experiential clients survived to follow-up - that is, did not relapse - in comparison to a 40% survival rate for those who were in relationship-alone (PCT) treatment." Exciting results (I wonder if York II follow-up results were better than York I's). Unpicking this further only increases my respect for these outcomes. If you look in the "Participants" description (p.104-105) of the 2009 paper, it is stated "None of these clients reported having been diagnosed with more than three previous depressive episodes". Possibly somewhat counter-intuitively, these clients with few or no previous depressive episodes are the group that CBT and mindfulness-based cognitive therapy (MBCT) struggle to make an impact on when trying to reduce relapse rates - see, for example, Ludgate on "Cognitive behavioral therapy and relapse prevention". 
And remember, reductions in relapse achieved with CBT/MBCT typically involve additional treatment on top of acute phase treatment. In contrast the EFT relapse reduction is achieved (with quite probably a more difficult client population due to fewer previous depressive episodes) simply as a bonus of effective acute phase treatment. Gosh. My cautions are the small numbers involved, the current lack of replication, and a probability that the outcomes are somewhat contaminated by treatment allegiance effects - the therapists almost certainly believed more in EFT than in PCT and this is likely to have contributed to the differences found between the two therapies. However the 18-month follow-up differences are pretty startling - see "Maintenance of gains following experiential therapies for depression". It's also important to point out that EFT has stood up well when compared head-to-head with CBT - "Comparing the effectiveness of process-experiential with cognitive-behavioral psychotherapy in the treatment of depression" - with further details highlighted in the more recent paper "Clients' emotional processing in psychotherapy: a comparison between cognitive-behavioral and process-experiential therapies".
While discussing these research papers, I'll just mention a comment - in the discussion section of the York II report comparing EFT with PCT - that I found particularly helpful as practical advice "Findings suggest that a good empathic relationship was present in both treatments. We also know that emotion-focused tasks were performed in about 28% of (EFT) sessions after Session 3. Previous studies of the EFT treatment process (Goldman et al., 2005) suggest that themes tend to emerge fairly early in treatment (typically around Session 4) and that they center around the two major therapeutic tasks: the two-chair task, which is designed to target the specific problem of self-criticism, and the empty-chair task, which targets unresolved dependence, injury, and loss ... The two-chair task helps clients identify self-criticisms, become aware of the emotional impact on the self of the criticisms, differentiate their feelings and needs, and use these to combat the negative cognitions. The empty-chair task helps clients resolve past losses, hurts, and anger toward significant others by expressing and processing their unresolved feelings. Watson and Greenberg (1996) found that these specific interventions are related to deeper in-session emotional process and stronger outcome."
In the next post in this series I'll look more at practical issues that come up when working with two-chair internal critic dialogues.
Challenge: First Nations Development Institute wanted to develop a leadership program that would help Native non-profit leaders excel in the years ahead. They needed insight into the opportunities and challenges facing these leaders operating in Indian Country. The difficulty of the task is that Indian leaders operate in a very unique political and cultural environment. They have limited resources and struggle to balance multiple constituents’ interests and needs.
Solution: The director selected our team to facilitate a convening of the top 40 Native non-profit leaders in the United States. We prepared and planned the event, facilitated the framing of key issues, and supported the group in reaching conclusions concerning the direction the program should take.
Results: Our staff was able to help these American Indian leaders identify the critical results, knowledge requirements, and preferred learning styles for a new Native American Leadership program. The results from the workshop became the basis for the development of a National American Indian Leadership program.
National Congress of American Indians
Challenge: Tribal leaders wanted to change the process of tribal, state, and federal policy-making from one of a “reactive, problem-driven” approach to a process informed by research and data. At their request, NCAI launched a national Policy Research Center designed to collect, coordinate, and make available the information, data, and analyses that could serve public policy decisions.
Solution: Our team facilitated the Strategy Development process for the Center’s Advisory Council. The council created an alternative approach which allowed for the identification of multiple policy options. This approach is characterized by critical debate in Indian Country, especially among tribal leaders.
Results: The resulting work of the Policy Research Center will serve to:
- Organize existing available data into useful formats to improve its accessibility to tribal leaders, government officials, academics and the public
- Serve as an information clearinghouse to connect Native institutions through a comprehensive website
- Connect leading thinkers and institutions so they may develop proactive models for data collection and analysis
- Identify priorities for research and policy development
- Educate Tribal leadership, academic entities, Congress, the Administration and the public by publishing and disseminating the results of the Institute’s research
Large Oklahoma Native Nation – Economic Development
Challenge: The Tribal government was paralyzed. Tribal employees were disillusioned and discouraged. In the preceding ten years, the operating budget had tripled and the number of citizens served more than doubled. Tribal services had deteriorated and had not kept pace with the growing demand.
Solution: The newly elected Chief of the Nation requested assistance to significantly transform the tribal service organization and culture. One Fire assembled an outstanding group of organizational and cultural transformation experts to partner with the Nation’s leadership. One Fire won a Ford Foundation grant and initiated a four-year project.
Large Alaskan Native Village – Five Year Organizational Plan
Challenge: The Alaskan Native Village is presently in a growth phase and is in need of a comprehensive plan to guide the village forward. Many attempts to create an Organizational Five Year Plan in the past have not succeeded. The Native Village is a traditional tribal government created by Congress in 1949.
Solution: The One Fire team facilitated the leadership of the Native Village through a step-by-step strategic planning and organization design process. The approach consists of five phases designed to build a comprehensive organizational plan. The project entails a review and assessment of the existing governmental organization and each of the eleven departments.
Results: Developed first-ever Native Village Five Year Organizational Plan and 11 departmental plans for the following: Administration, Environmental Protection Agency, Gaming, Housing/NAHASDA, Indian Reservation Roads, Realty, Social Services, Tribal Courts, Tribal Operations, Wildlife, Workforce Development, and Economic Development.
Hey I am an undergrad student working on my thesis. I very new to psychopy, and know very little about python (but I can learn and know a little about coding).
I am doing a cognitive condition task, where a word appears in the centre of the screen, followed by a target stimulus located at the top or bottom of the screen depending on word valence ratings from a previous experiment. Each trial is followed by a test to see if participants actually read the word that appeared, by asking them to select one of two words.
What I am trying to do-
For the word reading test after each trial, I want to control what the other word may be: randomly choose 1 word out of the other 39 in that category. For example, if I have a list of 40 positive words and on this specific trial "happy" is the stimulus presented (followed by the target stimulus appearing at the top), I want the alternative word to also be drawn from the remaining 39 words in that list.
What did you try to make it work?
Well, I tried brute-forcing it first to make sure that it actually works, but the issue is that, based on my other conditions (timer, word stimuli, L/R for where the target word appears in the recognition task, and the pool of other words), my Excel file begins to have way too many trials (40k+). What I want is to be able to tell PsychoPy to choose 1 of the 39 other words, and still know which word stimulus was presented on each trial. Each target word may appear only 10 times, but the other word that participants have to choose from needs to be unique on each iteration.
I could also continue to brute-force it and have a unique iteration of the experiment for each participant, where I randomly pick which words will be paired with which conditions. I know there is an easier way, but I'm not sure how.
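One way to avoid the huge conditions file is to keep just the 40 target words in the spreadsheet and pick the foil in a Begin-Routine code component. A minimal sketch, assuming the current target lives in a variable `target_word` and the category list in `word_list` (both names are placeholders, not PsychoPy built-ins):

```python
import random

word_list = ["happy", "joyful", "pleasant", "cheerful"]  # stand-in for the 40-word list
target_word = "happy"                                    # the word shown on this trial

# foil: one random word from the remaining 39 of the same category
foil_word = random.choice([w for w in word_list if w != target_word])

# counterbalance which side the target appears on in the recognition test
left_word, right_word = random.sample([target_word, foil_word], 2)
```

In a real experiment you would load `word_list` once in Begin Experiment and log the chosen foil with something like `thisExp.addData('foil', foil_word)` so every pairing is recorded in the data file.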
HOA Fence Requirements Asheville, NC
We all know that fences and decks are a visible part of your home, and they affect the overall look of the HOA (Homeowners Association) you live in. For this exact reason, HOAs often have very detailed rules about what you can and can’t do when building these external features.
If you’re considering adding a fence or deck to your home, the first step is to read your HOA’s Covenants, Conditions, and Restrictions, and Architectural Control Guidelines. These important documents map out the rules you’ll need to follow closely when planning and building your fence or deck.
HOA Fence Guidelines
In this section, we have provided a few bulleted examples to give you an idea of how narrow a margin you typically have when building a fence or deck in an HOA. Remember, the list below is just a detailed example of the kind of guideline you can expect to see.
Here are some simple guidelines you might see.
Your fence must:
- Be rectangular in design
- Be a white vinyl design
- Be no more than 4 feet tall
- Be installed by a professional fence contractor
- Have a minimum vertical spacing of 3.5 inches between slats and not have privacy slats
- Not extend beyond 30 feet from the rear of your house
- Be setback 20 feet from the side-yard property line if your home is on a corner lot
Homeowners must sign a binding maintenance agreement to maintain the fence and surrounding landscaping and attach it to their application.
Every HOA has different regulations about fences and decks, so it’s important to read the CC&Rs and Architectural Control Guidelines for your specific HOA. You probably received copies of these documents when you bought your home.
If not, ask an HOA board member to provide them. If you purchased your home many years ago, request the board’s latest version of these governing documents, as some rules may have changed.
Get Project Approval First
Before anyone digs a post hole, you must submit the project proposal paperwork that your board requires. The HOA board might be in charge of approving deck and fence projects.
Or your HOA might have a special Architectural Committee responsible for monitoring and enforcing HOA guidelines for exterior elements like decks and fences. They might review your proposal and approve it as is, or may require changes before they approve it.
Avoid Expensive Missteps
Owners will sometimes overlook or even intentionally disregard HOA CC&Rs or Architectural Guidelines. Even if you have your heart set on a specific fence style that your HOA doesn’t allow, you might feel tempted to go ahead and build it anyway. Or, the approval process might seem like too much of a hassle, and you might decide to skip it.
However, these missteps can lead to costly mistakes because your HOA has the authority to force you to fix non-compliant decks and fences. If you build a fence or deck that the HOA decides is a violation of guidelines, they might:
- Demand that you remove or fix the fence or deck
- Charge you monthly fines until the problem is fixed
- Take legal actions against you for violating the CC&Rs
In summary, avoid unnecessary problems and expenses by following the HOA rules carefully.
Check Your City’s Guidelines
In addition to your HOA’s rules, your city may require permits for exterior work like decks or fences. Many HOAs include the city requirement into their CC&Rs or Architectural Control Guidelines. If you don’t see city requirements in the documents provided, contact your city or check with your HOA board.
Asheville Fence
Most neighborhoods within Asheville, NC, are managed by HOA’s or Homeowners Associations. Due to this, installing a fence or any other outdoor improvements requires the HOA board’s approval. While these rules and laws can seem frustrating, they also help preserve property values for you and your neighbors.
For helpful information, you can visit this website for specific codes in the city of Asheville.
There is a lot to consider when installing a new fence on your property. Our team at Asheville Fence is here to help you. We have decades of experience installing beautiful fence products throughout western North Carolina and are familiar with many of the HOAs guidelines and regulations in the Asheville area.
If you are ready to learn more about your fence installation options, contact Asheville Fence today.
After 7 years of service and commitment, the current MRBTA Chair, Trent Bartlett, will be stepping down in 2021, creating an opportunity for a consummate and proven governance leader to take on the position of Chair.
We are seeking a dynamic and committed individual to lead the organisation as we grow the contribution tourism makes economically, environmentally, socially and culturally to our community. The successful applicant will have a strong connection to, and good understanding of, the Margaret River Region and its communities in order that he/she can play an effective role in representing, promoting and advocating for tourism in the region.
Applications are encouraged from appropriately qualified and proven senior executives and non-executive directors with demonstrated experience leading strategy and growth initiatives across the tourism sector or comparably challenging and dynamic industries. Experience across small business, stakeholder and community engagement, regional development and Federal, State and Local Government will be highly regarded. Strong levels of contemporary corporate governance, commercial and political acumen, complemented by exceptional leadership, communication, negotiation and stakeholder management capacity are all essential qualities sought.
If you, or anyone you know, could add value to our organisation through this key remunerated position, please email [email protected] for more information.
Applications close 23 July 2021
Leading Hand – Conservation and Technical Services (CATS) Team (Full-Time)
About the Conservation and Technical Services (CATS) Team
MRBTA employs a small team of experienced staff whose role is to carry out conservation, maintenance and improvement of the natural and man-made assets under the management and care of MRBTA. The work of the CATS team is essential for MRBTA to fulfil its responsibilities to ensure smooth all-year operation of MRBTA tourism attractions, whilst protecting and maintaining the valuable assets in its care. The CATS team are based at Lake Cave but work across all MRBTA sites.
About the Role
MRBTA is looking to fill a newly created full-time position of CATS Leading Hand. This role will entail working alongside other members of the team whilst also undertaking a planning and supervisory role to ensure that projects and tasks are completed safely and efficiently, according to their priority. The Leading Hand will work alongside the Asset Development Officer who is a skilled carpenter, and report to the Asset & Environment Manager.
The diverse nature of MRBTA’s operations offers a high level of interest and variety, and the opportunity to work in some beautiful, unique and sensitive settings. MRBTA staff, especially the CATS team, are passionate about maintaining the natural & heritage values of its sites and ensuring that visitors experience them at their best. If, as Leading Hand, you don’t start the role with a background in caves, lighthouses, and natural & environmental assets, that knowledge will develop rapidly as you apply your existing practical skills and experiences.
The role of Leading Hand will include the scheduling of work to utilise the time and skills of the CATS team and working alongside the other team members to achieve a wide range of tasks and projects. The coordination of input from external specialist trades and consultants, such as Geotech and Arborist, would form part of the role responsibilities.
As Leading Hand, you would liaise with the Asset Development Officer to identify short and longer term priorities for maintenance, repair, replacement and improvements to physical infrastructure at MRBTA sites, and schedule staff and other resources for work to be undertaken in an efficient manner. The Leading Hand will be expected to implement IT based task scheduling and reporting systems and play a key role in documenting and reporting on asset conditions, requirements and improvements, as well as safety and compliance.
This role would entail taking responsibility for the reliable operation of a range of mechanical & technical systems, such as lighting, waste treatment and fire sprinkler systems, as well as site specific equipment such as torches, radios, etc.
Skills and Experience
To be successful in this role you should possess:
- Practical experience and skills across one or more technical or trade areas
- Experience of prioritising and planning work schedules in complex situations, and monitoring work outcomes
- Strong IT computer skills
- Experience in supervising, managing and leading a small team
- Experience of working under pressure to tight timelines to complete tasks/projects
- Experience of maintaining a range of mechanical and/or technical machines/equipment
- Skills in working with a range of tools
- Good communicator
The successful applicant will
- Demonstrate a positive work ethic, lead by example and hold high standards
- Be willing to work unsociable/flexible hours if urgent tasks need attention or to minimise interruption to visitors to MRBTA sites
- Be well organised and able to delegate tasks effectively
- Have a good level of physical fitness
- Analyse situations in a careful and balanced way and seek positive solutions to problems
- Demonstrate an appreciation of natural assets and the conservation of fauna & flora
- Hold a current C class driver’s licence
- Hold, or be willing to obtain, a senior first aid certificate
How to Apply
Please apply via Seek. Please submit your CV and cover letter explaining how your skills and experience would enable you to succeed in this role.
Your CV and letter must be received by 5pm, Sunday 4th July 2021.
Visitor Services Consultants (Part-Time)
About the Role
The Visitor Services Consultant works in the visitor centre providing face-to-face assistance, as well as working in the Central Reservations office answering visitor queries via email, phone, webchat and social media platforms.
They will be responsible for the following key areas:
- Response to visitor enquiries in a timely manner
- Presentation, inventory management and cleanliness of visitor centres
- Assistance with holiday planning, quotes and bookings
- Facilitation of daily reconciliations and office administration
- Coordination and participation in training schedules & knowledgebase updates
- Support and demonstration of MRBTA’s values and professional standards and adherence to policies and procedures
Skills and Experience
To be successful in this role you must possess:
- Outstanding customer service and resolution skills
- Local knowledge and passion for the Margaret River Region (from Busselton to Augusta)
- Strong written and oral communication skills and the ability to tailor information to suit different audiences, cultures and information platforms
- Knowledge of Microsoft Office Suite, CRM systems, content management systems and reservation systems (desirable)
- The ability to work autonomously and with team members to achieve outcomes
The successful applicant will
- Be able to commit to 7 – 8 shifts per fortnight on a permanent part-time basis including weekends and holidays, with consideration for further opportunity to the right candidate
- Be able to work from a Margaret River Visitor Centre location with reliable transport and ‘C’ class license to also work from Busselton or Dunsborough as operational requirements arise
- Enjoy great industry benefits as well as a remuneration package, above award
- Participate in industry networking events, workshops and training, including the Association’s familiarisation program, some which may be outside of normal hours
- Possess a keen sense of adventure – we provide abundant opportunity to experience many of the region’s incredible experiences first-hand as part of your ongoing training
How to apply
Please email MRBTA Visitor Services Manager Peta Fussell via [email protected] with a CV and cover letter outlining why you would be perfect for this position.
HKMA Implementation Guidance on Securitization Framework for Banks
HKMA published a set of questions and answers (Q&A) on the revised securitization framework under the Banking (Capital) Rules (BCR). This set of Q&A supersedes the guidance on securitization set out in pages 52 to 66 of the revised Questions and Answers on BCR, which were issued on December 31, 2014.
These Q&A were built on the existing Q&A, with modifications to align with the amendments made to Part 7 of the BCR and to clarify the policy intent on specific issues. New guidance is provided on the notification requirement under section 230(3), (4), and (5) in Part 7 and on the assessment of significant credit risk transfer for obtaining capital relief for the underlying exposures of a securitization transaction under the BCR. In its responses to the industry during the 2017 consultations on implementing the revised securitization framework in Hong Kong, HKMA had indicated that supplementary guidance would be provided to assist authorized institutions in interpreting Part 7 of the BCR (as amended to implement the framework) at a more detailed level in a number of specific areas. That supplementary guidance is now being provided in the form of this set of Q&A.
The Q&A have been drafted, as far as possible, in simple non-legal language to facilitate consistent interpretation and application of the capital or disclosure requirements. They are, however, explanatory and supplementary in nature and do not seek to replace any requirements in the BCR. Also, the Q&A are inevitably general in scope and do not take into account the particular circumstances of individual authorized institutions.
Keywords: Asia Pacific, Hong Kong, Banking, Securitization, Q&A, Banking Capital Rules, HKMA
Good Evening All,
As many have already seen, Little League International has released an update that follows the most recent guidance from the Centers for Disease Control and Prevention (CDC). With that, we will be delaying the Spring season until early May, with more updates to come as we get closer.
I know at this point there are many questions coming up about season length, number of games, allstars, etc. I assure you that we are working through all those same questions internally and will post updates as soon as possible. Today I requested our website company to clear all the currently loaded schedules, and will be uploading a new revised schedule as we set the dates. Of course there will have to be some modifications with a shortened season, but I believe we can get everyone a fair amount of games considering the circumstances. We will take advantage of every possible game slot and get teams as many games as possible, likely playing regular season into early June.
I encourage everyone to keep working with their kids at home in anticipation of getting to start back up, and wish everybody good health in these tough times. It’s like they have said, “we’ll never know if we overreacted, but we will know if we didn’t do enough.”
Below is the most up-to-date message from Little League International
Best,
Fernando
Fernando A. Martinez, Ph.D.
GHLL Executive Board, President
UPDATE AS OF MARCH 16, 2020:
This is much bigger than Little League®.
The COVID-19 (coronavirus) pandemic is rapidly changing the way that we, as global citizens, think, act, gather, learn, and live our daily lives. And, yes, that also means how we play Little League.
With the most recent guidance from the Centers for Disease Control and Prevention (CDC), the Little League International Board of Directors and staff is now strongly advising all its local Little League programs to suspend/delay their Little League seasons through no earlier than Monday, May 11. We implore you to follow this recommendation and suspend all Little League activities through no earlier than May 11.
We recognize that this is the heart of the traditional Little League season, and we share in the great disappointment that many are feeling surrounding this additional pause in the 2020 season. However, it is our hope that by doing this, we will all play a small, but important part in flattening the curve in the spread of the coronavirus pandemic.
We will continue to consult with appropriate medical advisors, government health officials and our volunteer leaders around the world, and we are committed to doing the best we can for the safety and well-being of our players, families, volunteers, and fans.
As this situation evolves, Little League International is committed to sharing the best guidance possible for all of our 6,500 leagues in more than 84 countries. It is our sincere hope that we can find ways to bring everyone back to the Little League fields this season, whether that’s later this spring or throughout the summer.
Currently, Little League International is working through all possible scenarios for the 2020 Little League International Tournament and tournament eligibility for our leagues and players in our various divisions of play.
Little League will continue to provide additional guidance on the impact of delaying the season and has developed a series of FAQs available at LittleLeague.org/Coronavirus. We are committed to sharing information as it becomes available on issues like player eligibility and tournament participation, charter and insurance status for the year, and A Safety Awareness Program plan deadlines.
We also will be sharing guidance on how to resume operations when appropriate, best practices for handling the financial implications, and how you can communicate with parents and families in your communities about this delay in Little League activity. This information will continue to be developed and shared on LittleLeague.org/Coronavirus and through all of our communications methods.
There are countless resources available, and we urge you to follow the information available through World Health Organization, Centers for Disease Control and Prevention (CDC), your state’s public health department (click here for a listing of state public health departments), and other county and/or local authorities including precedents set by area school districts and government agencies.
We encourage you to stay in touch with Little League and share any additional feedback or questions from your local communities by emailing [email protected].
Thank you all for your support, understanding, and community leadership.
We will be thinking of our global Little League community during this difficult time.
What are you looking forward to this Advent?
“O Come, O Come, Emmanuel” is a popular advent carol, translated from the series of ancient Latin prayers known as the O Antiphons. Advent is the season in the church calendar leading up to Christmas, when Christians not only remember what Christ has already accomplished through his first coming, but also long for his promised return and the full realisation of his kingdom.
Each of the seven verses meditates upon an aspect of Jesus’ ministry – as prophesied by Isaiah – and addresses him by a different title. In the original Latin, the first letters of these titles form an acrostic “ERO CRAS”, which means “Tomorrow, I will come”.
This hymn recognises that the world we live in is a woeful mess, and longs for Jesus to return to make things right. However, the refrain – sung after each verse – calls us to rejoice because He will keep his promise, He will come.
In one sense, this verse has already been fulfilled. The prophecy of Isaiah 7:14 – that the virgin would bear a son, and he would be called Emmanuel, that is, ‘God with us’ – has already come to pass. By his death and resurrection, the Lord Jesus has ransomed us from our captivity to sin and has made us his people, Israel, no longer exiled from God. However, this verse is sung in the present tense. This reminds us that although Jesus Christ’s salvation work is finished, we do not yet fully experience its benefits – we still struggle with sin, we may at times feel distant from God and we mourn the suffering of this fallen world. We look forward to Jesus’ return to put these things right, that we might experience ‘God with us’ at its fullest.
We pray for God’s truth to be known throughout the world, that ignorance and fear would be dispersed, as prophesied in Isaiah 9:2: “the people who walked in darkness have seen a great light”. We also pray that people would be transformed, that sin – the dark shadow of death – would be put to flight in our lives.
The Lord Jesus has already opened the way to our heavenly home – as Isaiah 22:22 prophesied: “I will place on his shoulder the key of the house of David. He shall open, and none shall shut”. However, we are not there yet. This is a prayer that Jesus would keep us in the faith and bring us safely home.
Isaiah 11:1 promises that a king will come from the line of David, son of Jesse. That king will deliver his people from their foes – the ultimate of which is death. Unless the Lord Jesus returns first, every one of us will die. However, we trust him to give us victory over the grave.
In the words of Isaiah 33:22: “the Lord is our judge, the Lord is our lawgiver, the Lord is our King: he will save us”. This stanza highlights God’s power and rule, demonstrated by the giving of the law at Sinai. It is to the powerful and sovereign God that we look for salvation.
This is a prayer for help to live rightly. Isaiah 11:2 says of the Lord Jesus: "the Spirit of the Lord shall rest upon him, the Spirit of wisdom and understanding, the Spirit of counsel and might, the Spirit of knowledge and the fear of the Lord." Therefore we look to Jesus for wisdom, understanding and guidance.
During this advent season, help us both to remember all that the Lord Jesus has accomplished for us and to look forward, with joy and with certain hope, to his promised return.
Introduction
============
More than 90% of bladder cancers are transitional cell carcinomas and roughly 60% of bladder cancers are low-grade, superficial transitional cell carcinomas. After endoscopic resection, the majority of patients with these cancers develop cancer recurrences, 16% to 25% with high-grade cancers. Approximately 10% of patients with superficial bladder cancers subsequently develop invasive or metastatic cancers. Almost 25% of patients with newly diagnosed bladder cancer have muscle-invasive cancers, the vast majority being histologically high grade cancers. Almost 50% of patients with muscle-invasive bladder cancer already have occult distant metastases ([@b36-tog-2-2007-035]). Therefore, the frequent recurrence after transurethral resection of superficial bladder cancer and subsequent cancer progression are problems for both patients and urologists.
Cystoscopic examination is the gold standard to diagnose bladder cancer, but is costly, incurs substantial patient discomfort, and has variable sensitivity. The development of highly reliable, noninvasive tools for bladder cancer diagnosis would facilitate early detection and help to define the role of molecular markers in prognostic evaluation at the time of initial diagnosis of patients with bladder cancer.
All of the urothelial cells, proteins, and metabolites in the urinary tract can be isolated noninvasively from the urine and analyzed to detect disease in the urinary tract. Cytological analysis of voided urine has been the standard noninvasive method for cancer detection. However, the sensitivity is low, especially for low grade transitional cell carcinomas. More sensitive, non-invasive methods are required and many urine-based tumor markers have been developed for use in detecting and monitoring bladder cancers ([@b47-tog-2-2007-035]; [@b41-tog-2-2007-035]; [@b33-tog-2-2007-035]; [@b19-tog-2-2007-035]; [@b42-tog-2-2007-035]; [@b25-tog-2-2007-035]; [@b6-tog-2-2007-035]; [@b15-tog-2-2007-035]; [@b13-tog-2-2007-035]; [@b31-tog-2-2007-035]; [@b37-tog-2-2007-035]; [@b17-tog-2-2007-035]). The Food and Drug Administration (FDA) has already accepted some of these tumor marker tests for use in routine patient care. Initial studies with new markers are usually promising, but subsequent reports often fail to show comparable results ([@b50-tog-2-2007-035]).
The challenge for the urologist is to develop rational surveillance protocols that provide cost-effective, noninvasive monitoring for low-risk patients, while using a more active approach to identify high-risk refractory cancers before they metastasize. Methylation patterns are established during development and, normally, are maintained throughout the life of an individual. Consequently, DNA methylation is a key regulator of gene transcription and genomic stability, and alteration of DNA methylation is one of the most consistent epigenetic changes in human cancers. Hypermethylation of promoter regions of tumor suppressor genes is now the most well categorized epigenetic change in human neoplasias ([@b28-tog-2-2007-035]). In many cases, aberrant methylation of CpG island within genes has been correlated with a loss of gene expression, and it is proposed that DNA methylation provides an alternate pathway to gene deletion or mutation for the loss of tumor suppressor gene function. Markers for aberrant methylation may represent a promising avenue for monitoring the onset and progression of cancer. Aberrant promoter methylation has been described for several genes in various malignant diseases, and each tumor type may have its own distinct pattern of methylation ([@b10-tog-2-2007-035]; [@b12-tog-2-2007-035]). In transitional cell carcinoma of the bladder, hypermethylation of CpG islands near the promoter region and decreased expression of tumor suppressor genes such as *RUNX3*, *p16,* and *E-cadherin* have been reported ([@b27-tog-2-2007-035]; [@b35-tog-2-2007-035]; [@b38-tog-2-2007-035]). Several studies have demonstrated that hypermethylation of various gene promoters was detectable in DNA isolated from bodily fluids, including urine sediment DNA from bladder cancer patients ([@b9-tog-2-2007-035]; [@b49-tog-2-2007-035]). This article focuses on the prognostic relevance of DNA promoter hypermethylation detected in urine obtained from bladder cancer patients.
Conventional Biomarkers in Urine
================================
In bladder cancer patients, lifelong surveillance is required to detect subsequent tumor recurrences. Many potential tumor markers for bladder cancer have been evaluated for detecting and monitoring the disease in serum, bladder washes, and urine specimens. Development of accurate and noninvasive bladder tumor markers is essential for screening, initial diagnosis, monitoring for recurrence, detection of early progression, and prediction of prognosis, without increasing the frequency of invasive and costly diagnostic procedures. Current patient monitoring protocols generally consist of cystoscopic evaluations and urine cytology every 3--4 months for the first two years and at longer intervals in subsequent years. Cytological examination of voided urine is a highly specific, noninvasive adjunct to cystoscopy. It has good sensitivity for detection of high-grade bladder cancers, but poor sensitivity for low-grade cancers. Furthermore, the accuracy of cytology is dependent upon the level of expertise of the pathologist ([@b46-tog-2-2007-035]). Thus, noninvasive, objective, and accurate biomarkers are needed not only for the primary detection of bladder cancer, but also for monitoring the disease. The recent emergence of sensitive markers for bladder cancer has provided new opportunities for early bladder cancer detection. There are currently more than 20 urinary markers from various stages of disease progression. The FDA has already approved several urine tests for monitoring patients with bladder cancer, including the bladder tumor antigen (BTA) *stat* test, the BTA TRAK test, the fibrinogen--fibrin degradation products (FDP) test, UroVysion, ImmunoCyt, and the nuclear matrix protein-22 (NMP22) assay ([Table 1](#t1-tog-2-2007-035){ref-type="table"}). In general, each of these markers has better sensitivity but lower specificity than cytology, and must still be used as an adjunct to cystoscopy. 
Discrepancies among laboratories in sample handling, cutoffs, and the issue of specificity in nonmalignant urological diseases still pose a dilemma for application of these assays as routine tests in the clinical setting ([@b32-tog-2-2007-035]). None of the biomarkers reported to date has shown sufficient sensitivity and specificity in detecting the spectrum of bladder cancer diseases assessed in routine clinical practice ([@b50-tog-2-2007-035]). The limited value of the established prognostic markers requires analysis of new molecular indicators having the potential to predict the prognosis of bladder cancer patients, particularly, high-risk patients at risk of cancer progression and recurrence.
Methylation Markers in Urine
============================
Tumorigenesis is a multistep process that results from the accumulation and interplay of genetic mutations and epigenetic changes. The inheritance of information on the basis of gene expression levels is known as epigenetics, as opposed to genetics, which refers to the information inherited on the basis of the gene sequence. DNA methylation is an epigenetic mechanism used for long-term silencing of gene expression. It can maintain differential gene expression patterns in a tissue-specific and developmental-stage-specific manner. The direct relationship between the density of methylated cytosine residues in CpG islands and local transcriptional inactivation has been widely documented ([@b26-tog-2-2007-035]). Transcriptional repression by DNA methylation is mediated by changes in chromatin structure. Specific proteins bound to methylated DNA recruit a complex containing transcriptional corepressors and histone deacetylases ([@b2-tog-2-2007-035]). Histone deacetylation results in chromatin compaction and, hence, transcriptional inhibition.
Inactivation of gene expression by abnormal methylation of CpG islands can act as a "hit" for cancer generation ([@b26-tog-2-2007-035]; [@b3-tog-2-2007-035]). Thus, alteration of DNA methylation in CpG islands is emerging as a key event in the inheritance of transcriptionally repressed regions of the genome. Many tumor suppressor genes contain CpG islands and show evidence of methylation specific silencing. Several genes, including *p16*, *RARß*, *E-cadherin*, *DAPK*, and *RASSF1A*, have been reported to undergo methylation in bladder cancer ([@b48-tog-2-2007-035]; [@b9-tog-2-2007-035]; [@b11-tog-2-2007-035]; [@b29-tog-2-2007-035]; [@b35-tog-2-2007-035]; [@b8-tog-2-2007-035]; [@b39-tog-2-2007-035]). In some of these tumors, hypermethylation is associated with loss of heterozygosity; in others, hypermethylation affects both alleles.
Aberrant hypermethylation events can occur early in tumorigenesis, predisposing cells to malignant transformation. Moreover, promoter hypermethylation of CpG islands is strongly associated with tumor development, stage, recurrence, progression, and survival in transitional cell carcinomas of the urinary bladder ([@b27-tog-2-2007-035]; [@b7-tog-2-2007-035]). We have demonstrated that *RUNX3* methylation confers a 100-fold increase in the risk for bladder cancer development (OR, 107.55). *RUNX3* methylation was also associated with cancer stage (OR, 2.95), recurrence (OR, 3.70), and progression (OR, 5.63), suggesting that RUNX3 is required not only to inhibit cancer initiation but also to suppress aggressiveness in primary bladder cancers ([@b27-tog-2-2007-035]). Although various diagnostic markers for bladder cancer development, recurrence, and progression have been reported, none are adequate to predict the behavior of most tumors. The methylation status of *RUNX3* could be a better diagnostic marker for bladder cancer than previously described markers.
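Odds ratios such as those reported for *RUNX3* methylation are derived from a 2×2 contingency table of marker status against outcome. A minimal sketch of the calculation follows; the counts used here are invented for illustration and are not taken from any of the cited studies:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:

                 outcome+   outcome-
    methylated      a          b
    unmethylated    c          d
    """
    return (a * d) / (b * c)

# Hypothetical counts: methylated/cancer, methylated/control,
# unmethylated/cancer, unmethylated/control
print(odds_ratio(45, 5, 15, 85))  # -> 51.0
```

In practice a confidence interval (e.g. via Fisher's exact test or the Woolf logit method) would accompany the point estimate before any clinical interpretation.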
[@b7-tog-2-2007-035] have analyzed hypermethylation at 11 CpG islands in a large cohort of upper urinary tract transitional cell carcinomas (UTTs) and lower tract (bladder) urothelial carcinomas (UCs), and have provided some interesting insights into the differential epigenetic features of the two types of malignancies. Despite morphological similarities between these cancers, more extensive promoter hypermethylation was found in UTTs (96%) than UCs (76%). Compared to tumors without methylation, the presence of methylation was significantly associated with advanced stage, high tumor progression rates, and increased mortality rates. These findings strongly suggest that patterns of promoter hypermethylation are causally associated with bladder cancer and that methylation status could be a useful diagnostic and prognostic marker for bladder cancer in the clinical setting, as well as a therapeutic target for treatment of bladder cancers.
Because some genetic and epigenetic events will occur early in the disease process, molecular diagnosis may allow detection before symptomatic or overt radiographic manifestations. Thus, from a clinical point of view the most promising application for methylation analysis is the early detection of cancers or the utilization of methylation as a prognostic marker. Screening of bodily fluids such as urine may ultimately provide a truly noninvasive diagnostic modality, thereby limiting the need for current imaging techniques that provide anatomical details without definitive pathological correlations. Bodily fluids surrounding or drained from organs from patients with various solid malignancies have been successfully used in methylation specific PCR based detection. These epigenetic changes have been detected in serum ([@b43-tog-2-2007-035]), sputum ([@b4-tog-2-2007-035]), and urine ([@b18-tog-2-2007-035]). Hypermethylation of several gene promoters correlated with bladder cancer in DNA isolated from cancer tissues and from urine sediment has been reported ([Table 2](#t2-tog-2-2007-035){ref-type="table"}). These studies revealed that detection of aberrant promoter methylation in urine is feasible and appears to be more sensitive than conventional cytology.
[@b9-tog-2-2007-035] investigated the methylation state of 7 genes (*RARß, DAPK, E-cadherin, p16, p15, GSTP1,* and *MGMT)* in 22 voided urine samples from bladder cancer patients and 17 control samples. The frequency of methylation in the bladder cancer samples was 45.5% for *DAPK*, 68.2% for *RARß*, 59.1% for *E-cadherin*, and 13.6% for *p16*. Methylation of any one of these four genes could be detected in 90.9% of the urine samples, whereas urine cytology was able to detect cancer cells in only 45.5% of the samples. This difference was more striking in low-grade cases (100% versus 11.1%), where conventional urine cytology is known to have a low sensitivity. Methylation could only be detected in those patients whose tumor tissue also showed gene methylation and no false positives were detected. No methylated copies of *E-cadherin, DAPK,* or *p16* were detected in normal urine (100% specificity). Detecting combinations of methylation markers, however, had a lower specificity, which was related to the presence of methylated *RARß* in the normal urine controls (23.5%).
[@b11-tog-2-2007-035] examined the methylation status of tumor suppressor genes (*APC, RASSF1A*, *p14^ARF^*) in matched samples of sediment DNA from urine specimens obtained before and after surgery from 45 bladder cancer patients, and in normal and benign control DNA samples. Hypermethylation of at least one of the three tumor suppressor genes was found in the matched urine DNA from 39 of 45 patients (87% sensitivity; 100% specificity), including 16 cases that had negative cytology. Hypermethylation (91%) was found more commonly than positive cytology (50%) in urine samples.
[@b16-tog-2-2007-035] investigated DNA methylation of apoptosis-associated genes (*DAPK*, *BCL2*, *TERT*, *EDNRB*, *RASSF1A*, and *TNFRSF25*) in urine sediments. The combined methylation analysis of three genes (*DAPK*, *BCL2*, and *TERT*) provided a high sensitivity (78%) and specificity (100%) for detection of bladder cancer. However, methylation markers such as *EDNRB*, *RASSF1A*, and *TNFRSF25* may not be useful in detection of bladder cancer, since these regions were also methylated in cancer-free individuals.
The feasibility of detecting DNA hypermethylation in voided urine and its potential role as a tumor marker for bladder cancer has been recently reported ([@b21-tog-2-2007-035]). In this study, a quantitative real-time PCR assay was introduced to examine urine sediment DNA from 175 patients with bladder cancer and 94 age-matched control subjects for promoter hypermethylation of nine genes (*APC*, *p14^ARF^*, *CDH1*, *GSTP1*, *MGMT*, *CDKN2A*, *RARβ2*, *RASSF1A*, and *TIMP3*). Compared to conventional methylation-specific PCR, the quantitative analysis of PCR products was critical for reproducible interpretation of the results, and the quantitative methylation-specific PCR assay used in this study provided a highly sensitive automated approach for the detection of methylated alleles. The combined methylation analysis of four genes (*CDKN2A*, *p14^ARF^*, *MGMT*, and *GSTP1*) displayed 69% sensitivity and 100% specificity. For patients without aberrant methylation of any of these four genes, addition of a logistic regression score based on the remaining five genes improved sensitivity from 69% to 82% but decreased the specificity from 100% to 96%. With regard to the association between clinicopathological parameters and the methylation patterns identified in the urine sediment DNA, promoter methylation of both *p14^ARF^* and *MGMT* was significantly associated with increasing tumor stage. Promoter methylation of *p14^ARF^*, *MGMT*, *GSTP1* and *TIMP3* was significantly associated with invasive tumors. Promoter methylation of *GSTP1* and *RASSF1A* was significantly associated with positive cytology. Aberrant methylation of the nine genes examined in the urine sediment DNA of bladder cancer patients was not associated with any other clinical or demographic characteristics, including age at the time of diagnosis, sex, histological subtype, or tumor recurrence. 
The combined methylation marker approach provides evidence that increasing the number of markers in the assay panel increases sensitivity, but also decreases specificity while increasing cost. These results imply that only a careful extension of the selected panel of methylation markers might result in higher sensitivity and specificity in methylation analysis of the urine. These studies suggest, therefore, that the combined methylation marker assay is a promising noninvasive diagnostic and monitoring tool for detection of noninvasive bladder cancers.
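The arithmetic behind this sensitivity/specificity tradeoff can be sketched with a toy simulation. The per-marker rates below are hypothetical numbers chosen for illustration, not the figures reported in the studies cited above; the point is only that calling a sample positive when *any* marker in a panel is methylated pushes sensitivity above that of the best single marker while the markers' false-positive rates accumulate and erode specificity.

```python
import random

random.seed(1)

# Hypothetical per-marker detection rates -- synthetic values for
# illustration only, not data from the cited studies.
SENS_PER_MARKER = [0.45, 0.40, 0.35, 0.30]   # P(marker methylated | cancer)
FPR_PER_MARKER = [0.01, 0.02, 0.01, 0.02]    # P(marker methylated | no cancer)

def positive_fraction(n_subjects, rates):
    """Fraction of subjects positive on at least one marker (OR rule)."""
    hits = sum(
        1 for _ in range(n_subjects)
        if any(random.random() < r for r in rates)
    )
    return hits / n_subjects

sensitivity = positive_fraction(10_000, SENS_PER_MARKER)
specificity = 1 - positive_fraction(10_000, FPR_PER_MARKER)

# The panel is more sensitive than its best single marker (0.45),
# but less specific than its best single marker (0.99).
print(f"panel sensitivity ~ {sensitivity:.2f}, panel specificity ~ {specificity:.2f}")
```

Under these assumed rates the combined panel detects roughly four out of five cases, mirroring the qualitative pattern described above: each added marker buys sensitivity at the price of specificity (and assay cost).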
There are, however, a number of criticisms of the clinical and prognostic relevance of assays detecting promoter hypermethylation in the urine. First, the exact molecular mechanisms of DNA methylation in health and cancer remain to be elucidated. One uncertainty is the extent of aberrant DNA methylation in nonmalignant tissue and its increase with aging. In contrast to early reports indicating a lack of aberrant hypermethylation in normal tissues, promoter hypermethylation is often found in histologically normal tissues, and is correlated with aging ([@b22-tog-2-2007-035]). Aging-related methylation has been demonstrated for the *ER* gene by [@b24-tog-2-2007-035] and for several other genes by [@b1-tog-2-2007-035], which perhaps partly explains the direct correlation between aging and increased incidence of cancer. Susceptibility to aging-related aberrant hypermethylation differs among genes and tissues ([@b1-tog-2-2007-035]). Although little is known about the causes of aging-related aberrant hypermethylation, the phenomenon might be related to the accessibility of the tissue to exogenous chemicals ([@b23-tog-2-2007-035]). In addition, endogenously-produced chemicals, including reactive oxygen species, are suspected as possible causes ([@b45-tog-2-2007-035]). Thus, tumor suppressor gene methylation can be found in exfoliated urinary cells from patients without cancer and increases in frequency with aging ([@b16-tog-2-2007-035], [@b51-tog-2-2007-035]). In the near future, it may be possible that technology will be developed to discriminate between methylation patterns related to aging versus cancer. Secondly, several studies of promoter methylation produced different results. Eventually, new methodologies might explain these discrepancies, but these dissimilarities emphasize the need for standardized methodological protocols if molecular diagnostic tools are to be a useful component of routine clinical practice.
Third, some markers, such as *GSTP1*, can be methylated in both bladder and prostate cancer. *GSTP1* promoter methylation is an attractive prostate cancer biomarker because it is seldom observed in non-cancerous prostate tissue ([@b5-tog-2-2007-035]; [@b20-tog-2-2007-035]). Furthermore, *GSTP1* hypermethylation is more frequent in prostate cancer than in bladder cancer ([@b9-tog-2-2007-035]; [@b35-tog-2-2007-035]). Even though a specific methylation marker in urine might be promising, the most powerful established detection tools, such as cystoscopy in bladder cancer and PSA in prostate cancer, should be considered first, and methylation markers must still be used as adjuncts.
Additionally, discrepancies between methylation profiles in urine and surgical samples have been reported. In several published studies, paired urine and surgical specimens had a very high level of concordance ([@b9-tog-2-2007-035]; [@b8-tog-2-2007-035]). However, [@b40-tog-2-2007-035] demonstrated that urine methylation profiles were quite different from those of the corresponding follow-up biopsy specimens. The most likely explanation for this discrepancy lies in the different sensitivities of the assays used. Urine is obviously a more heterogeneous sample than a bladder biopsy, since it may contain cells from different neoplastic clones throughout the bladder. Utilization of multiplex methylation specific PCR (MSP) or quantitative PCR assays on urine may lead to detection of rare cells from clones remote from the biopsy site. Conversely, methylation events detected in biopsy samples but not in urine may represent rare events within the biopsy sample, possibly in the deep layers of the urothelium, that are not represented in urine. Future studies should be focused on optimizing the sensitivity of MSP assays for the detection of clinically relevant urothelial lesions from urine samples. However, detecting DNA hypermethylation in voided urine remains promising in terms of early detection and surveillance of bladder cancer.
Conclusions
===========
Transitional cell carcinomas of the urinary bladder have diverse biological and functional characteristics. Although current pathological and clinical variables provide important prognostic information, these variables are still limited in assessing the true malignant potential of most bladder cancers. A better understanding of the molecular mechanisms involved in carcinogenesis and cancer progression has provided a large number of molecular markers of bladder cancer having potential diagnostic and prognostic value. Cystoscopy is the mainstay for diagnosing bladder cancer, but is associated with high cost and patient discomfort. Cytology and many urine-based tumor markers provide minimal information for detecting and predicting the prognosis of bladder cancers. In contrast, promoter hypermethylation of CpG islands is strongly associated with tumor development, stage, recurrence, progression, and survival in transitional cell carcinomas of the urinary bladder. Detection of DNA methylation in voided urine is feasible and is also more sensitive than conventional urine cytology. Ultimately, all types of urological cancers may be screened in urine with a larger panel of hypermethylated genes. The panel could be easily extended in the future to simultaneously provide early detection and prognostic stratification, as well as providing novel targets for therapy. The epigenetic silencing of tumor suppressor genes is interesting from a clinical standpoint because it is possible to reverse epigenetic changes and restore gene function to a cell. In terms of treatment and prevention of bladder cancer, methylation markers might be more useful than conventional molecular markers. Treatment with DNA methylation inhibitors can restore the activities of dormant genes such as *RUNX3* and decrease the growth rate of cancer cells in a heritable fashion. It should therefore be possible to partially reverse the cancer phenotype by the use of methylation inhibitors. 
This will eventually lead to personalized target therapy tailored toward specific molecular defects, thereby significantly lowering the morbidity associated with bladder cancer.
The present study was supported by Regional Industry Technology Department Project, Ministry of Commerce, Industry and Energy (Grant No. 10018327), and Ministry of Health and Welfare, Korea.
######
Currently available urinary markers for bladder cancer.
| **Test** | **Marker** | **Sensitivity (%)** | **Specificity (%)** | **References** |
|---|---|---|---|---|
| BTA *Stat* | Human complement factor H related protein | 60--70 | 50--75 | ([@b47-tog-2-2007-035]; [@b41-tog-2-2007-035]; [@b33-tog-2-2007-035]; [@b19-tog-2-2007-035]) |
| BTA TRAK | Human complement factor H related protein | 60--70 | 50--75 | ([@b47-tog-2-2007-035]; [@b41-tog-2-2007-035]; [@b33-tog-2-2007-035]; [@b19-tog-2-2007-035]) |
| FDPs | FDPs | 78--91 | 75--90 | ([@b42-tog-2-2007-035]; [@b25-tog-2-2007-035]) |
| UroVysion | Chromosomal probes | 70--100 | 90 | ([@b6-tog-2-2007-035]; [@b15-tog-2-2007-035]) |
| ImmunoCyt | High molecular weight CEA and mucins | 70--95 | 70--85 | ([@b13-tog-2-2007-035]; [@b31-tog-2-2007-035]; [@b37-tog-2-2007-035]) |
| NMP22 | Nuclear mitotic apparatus protein | 60--75 | 70--85 | ([@b47-tog-2-2007-035]; [@b29-tog-2-2007-035]; [@b17-tog-2-2007-035]; [@b42-tog-2-2007-035]) |
######
Potential epigenetic markers in bladder cancer.
| **Marker** | **Chromosomal locus** | **References** |
|---|---|---|
| *APC* | 5q21--q22 | ([@b11-tog-2-2007-035]) |
| *BCL2* | 18q21.3 | ([@b16-tog-2-2007-035]) |
| *CDH1* (*E-cadherin*) | 16q22.1 | ([@b35-tog-2-2007-035]; [@b9-tog-2-2007-035]) |
| *CDKN2A* | 9p21 | ([@b21-tog-2-2007-035]) |
| *DAPK* | 9q34.1 | ([@b9-tog-2-2007-035]; [@b7-tog-2-2007-035]; [@b16-tog-2-2007-035]) |
| *FHIT* | 3p14.2 | ([@b35-tog-2-2007-035]) |
| *GSTP1* | 11q13 | ([@b21-tog-2-2007-035]) |
| *LNMA3* | 18q11.2 | ([@b44-tog-2-2007-035]) |
| *LNMB3* | 1q32 | ([@b44-tog-2-2007-035]) |
| *LNMC2* | 1q25--q31 | ([@b44-tog-2-2007-035]) |
| *MGMT* | 10q26 | ([@b21-tog-2-2007-035]) |
| *p14^ARF^* | 9p21 | ([@b11-tog-2-2007-035]; [@b21-tog-2-2007-035]) |
| *p16^INK4A^* | 9p21 | ([@b9-tog-2-2007-035]) |
| *RASSF1A* | 3p21 | ([@b7-tog-2-2007-035]; [@b11-tog-2-2007-035]; [@b8-tog-2-2007-035]; [@b35-tog-2-2007-035]; [@b34-tog-2-2007-035]) |
| *RUNX3* | 1p36 | ([@b27-tog-2-2007-035]) |
| *TERT* | 5p15.33 | ([@b16-tog-2-2007-035]) |
| *TIM3* | 5q33.2 | ([@b14-tog-2-2007-035]) |
Q:
Intercept is in the error term (dropping?)
The model is $$y_{it} = \delta_0d2_t + \delta_1 crm_{it} + (\alpha_i+u_{it})$$
Here the intercept is placed in the error term. Therefore, if $\alpha_i$ is correlated with an independent variable, this will cause bias problems.
What are the consequences of dropping $\alpha_i$?
Why is it important to keep it?
Why, under fixed effects or first differencing, is $\alpha_i$ eliminated?
A:
When building a sum you can easily change the order (commutative property of addition) and the brackets (associative property of addition).
$y_{it} = \delta_0d2_t + \delta_1 crm_{it} + (\alpha_i+u_{it})$
is the same as
$y_{it} = \delta_0d2_t + \delta_1 crm_{it} + \alpha_i+u_{it}$
and the same as
$y_{it} = \alpha_i + \delta_0d2_t + \delta_1 crm_{it} +u_{it}$.
Nonetheless I would like to outline why the equation is written the way it is:
The first part of the equation equals the expected value under certain conditions.
$y_{it} = \delta_0d2_t + \delta_1 crm_{it}$
The part in the brackets, however, represents the noise. These are random effects which depend on the individual and not on the time component. The variation which is not explained by the regressors through $\delta_0$ and $\delta_1$ is captured by the term in the brackets; thus neither time nor crm has an impact on this component.
$\alpha_i+u_{it}$
In other words, if $\delta_0$ and $\delta_1$ are both zero, then the following equation is always true.
$y_{it} = \alpha_i+u_{it}$
Consequences of dropping $\alpha_i$:
You delete any effect which is typical for the individual. For example, if your sample consists of Sweden, Italy, Spain, Portugal and Malta and you delete the individual-specific component, then you no longer have the "Sweden effect" (or the corresponding effect for the other countries). So you will no longer capture the fact that Sweden is a Northern European, Protestant country with long winters, while the other countries are different.
If we apply a FD (first-difference) estimation, we are not interested in the Sweden effect itself, but in the effect of the differences between the years. The time-invariant individual component therefore drops out during the differencing step.
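The two claims above — that leaving a correlated $\alpha_i$ in the error term biases the estimate, and that first differencing removes it — can be illustrated with a short simulation. This is a sketch; the data-generating process and variable names are my own choices, not from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, beta = 500, 5, 2.0

# Individual effect alpha_i, deliberately correlated with the regressor.
alpha = rng.normal(size=(N, 1))
x = 0.8 * alpha + rng.normal(size=(N, T))  # x_it correlated with alpha_i
u = rng.normal(size=(N, T))
y = beta * x + alpha + u                   # alpha_i sits in the error term

# Pooled OLS (no intercept needed: everything is mean zero) ignores alpha_i,
# so the slope estimate is biased by cov(x, alpha) / var(x).
b_pooled = (x.ravel() @ y.ravel()) / (x.ravel() @ x.ravel())

# First differencing: (y_it - y_i,t-1) removes the time-invariant alpha_i,
# so the differenced regression recovers beta.
dx, dy = np.diff(x, axis=1), np.diff(y, axis=1)
b_fd = (dx.ravel() @ dy.ravel()) / (dx.ravel() @ dx.ravel())

print(f"pooled OLS: {b_pooled:.2f}  first differences: {b_fd:.2f}  (true beta = {beta})")
```

The pooled estimate is pulled away from the true $\beta$ by the correlation between $x_{it}$ and $\alpha_i$, while the first-difference estimate is close to $\beta$ because the differenced equation no longer contains $\alpha_i$ at all.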
I am interested in this particular theory because it truly is an interesting theory, and I think it is true of, or can be applied to, a lot of criminals. Most criminal acts are committed because people need to commit the crime to stay at the level they are at: this could mean robbing so they can stay afloat, committing some sort of embezzlement to try to make more money for that person's family, or simply wanting more money. And then there are those who do it for the thrill. I think these types of people can be broken down into different categories.
The less opinion and emotional description are used, the more room the audience has to think. The controversial issue raised is whether to sacrifice individual privacy in return for national security. Supporters usually defend the idea by pointing to how effective it is in keeping people safe. According to a survey conducted by Horne in 1998 (as cited in Isnard 2001, p. 3-4), there is an interrelated link between the decreasing crime rate and the presence of CCTVs. The population is socialized to behave well according to laws under conspicuous surveillance.
Often, a conviction of a certain crime will cost the defendant their career, certifications, and family, so allowing them to take a plea bargain will avoid losing what is important to them. The majority will learn from their mistake and never repeat the offense again. Some will not learn from their mistake or second chance and become repeat offenders. The repeat offenders will then sustain harsher punishment and a plea bargain is usually off of the table. Although I like ideas from both of the models, as a conservative and knowing how the court system works, I feel that the crime control model is the most effective.
The criminal justice system is not blind to the wealthy; in fact, it favors them. The second rule of the Pyrrhic defeat theory states, "Failure to treat as crimes the harmful acts of the rich and powerful" (179). Rich people also commit crimes, yet a criminal is usually perceived as both a minority and poor. Rich people receive more of a slap on the wrist, and more second chances, than the less privileged classes. People in power who happen to be rich have the authority and influence to create a narrative of the threat of poor people to the masses. The consequence of that narrative is a broken system in which there are effectively two courts: one for people with money, and one for those without.
"The principle of manipulability refers to the predictable ways in which people act out of rational self-interest and might therefore be dissuaded from committing crimes if the punishment outweighs the benefits of the crime, rendering the crime an illogical choice." (http://www.biography.com/people/cesare-beccaria-39630) Beccaria believed that the criminal justice system needed to be changed; he thought the present criminal justice system was "barbaric and antiquated". Beccaria also believed that certain laws should be changed, and reconsidered whom they should benefit. He believed the system should establish the appropriate punishment for each crime committed. Unlike many other theories, "On Crimes and Punishments" sought to help and protect the rights of criminals as well as the rights of victims; Beccaria believed that punishment of criminals should be that which serves the greatest public good. Beccaria also put forward in his theories the first modern argument against the death penalty.
The police also work inefficiently in these countries, and criminals easily escape punishment. Punishment in public, especially capital punishment, strongly controls the crime rate in most of the countries with a low crime rate. Capital punishment is also linked with complicated illustrations. Carrying out capital
In Dostoevsky’s Crime and Punishment, Dostoevsky challenges the concept of crime. Through Raskolnikov’s ability to rationalize murder and evil, Dostoevsky challenges what counts as a crime. By depicting Raskolnikov as someone who rationalizes his acts, it can be understood that the concept of crime is dependent on the situation and the outcome. With this, one can question whether a crime remains a crime even if it results in the benefit of the majority of the population. In this paper, I will be arguing about the concept of what crime is through the situations and the outcomes shown in Crime and Punishment, with the help of true-to-life crimes.
The invention belongs to a traditional Chinese medicine composition, and relates to Chinese herba external lotion for treating swelling of an upper limb of an affected side after breast cancer surgery. The Chinese herba external lotion is prepared by following raw materials including, by weight, from 9 to 11g of loofah, from 9 to 11g of hematoxylon, from 9 to 11g of angelica sinensis, from 9 to 11g of turmeric, from 9 to 11g of red paeony roots, from 9 to 11g of rhizoma ligustici wallichii, from 9 to 11g of cortex phellodendri, from 9 to 11g of rhizoma atractylodis, from 14 to 16g of radix sophorae flavescentis, from 14 to 16g of safflower carthamus, from 14 to 16g of glabrous greenbrier rhizome, from 14 to 16g of pericarpium zanthoxyli, from 14 to 16g of speranskia tuberculata, from 14 to 16g of common clubmoss herb and from 14 to 16g of suberect spatholobus stems. The Chinese herba external lotion has the advantages that the Chinese herba external lotion is fine in curative effect, low in cost, free of toxic and side effects and convenient in use and takes effects fast, a preparation method is simple, and raw materials are simple and easy to obtain.
By Huynh Thi Bich Ngoc,
M.A. in TESOL,
The University of Queensland, Australia
Over the past few years, together with magnificent technological advances, computer network technology is now exerting its influence on various aspects of life including government, business, economics, and undoubtedly, education as well. Under such circumstances, there has been a significant increase of interest in using computers and their applications not only in Information Technology classrooms but also in the field of language teaching and learning. The role of computers in language instruction has become an important issue.
Besides, with an abundance of interactive activities on the Internet and the World Wide Web, our students can now play games and learn the language at the same time. This kind of learning experience was impossible before the development of the computer network technology.
Next, computer network technology provides students with opportunities to have access to authentic materials and information about the target language culture, which may be missing from many course books. As an understanding of culture is vital in language learning and may help enhance understanding of the target language, current pedagogical theories stress the importance of integrating culture into the language classroom (Canale & Swain, 1981). In this circumstance, computer network technology offers great advantage as it allows easier access to the target language and culture. It has the potential to bring people and places to the classroom, thus adding realism, authentic sociocultural and sociolinguistic information and help students have a real sense of immersion. It also provides students with a multimedia mirror on the target culture in that “it can bring the sounds, words, and images of the foreign language, embedded in their culture, into the classroom” (Atkinson, 1992, cited in Hackett, 1996, p. 17), and thus, can help expose students to international communication and new cultures as well as break down stereotypes. Computer Mediated Communication (CMC), and particularly e-mail and tele-conferencing in the language classroom can “provide authentic communication, which helps develop students’ communicative, literacy, and critical thinking skills” (Kelm, 1992; Kern, 1995, in Singhal, 1998).
Besides, the emergence of computer network technology has also made a significant contribution to language teaching and learning. CALL (Computer Assisted Language Learning) software and programs have the potential to improve learner autonomy in that they provide students with the power to control the speed, rate, timing, and order of tasks in a language program, and allow students to work at their own level. Furthermore, Little (1996) states that information technology can play an important role in the development of learner autonomy as it facilitates the students’ learning and provides students with the opportunity to use what they have learned. CALL software programs have been designed for the purpose of language teaching, while other tools such as the Internet, e-mail, etc. also promote student-centered language learning (Gonglewski, Meloni, & Brandt, 2003) and help students develop their communicative skills as well. What is more, CALL programs also provide learners with a variety of choices in terms of which aspects of the target language (such as grammar, vocabulary, pronunciation, etc.) they want to practise, what skills (listening, speaking, reading, writing) they want to develop, and which topics they are interested in. Thanks to this kind of new technology, learners...
Self-defensive war uses violence to transfer risks from one’s own people to others. We argue that central questions in just war theory may fruitfully be analyzed as issues about the morality of risk transfer. That includes the jus ex bello question of when states are required to accept a ceasefire in an otherwise-just war. In particular, a “war on terror” that ups the risks to outsiders cannot continue until the risk of terrorism has been reduced to zero or near zero. Some degree of security risk is inevitable when coexisting with others in the international community, just as citizens within a state must accept some ineradicable degree of crime as a fact of community life.
We define a conception of morally legitimate bearable risk by contrasting it with two alternatives, and argue that states must stop fighting when they have achieved that level. We call this requirement the Principle of Just Management of Military Risk. We also argue that states should avoid exaggerated emphasis on security risks over equivalent risks from other sources—the Principle of Minimum Consistency Toward Risks. This latter principle is not a moral requirement. Rather, it is a heuristic intended to correct against well-known fallacies of risk perception that may lead states to overemphasize security risks and wrongly export the costs of their security onto others. In conclusion, we suggest that states must invest in non-violent defensive means as a precondition for legitimately using force externally.
Publication Citation
Ethics (forthcoming)
Scholarly Commons Citation
Blum, Gabriella and Luban, David, "Unsatisfying Wars: Degrees of Risk and the Jus ex Bello" (2014). Georgetown Law Faculty Publications and Other Works. 1370.
At the heart of the impeachment inquiry undertaken by the U.S. House is a complaint from a federal whistleblower about a phone call President Trump made to the president of Ukraine in July.
President Trump and allies have demanded release of the whistleblower's name. One name of an individual alleged to be the whistleblower is in public circulation.
In our view, that's disturbing. Why? Because we believe whistleblowers should be viewed through a wide lens, not a narrow, partisan lens.
Passed in 1989, the federal Whistleblower Protection Act provides protections for employees of the federal government who report possible wrongdoing such as violations of laws, rules, or regulations, mismanagement, abuse of authority, waste, or danger to public health or safety.
Retaliatory action is considered a violation. Isn't outing the whistleblower, in fact, a form of retaliation?
Debate rages today about whether it's illegal to publicly identify the Ukraine whistleblower. To that, we say: If it isn't against the law to name this, or any, federal whistleblower, it should be - for obvious reasons.
If the intent of the law is to provide the protections necessary for individuals to step forward, at great risk, and shine a light on possible improper activities of our federal government that the rest of us should want to know about, then those who do deserve a guarantee of confidentiality - regardless of who the president is or which political party wields power in Congress.
Complaints from whistleblowers have saved lives, saved billions of dollars and produced important reforms. Protected whistleblowers represent an invaluable tool in helping keep the federal government honest and accountable.
In 1988, the year before passage of the Whistleblower Protection Act, the Office of Special Counsel (the agency tasked with protection of federal whistleblowers from retaliation) received 120 whistleblower disclosures. Last year, the office received 1,559 of them.
With any complaint, including the one about the Ukraine call, an investigation to determine the merits of what is alleged by a whistleblower should be conducted without forcing the individual into the public spotlight. To do otherwise sets a troubling precedent for the future.
Our concern is that calls for public release of the Ukraine whistleblower's name, and the cavalier release of one name by some, create the potential for a chilling effect on others who may be considering blowing the whistle within a federal department or agency. In light of what they have read and heard about the Ukraine matter, they might choose to remain quiet.
And if whistleblowers stop talking, all Americans lose.
CROSS-REFERENCE TO RELATED APPLICATION(S)
FIELD OF THE INVENTION
SUMMARY
DETAILED DESCRIPTION
The present application claims the benefit of priority to Provisional Patent Application No. 61/506,214 filed Jul. 11, 2011, the contents of which are incorporated herein by reference in their entirety.
Disclosed embodiments pertain to an inventive method and apparatus that confers the ability to image using Magnetic Resonance Imaging (MRI) to an optical microscope.
The following presents a simplified summary in order to provide a basic understanding of some aspects of various invention embodiments. The summary is not an extensive overview of the invention. It is neither intended to identify key or critical elements of the invention nor to delineate the scope of the invention. The following summary merely presents some concepts of the invention in a simplified form as a prelude to the more detailed description below.
Disclosed embodiments pertain to an inventive method and apparatus that confers the ability to image using Magnetic Resonance Imaging (MRI) to an optical microscope.
The description of specific embodiments is not intended to be limiting of the present invention. To the contrary, those skilled in the art should appreciate that there are numerous variations and equivalents that may be employed without departing from the scope of the present invention. Those equivalents and variations are intended to be encompassed by the present invention.
In the following description of various invention embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown, by way of illustration, various embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope and spirit of the present invention.
Moreover, it should be understood that various connections are set forth between elements in the following description; however, these connections in general, and, unless otherwise specified, may be either direct or indirect, either permanent or transitory, and either dedicated or shared, and that this specification is not intended to be limiting in this respect.
Disclosed embodiments pertain to an inventive method and apparatus that confers the ability to image an object using magnetic resonance imaging (MRI) to an optical microscope. Alternatively, when chemical information about the object is required, the invention permits the collection of such information through magnetic resonance spectroscopy (MRS). Through implementation of the disclosed embodiments, it is possible to collect spectroscopic information as well as anatomic information using the objective structure and/or MRI-enabled or MRS-enabled stage. For the purpose of this disclosure, the objective conferring MRI or MRS capability to the microscope is referred to as MRI, consistent with the practice in the MRI industry (in which a single MRI instrument may be used to perform imaging and/or spectroscopy). It is understood that in this invention disclosure, the term magnetic resonance is used broadly, referring to signals from protons, electrons, and/or other particles.
FIG. 1 is an illustration of one disclosed embodiment of the MRI microscope adapter 100 provided in combination with a conventional optical microscope 105.
The microscope 105 may be, for example, but is not limited to, a compound microscope that uses lenses and light to enlarge an image of a sample/specimen. Accordingly, the microscope 105 may have two systems of lenses for greater magnification: the ocular, or eyepiece lens, that one looks into, and the objective lens 110, or the lens closest to the object. It should be understood that the term "objective lens" generally refers to and encompasses any structure that physically approaches an object or sample in order to assist in providing information about the sample 135 or sample holder 140.
As shown in FIG. 1, the microscope 105 includes an eyepiece 110. That eyepiece is optionally coupled to a digital camera 115 to record the data generated by viewing through the eyepiece 110. The microscope 105 also includes an arm 120 that supports the components of the microscope 105 and connects them to the base of the microscope.
In accordance with at least one embodiment, the conventional optical objective lens 125 is one of several objective lenses, each of which includes a variety of lens elements that confer various degrees of magnification to microscope 105. In accordance with at least one embodiment, optical objective lens 125 can be swung out of the optical path of the optical microscope 105 so as to enable an MRI imaging component to be provided, or to permit a different objective lens 125 of different optical magnification to be employed.
Also included is an illumination element, which may be included or be implemented as a mirror or other source of light (whether visible or not); thus, it should be understood that illumination is meant to be general, including laser sources and/or elements required for single-photon, dual-photon, or confocal microscopy, or other forms of microscopy. A mirror may be used to reflect light from an external light source up through the bottom of the stage 145. Alternatively, a steady light source may be used in place of a mirror.
Conventional microscopes usually include three or four objective lens elements. They almost always consist of 4×, 10×, 40× and 100× powers. When coupled with a 10× (most common) eyepiece lens, total magnifications of 40× (4× times 10×), 100×, 400× and 1000× are provided. The microscope may also optionally include achromatic, parcentered, parfocal lenses and a condenser lens (which focuses light onto the specimen).
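As a quick check on the arithmetic above, the total magnification of a compound microscope is simply the product of the objective power and the eyepiece power. The short sketch below is illustrative only and is not part of the disclosure:

```python
# Total magnification = objective power x eyepiece power.
EYEPIECE_POWER = 10  # the most common eyepiece

for objective_power in (4, 10, 40, 100):
    total = objective_power * EYEPIECE_POWER
    print(f"{objective_power}x objective -> {total}x total magnification")
```

Running this reproduces the four totals quoted in the text (40×, 100×, 400× and 1000×).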
In accordance with at least one embodiment, the MRI microscope adapter 100 includes an MRI-enabled objective lens 130 which replaces the conventional optical objective lens. The MRI-enabled objective lens 130 includes one or more conventional optical objective elements as well as one or more coils 135 within or attached to the MRI-enabled objective lens 130. The coil(s) 135 is (are) in close proximity to a sample and/or sample holder 140.
FIG. 2 is an expanded illustration of an embodiment of the MRI-enabled objective lens 130, coil apparatus 135, sample or sample holder 140, and stage 145, which provides magnetic resonance images of the object of interest (i.e., sample or sample-holder 140) but which does not simultaneously provide an optical image of the object of interest.
In FIG. 2, coil apparatus 135 is shown to comprise planar gradient coil assembly 210 and RF coil assembly 215. Planar gradient coil assembly 210 may comprise two- or three-dimensional gradient coils, and may include shim functionality. Alternatively, an additional coil and/or permanent magnet 220 may be present in the objective structure to provide shim functionality and/or to establish a uniform magnetic field that is present while the MRI-enabled objective 130 is in close proximity to sample or sample-holder 140. RF assembly 215 may comprise either separate transmit and receive coils or coils that combine both functions. Optically-transparent sections of sample-holder 140 and stage 145 are denoted as feature 225 in FIG. 2.
It is understood that power supplies and connecting cables attach to the various components of the MRI-enabled objective lens. It is also understood that currents through gradient coil assembly 210 and/or RF coil assembly 215 may be pulsed in order to collect images. It is also understood that the shim coil and/or permanent magnet 220 may provide pulsed or static magnetic fields.
FIG. 3 is an expanded illustration of an embodiment of the MRI-enabled objective lens 130, which provides magnetic resonance images of the object of interest (i.e., sample or sample-holder 140). In FIG. 3, some or all of the functions of gradient coil assembly 210 and/or RF coil assembly 215 are provided through permanent or electromagnetic structures 310 and 315 embedded in sample-holder 140 and/or stage 145, respectively.
FIG. 4 is an expanded illustration of an embodiment of the MRI-enabled objective lens 130, which provides magnetic resonance images of the object of interest (i.e., sample or sample-holder 140) and which may simultaneously provide an optical image of the object of interest, as a result of optically-transparent sections 410 of the components comprising coils 135.
Note that, in accordance with at least one other embodiment, the MRI imaging component may include an MRI objective lens that is actually separate from the optical objective lens (rather than being combined to provide an MRI-enabled objective lens) and may include one or more conventional optical objective elements as well as one or more coils within or attached to the MRI objective lens. Thus, the coil(s) is (are) in close proximity to a sample or sample holder 140.
Likewise, it should be understood that the term coil is used herein to refer in general to any set of electrical conductors arrayed to create an electromagnetic field.
In accordance with at least one disclosed embodiment, the MRI-enabled objective structure may be equipped with a radio frequency (RF) coil that is brought in close proximity to the sample to be imaged. Accordingly, it is possible to retain the optical elements of the objective structure and also to include the RF coil in such a manner that it does not always interfere with the optical path of light through the sample to be imaged.
Moreover, in accordance with at least one disclosed embodiment, a gradient coil is also added to the RF coil that resides on, or replaces, the MRI-enabled objective lens in order to form the MRI-enabled objective structure. In such an embodiment, the stage of the optical microscope may contain (or be replaced by) coils and/or permanent magnets that establish magnetic fields. Such magnetic fields, in turn, introduce at least one magnetic field gradient, which may be used to implement imaging of the sample. Thus, the term “MRI-enabled stage” should be understood to refer generally to and encompass an optical microscope stage equipped with such coils and/or magnets.
In accordance with at least one embodiment, the gradient coil added to the objective structure adds to, or replaces, one or more of the coils on the stage.
In accordance with at least one embodiment, it is possible to employ coils used to create a gradient field without the need for a separate apparatus to create a static field.
In accordance with at least one embodiment, it is possible to employ superconductors in the coils.
In accordance with the disclosed embodiments, the MRI microscope adapter 100 also includes or is coupled to one or more computational processing units (CPUs) and/or controllers 155 that operate under the control of one or more software algorithms (stored, for example, on computer readable media) to enable and control operation of at least some of the aforementioned components of the MRI microscope adapter 100 and/or the optical microscope 105. Such CPUs and/or controllers 155 may be implemented in one or more general purpose or special purpose computers that may be coupled to and/or include memory for storing software that enables superimposing, mapping, enlarging, and/or analyzing the MRI image electronically on digital representations of the optical image or images generated by the microscope. The controllers 155 may also include such software algorithms configured to control operation and/or positioning of the coils 135 and positioning of the stage 145 if positioning may be implemented using motors or the like (not shown). Furthermore, the coil(s) 135 and stage 145 may be connected or coupled to electronic equipment (e.g., including amplifiers, digitizers, power sources, and other computer implemented equipment and peripherals such as printers), as needed to create, record and analyze optical and MRI image data.
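As one hedged illustration of the superimposition functionality described above, an MRI image can be alpha-blended onto a digital optical image once the two are co-registered on the same pixel grid. The sketch below is the author's illustration under that assumption; the function name, the blending choice, and the stand-in arrays are not from the disclosure:

```python
import numpy as np

def superimpose(optical, mri, alpha=0.5):
    """Alpha-blend a grayscale MRI image onto an optical image.

    Both images are float arrays scaled to [0, 1] and must already be
    co-registered to the same height/width grid.
    """
    if optical.shape[:2] != mri.shape[:2]:
        raise ValueError("images must be co-registered to the same grid")
    # Promote the MRI image to RGB only if the optical image is color.
    mri_layer = np.stack([mri] * 3, axis=-1) if optical.ndim == 3 else mri
    return (1 - alpha) * optical + alpha * mri_layer

optical = np.zeros((4, 4))  # stand-in for the recorded optical image
mri = np.ones((4, 4))       # stand-in for the generated MRI image
overlay = superimpose(optical, mri, alpha=0.25)
print(overlay[0, 0])        # 0.25
```

In practice the co-registration step (aligning the MRI data to the optical pixel grid) would precede the blend; it is assumed away here for brevity.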
FIG. 5 illustrates one example of a method for imaging a sample in accordance with at least one disclosed embodiment. As shown in FIG. 5, the method begins at 500 and control proceeds to 505, at which the optical microscope is used to select a region of interest in the sample to be imaged. Subsequently, at 510, the MRI-enabled objective structure is positioned into place so that the sample is located between the objective lens and the stage. Then, at 515, the coils and associated readout electronics in the MRI-enabled stage are energized to form an image of the region of the sample that has been selected. Control then proceeds to 520, at which an MRI image is generated or MRS data is collected. Control then proceeds to 525, at which the generated MRI image is optionally superimposed electronically on digital representations of the optical image or images generated by the microscope. Control then proceeds to 530, at which the operations end.
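The control flow of FIG. 5 can be sketched as a small driver function. Every callable here is a hypothetical stand-in for the hardware and software the disclosure describes, not an actual API:

```python
def image_sample(select_roi, position_objective, energize_coils,
                 acquire, superimpose=None):
    """Sketch of the FIG. 5 method steps."""
    roi = select_roi()              # 505: select region of interest optically
    position_objective(roi)         # 510: move MRI-enabled objective into place
    energize_coils(roi)             # 515: energize coils/readout electronics
    mri_data = acquire(roi)         # 520: generate MRI image or collect MRS data
    if superimpose is not None:     # 525: optionally overlay on the optical image
        return superimpose(mri_data)
    return mri_data

# Trivial dry run with stub callables:
result = image_sample(lambda: "roi",
                      lambda roi: None,
                      lambda roi: None,
                      lambda roi: f"mri({roi})")
print(result)  # mri(roi)
```

The optional `superimpose` argument mirrors the disclosure's statement that the overlay step is optional.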
Disclosed embodiments of the MRI microscope are inventive over conventional MRI microscopes in various ways. For example, conventional MRI microscopes have employed small RF and/or gradient coils in close proximity to a sample, but have relied on a large magnet to create an environment that would enable MRI microscopy. An example of such a use is the publication in the Journal of Magnetic Resonance, volume 200, pages 38-48, in 2009, by Andrey V. Demyanenko, Lin Zhao, Yun Kee, Shuyi Nie, Scott E Fraser, and J Michael Tyszka, entitled “A uniplanar three-axis gradient set for in vivo magnetic resonance microscopy.”
Demyanenko et al. disclosed an optimized uniplanar magnetic resonance gradient design for MR imaging applications. That design decreased the size of the uniplanar gradient set to improve gradient uniformity for high gradient efficiency and slew rate. Demyanenko et al.'s design provides a three-axis, target-field optimized uniplanar gradient coil design that is designed for microscopy in horizontal bore magnets, e.g., a horizontal bore 7 Tesla magnet. As a result, many of the design considerations relate to improvements for cooling and insulation for reducing sample heating for the three-axis, target-field optimized uniplanar gradient coil design.
However, disclosed embodiments of the MRI microscope replace the large magnet with a small stage, which fits in an optical microscope and facilitates correlation between the optical and MRI images and/or measurements. As a result of the elimination of the large magnets, the fundamentally different approach provided by the presently disclosed embodiments does not require compensation or design to reduce the sample heating that comes along with the use of such magnets. It should be understood, however, that various components and/or techniques disclosed in that publication may be incorporated in combination with the presently disclosed embodiments. Accordingly, that publication is incorporated by reference in its entirety.
Another conventional MRI system is the single-sided MRI system, an example of which was published by Jeffrey L Paulsen, Louis S Bouchard, Dominic Graziani, Bernhard Blümich, and Alexander Pines, in the Proceedings of the National Academy of Sciences, volume 105, number 52, pages 20601-20604, entitled “Volume-selective magnetic resonance imaging using an adjustable, single-sided, portable sensor.” It should be understood, however, that various components and/or techniques disclosed in that publication may be incorporated in combination with the presently disclosed embodiments. Accordingly, that publication is incorporated by reference in its entirety.
However, disclosed embodiments of the MRI microscope differ and improve upon these conventional systems as well because the current innovation integrates MRI components within an optical microscope, and thereby facilitates and enables correlation between optical data generated by the optical components of the microscope and MRI images and/or measurements generated by the MRI-related components.
While this invention has been described in conjunction with the specific embodiments outlined above, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the various embodiments of the invention, as set forth above, are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the invention.
For example, it should be understood that the disclosed embodiments may be configured as a kit that can convert a commercially available and/or conventional optical microscope to the MRI-enabled microscope as described in this disclosure.
Moreover, it should be understood that the MRI-enabled microscope adapter and resulting MRI-enabled microscope are not limited to use with a compound optical microscope or the like. Therefore, MRI-imaging adapters may be used with various other types of microscopes as well.
Furthermore, in accordance with at least one embodiment, it is possible to replace the RF coil with a sensitive magnetometer.
Additionally, it should be understood that the functionality described in connection with various described components of various invention embodiments may be combined or separated from one another in such a way that the architecture of the invention is somewhat different than what is expressly disclosed herein. Moreover, it should be understood that, unless otherwise specified, there is no essential requirement that methodology operations be performed in the illustrated order; therefore, one of ordinary skill in the art would recognize that some operations may be performed in one or more alternative orders and/or simultaneously.
Various components of the invention may be provided in alternative combinations operated by, under the control of, or on behalf of various different entities or individuals.
Further, it should be understood that, in accordance with at least one embodiment of the invention, system components may be implemented together or separately and there may be one or more of any or all of the disclosed system components. Further, system components may be either dedicated systems or such functionality may be implemented as virtual systems implemented on general purpose equipment via software implementations.
As a result, it will be apparent for those skilled in the art that the illustrative embodiments described are only examples and that various modifications can be made within the scope of the invention as defined in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the present invention and the utility thereof may be acquired by referring to the following description in consideration of the accompanying drawings, in which like reference numbers indicate like features, and wherein:
FIG. 1 is an illustration of one disclosed embodiment of the MRI microscope adapter provided in combination with a conventional optical microscope in accordance with at least one embodiment of the invention.
FIG. 2 is an expanded view of the MRI objective lens in accordance with at least one embodiment of the invention.
FIG. 3 is an expanded view of the MRI objective lens in accordance with a separate embodiment of the invention.
FIG. 4 is an expanded view of the MRI objective lens in accordance with a separate embodiment of the invention.
FIG. 5 illustrates one example of a method for imaging a sample in accordance with at least one disclosed embodiment.
GRANT OF NON-EXCLUSIVE RIGHT
This application was prepared with financial support from the Saudia Arabian Cultural Mission, and in consideration therefore the present inventor has granted The Kingdom of Saudi Arabia a non-exclusive right to practice the present invention.
BACKGROUND
Wireless networks are among the most prevalent means by which users access the Internet or private networks. A wireless network service provides a cost-effective way to access such networks. However, the wireless network service has some disadvantages as well. One of these disadvantages is the difficulty of controlling the extent of a wireless network service.
For example, to provide a wireless network service within a building, one or more wireless access points may be arranged over an area of the building where the wireless network service is provided. The wireless access points are set up so that any mobile device can communicate with the wireless access points as long as the mobile device is located inside the building.
Concrete walls of the building usually function as electromagnetic shielding members against the wireless network signals, whereas glass windows are nearly transparent to the wireless network signals. Thus, in practice, the strengths of wireless network signals do not drop off uniformly along the boundary of the building, and some of the wireless network signals may leak across the boundary of the building. This makes it difficult to limit the extent of a wireless network service and confine the wireless network signals inside an intended boundary of a wireless network service. This may pose security concerns such as eavesdropping and an out-of-boundary connection to a device located outside the intended boundary of a wireless network service.
The “background” description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present invention.
SUMMARY
According to one aspect of the present disclosure, a device for limiting an extent of a wireless network service is provided. The device includes a directional antenna disposed at or near an intended boundary of the wireless network service, and a signal generator configured to drive the directional antenna to transmit a jamming signal in a direction substantially parallel to or away from a closest portion of the intended boundary of the wireless network service. The jamming signal degrades the quality of a wireless network signal, and has a signal strength equal to or larger than that of the wireless network signal at a preselected location at or near the intended boundary of the wireless network service.
The device allows for forming an artificial electromagnetic interference zone of the jamming signal outside the intended boundary of the wireless network service, and for preventing a requester that sends a new connection request from connecting to the wireless network service when the requester is located outside the intended boundary of the wireless network service. Accordingly, the device provides for limiting the extent of a wireless network service.
According to another aspect of the present disclosure, a method for limiting an extent of a wireless network service is provided. The method includes transmitting a jamming signal from a location at or near an intended boundary of the wireless network service in a direction substantially parallel to or away from a closest portion of the intended boundary of the wireless network service. The jamming signal degrades the quality of a wireless network signal, and has a signal strength equal to or larger than that of the wireless network signal at a preselected location at or near the intended boundary of the wireless network service.
As is the case with the foregoing device, the method also provides for limiting the extent of a wireless network service.
The foregoing paragraphs have been provided by way of general introduction, and are not intended to limit the scope of the following claims. The described embodiments, together with further advantages, will be best understood by reference to the following detailed description taken in conjunction with the accompanying drawings.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views.
FIG. 1 is a plan view of a building 100 in which a wireless network system according to one embodiment is installed. The building 100 has an entrance door 110, windows 120, and walls 130. The wireless network system includes a wireless network access point 10 that allows a mobile device 31 to access a public network or a private network for exchanging data using radio waves when the mobile device 31 is located inside the building 100. In the present embodiment, it is assumed that a floor/level of the building 100 is an intended area in which the wireless network service is provided.
The mobile device 31 may be any portable device equipped with wireless communication capability, and may be, for example, a smartphone, a tablet computer, a laptop computer or the like.
The wireless network system further includes a plurality of jammer antennas 20. Each jammer antenna 20 transmits a jamming signal to degrade the quality of wireless network signals and reduce the signal-to-noise ratio (SNR) in an area outside the building 100, to prevent a mobile device 32, which is located outside the building 100, from connecting to the wireless network service across an intended boundary of the wireless network service.
The jamming signal may be any electromagnetic wave or waves that degrade the quality of the wireless network signals, and may include, for example, but not limited to, electric or electromagnetic noise or a high frequency electromagnetic radiation at a frequency different from the wireless network signal frequencies.
The jammer antenna 20 is a directional antenna, and forms an artificial electromagnetic interference zone in front of the directional antenna 20, where the reception quality of the wireless network signals is degraded. Arranging such jammer antennas 20 as illustrated in FIG. 1 allows for collectively forming a buffer zone around the building 100 and limiting the extent of the wireless network service.
The jammer antenna 20 may be any form of directional antenna. For example, the jammer antenna 20 may include a feed antenna and a reflector, or may be a vertical antenna positioned in front of a signal-reflecting member fixed on a wall of the building 100.
The signal-reflecting member may be a pole, a plate, a sheet, or a paint made of or including a metallic material such as aluminum or any other material capable of reflecting electromagnetic waves.
The jammer antennas 20 may be fixed close to the entrance door 110 and the windows 120, through which the network signal may easily pass. When the walls 130 are made of concrete, the walls are less transparent to the wireless network signals than the windows 120 made of glass, which are more transparent to the wireless signal. Alternatively, the jammer antennas 20 may additionally be fixed near the walls 130 to prevent highly sensitive eavesdropping across the wall 130.
As described above, the jammer antennas 20 limit the extent of the wireless network service, which is the inner space of the building 100 in the present embodiment. An appropriate arrangement of the jammer antennas 20 provides for confining the wireless network to an intended shape.
In one embodiment, the strength of the jamming signal can be set so that the SNR at each one of a plurality of preselected locations outside the building 100 becomes equal to one or less. This is because a connection to a mobile device located inside the artificial electromagnetic interference zone formed in front of each jammer antenna 20 may be prevented, or the data transfer rate to the mobile device may be greatly reduced, when the SNR is equal to one or less. The values of SNR at the preselected locations may be measured in advance, and the strength of the jammer antenna 20 may be adjusted in advance accordingly.
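Because SNR here is a simple power ratio, the "SNR of one or less" criterion reduces to requiring the received jamming power at each preselected location to at least match the measured network signal power; in dB terms, the binding requirement is set by the strongest surveyed signal. The sketch below is illustrative only, and the survey values are invented:

```python
def min_jamming_power_dbm(surveyed_signal_dbm):
    """Smallest received jamming power (dBm) giving SNR <= 1 (0 dB)
    at every surveyed location.

    SNR = P_signal / P_jam <= 1 holds exactly when P_jam >= P_signal,
    so the binding location is the one with the strongest measured
    network signal.
    """
    return max(surveyed_signal_dbm)

# Hypothetical pre-measured network signal strengths at preselected
# locations outside the building:
survey = [-62.0, -58.5, -65.2]
print(min_jamming_power_dbm(survey))  # -58.5
```

Any jamming power at or above this value at the survey points drives the SNR to one or less everywhere along the boundary that was measured.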
Although only one wireless access point 10 is included in the wireless network system illustrated in FIG. 1, the wireless network system may include a plurality of wireless access points for providing the wireless network service.
FIG. 2 is an exemplary block diagram of the wireless network system in the present embodiment. The wireless network system includes the foregoing wireless access point 10, a network controller 210, a network 220, and a wireless network extent-limiting device 230.
The network controller 210 may be, for example, an Intel Ethernet PRO network interface card from Intel Corporation of America, for interfacing with the network 220. As can be appreciated, the network 220 can be a public network, such as the Internet, or a private network such as a LAN or WAN network, or any combination thereof, and can include PSTN or ISDN sub-networks. The network 220 can also be wired, such as an Ethernet network, or can be wireless, such as a cellular network including EDGE, 3G and 4G wireless cellular systems. The wireless network can also be Wi-Fi, Bluetooth, or any other wireless form of communication that is known.
The wireless network extent-limiting device 230 includes a plurality of the foregoing jammer antennas 20, a jamming signal controller 2310, and a jamming signal generator 2320. The jamming signal generator 2320 is coupled to the jammer antennas 20. The jamming signal generator 2320 drives the jammer antennas 20 to transmit the jamming signal. The jamming signal controller 2310 is coupled to the jamming signal generator 2320 and the network controller 210. The jamming signal controller 2310 controls the jamming signal generator 2320 based on information obtained from the network controller 210, and instructs the network controller 210 to prevent a connection to a mobile device when the mobile device is located outside the intended boundary of the wireless network service.
The wireless network extent-limiting device 230 may limit the extent of the wireless network service in a plurality of ways. The first way is to form the buffer zone around the intended boundary of the wireless network service as described above to prevent a mobile device from connecting to the wireless network service when the mobile device is located outside the intended boundary. Here, the plurality of jammer antennas 20 can be configured via the jamming signal controller 2310 to transmit the jamming signal whenever the wireless network service is available. Thus, in response to detecting wireless signals from a wireless access point 10, the jamming signal controller 2310 may transmit signals to the jamming signal generator 2320 to drive the jammer antennas 20 to transmit jamming signals. Alternatively, or in addition, the jamming signal controller 2310 may be configured to activate the jamming signal generator 2320 at predetermined intervals or at preset times during the day. Further, as weak signals generated by a wireless access point 10 may not be strong enough to excessively penetrate through the walls 130, the jamming signal controller 2310 may be configured to determine a signal strength of the wireless access point and only cause the jamming signal generator 2320 to activate the jammer antennas 20 when the signal strength is above a predetermined threshold. This threshold can be set based on materials known to be included in the walls 130, such as concrete or other construction materials. This may provide for energy savings when it is unlikely that signals generated by the wireless access point 10 will exceed the confines of a particular floor, room or boundary.
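The strength-gated activation described above amounts to a single comparison against a wall-dependent threshold. The sketch below is a hedged illustration; the threshold value and function name are assumed placeholders, not figures from the disclosure:

```python
def should_activate_jammers(ap_signal_dbm, wall_threshold_dbm=-40.0):
    """Activate jamming only when the access point's measured signal is
    strong enough to plausibly leak past the intended boundary.

    In practice the threshold would be derived from the wall materials
    (concrete attenuates Wi-Fi far more than glass), per the text above.
    """
    return ap_signal_dbm > wall_threshold_dbm

print(should_activate_jammers(-30.0))  # True: strong signal, jam
print(should_activate_jammers(-55.0))  # False: too weak to leak; save energy
```

Gating on this comparison is what yields the energy savings the text mentions: the generator stays idle whenever leakage is implausible.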
Specifically, in one embodiment, the jamming signal controller 2310 controls the powers of the jammer antennas 20 through the jamming signal generator 2320 so as to reduce the SNR of the network signals down to, for example, one or less at the preselected locations outside the building 100. For example, the jammer antenna 20 may transmit the jamming signal with a signal strength equal to or larger than that of the wireless network signal at or near the intended boundary of the wireless network service. When the jamming signal and the network signal have substantially the same strength, the network signal drops out continuously, making it difficult to maintain the connection to the wireless network service.
The first way may be implemented easily and inexpensively, and makes it possible to limit the extent of a wireless network service in an existing wireless network system.
The second way is to use the jamming signal to determine whether a mobile device requesting a new connection is located inside or outside the intended boundary of the wireless network service, and allow the connection when the mobile device is determined as being inside the intended boundary of the wireless network service, thereby in effect limiting the extent of the wireless network service.
FIG. 3 is an algorithmic flow chart of a jamming signal control process that is an exemplary implementation of the second way. The jamming signal control process starts when the network controller 210 receives a new connection request from a mobile device at step S310. For example, the jamming signal controller 2310 may be configured to receive a notification from the network controller 210 when the network controller 210 receives a new connection request from a mobile device.
When the new connection request is received (Yes at step S310), the process proceeds to step S312. There, the jamming signal controller 2310 obtains the current retransmission rate on a connection to the requester, i.e., the mobile device that sent the connection request. Note that the mobile device is allowed to have a temporary connection to the wireless network service until its location is determined.
Subsequently, at step S314, the jamming signal controller 2310 controls the jamming signal generator 2320 to start a jamming signal transmission from one or more of the jammer antennas 20. The strength of the jamming signal may be set arbitrarily as long as the change in retransmission rate due to the jamming signal transmission is distinguishable between a mobile device located inside the intended boundary of the wireless network service and a mobile device outside.
In one embodiment, all jammer antennas 20 may be activated, or the jamming signal controller 2310 may receive (or detect) information related to the location of the mobile device. In the latter case, the jamming signal controller 2310 may cause the jamming signal generator 2320 to generate signals via only the one or more particular jammer antennas 20 that are located in close proximity to the location of the mobile device.
When the building 100 is isolated from other buildings or structures, larger artificial electromagnetic interference zones of the jamming signal may permissibly be formed around the building 100. In such a case, the strength of the jamming signal may be set, for example, so that the jamming signal and the wireless network signal have the same or substantially the same signal strength at a location at or near each jammer antenna 20. When there are other buildings around the building 100, it is possible to use a smaller jamming signal strength.
Subsequently, at step S316, the jamming signal controller 2310 again obtains the current retransmission rate on the connection to the mobile device.
At step S318, the two values of the retransmission rate obtained at step S312 and step S316 are compared to determine whether the retransmission rate increases after the start of jamming signal transmission. During the jamming signal transmission, a mobile device located outside the intended boundary of the wireless network service is exposed to the jamming signal radiation. This causes a frequent drop-out of the wireless network signal and an increase in retransmission rate. Accordingly, the mobile device may be determined as being outside the intended boundary of the wireless network service when the retransmission rate increases after the start of jamming signal transmission.
When the retransmission rate increases after the start of jamming signal transmission (Yes at step S318), it is determined that the mobile device is located outside the intended boundary of the wireless network service, and the process proceeds to step S320. At step S320, the jamming signal controller 2310 instructs the network controller 210 to terminate the connection to the mobile device. Further, the MAC address of the terminated mobile device is obtained and stored in the network controller 210 for MAC address filtering, to prevent the terminated mobile device from connecting to the wireless network service. Subsequently, at step S322, the jamming signal transmission is stopped and the process returns to step S310.
When the retransmission rate does not increase after the start of jamming signal transmission (No at step S318), it is determined that the mobile device is located inside the intended boundary of the wireless network service, and at step S324 the jamming signal controller 2310 instructs the network controller 210 to maintain the connection to allow the mobile device to connect to the wireless network.
Although the retransmission rate is used in this example, any other parameter or index utilized in monitoring wireless network activity may be used in place of the retransmission rate, provided that such a parameter or index can indicate, directly or indirectly, a change in the quality of the connection to an individual mobile device. For example, if the signal strength of the connection between the mobile device and the network is lower than a predetermined threshold, it may be determined by the jamming signal controller 2310 that the mobile device is located outside of the boundary of the wireless access point established via the jammer antennas 20. This could be determined by comparing the signal strength to an average threshold determined based on mobile devices already connected to the network and within the established boundaries.
In the second way, the jamming signal is used only for determination of the location of a mobile device requesting a new connection. This reduces a time of the jamming signal transmission, and thus reduces any possible electromagnetic interference to neighboring areas.
It is possible that a user of the terminated mobile device is a legitimate user of the wireless network service who turned on his/her mobile device before entering the wireless network area by mistake. To cope with such a situation, in another embodiment, the network controller 210 may store a list of MAC addresses of mobile devices of legitimate users in advance, and the MAC address of the terminated mobile device may be filtered out only when it is not included in that list.
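As a rough illustration of the whitelist variant described above, the filtering decision might look like the following Python sketch. The function and variable names are illustrative assumptions; the patent does not specify an implementation.

```python
# Hypothetical sketch of the MAC-filtering variant described above: a terminated
# device's MAC address is blacklisted only if it is not already on a stored list
# of legitimate users' devices. All names here are illustrative.

def update_mac_filter(mac_address, legitimate_macs, blocked_macs):
    """Block `mac_address` unless it belongs to a known legitimate user.

    legitimate_macs: set of MAC addresses registered in advance
    blocked_macs: set of MAC addresses currently filtered out
    """
    if mac_address not in legitimate_macs:
        blocked_macs.add(mac_address)
    return blocked_macs
```

A legitimate user who triggered the jamming check by mistake would thus remain able to reconnect once inside the boundary.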
In another embodiment, instead of obtaining the retransmission rate before and after the start of jamming signal transmission, the retransmission rate may be continuously monitored while the jamming signal transmission is being modulated in strength. A mobile device may be determined as being outside the intended boundary of the wireless network service when a measured retransmission rate change correlates with the jamming signal transmission modulation.
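The before/after retransmission-rate comparison at the heart of this process might be sketched as follows. This is a minimal illustration, not the patent's implementation; the `jammer` object and the `get_retransmission_rate` callback are assumed interfaces, and the step labels in the comments refer to the FIG. 3 flow.

```python
# Minimal sketch (not the patent's implementation) of the FIG. 3 boundary check:
# a device whose retransmission rate rises while the jamming signal is on is
# treated as being outside the service boundary.

def is_outside_boundary(get_retransmission_rate, jammer):
    """Return True if the requesting device appears outside the boundary."""
    baseline = get_retransmission_rate()    # rate before jamming (step S312)
    jammer.start()                          # start jamming transmission
    try:
        jammed = get_retransmission_rate()  # rate during jamming (step S316)
    finally:
        jammer.stop()                       # always stop jamming (step S322)
    return jammed > baseline                # rate increased? (step S318)
```

On a True result the controller would terminate the connection and record the device's MAC address; on False it would maintain the connection.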
FIG. 4 is an algorithmic flow chart of the jamming signal control process according to another embodiment. In this embodiment, the jamming signal is sequentially transmitted from the respective jammer antennas 20 to further reduce possible interference effects of the jamming signal.
The jamming signal control process of FIG. 4 starts when the jamming signal controller 2310 detects that the network controller 210 receives a new connection request from a mobile device at step S410. When the new connection request is received (Yes at step S410), the process proceeds to step S412. There, the jamming signal controller 2310 obtains the current retransmission rate on a connection to the requester, the mobile device that sends the connection request. Note that the mobile device is allowed to have a temporary connection to the wireless network until the location of the mobile device is determined.
Subsequently, at step S414, the jamming signal controller 2310 selects one of the plurality of jammer antennas 20 and controls the jamming signal generator 2320 to start a jamming signal transmission from the selected jammer antenna 20. Subsequently, at step S416, the jamming signal controller 2310 obtains again the current retransmission rate on the connection to the mobile device.
At step S418, the two values of the retransmission rate obtained at step S412 and step S416 are compared to determine whether the retransmission rate increases after the start of jamming signal transmission. When the retransmission rate increases after the start of jamming signal transmission (Yes at step S418), it is determined that the mobile device is located outside the intended boundary of the wireless network service, and the process proceeds to step S420. At step S420, the jamming signal controller 2310 instructs the network controller 210 to terminate the connection to the mobile device. Further, the MAC address of the terminated mobile device is obtained and stored in the network controller 210 for MAC address filtering, to prevent the terminated mobile device from connecting to the wireless network. Subsequently, at step S422, the jamming signal is turned off and the process returns to step S410.
When the retransmission rate does not increase after the start of jamming signal transmission (No at step S418), it is further determined at step S424 whether all the jammer antennas 20 have already been selected. When all the jammer antennas 20 have been selected (Yes at step S424), it is determined that the mobile device is located inside the intended boundary of the wireless network service, and at step S426 the jamming signal controller 2310 instructs the network controller 210 to maintain the connection to allow the mobile device to connect to the wireless network.
When not all the jammer antennas 20 have been selected (No at step S424), the process returns to step S414 to select another jammer antenna 20 that has not yet been selected and to start another session of jamming signal transmission.
In one embodiment, the jamming signal may be transmitted only from one jammer antenna 20 at a time. This reduces possible interference effects that may affect neighboring areas.
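The sequential, one-antenna-at-a-time selection described for this embodiment might be sketched as a simple loop. This is only an illustration; `probe` stands in for the jam-and-recheck cycle on a single antenna and is an assumed interface, and the step labels in the comments refer to the FIG. 4 flow.

```python
# Hypothetical sketch of the FIG. 4 loop: jam from one antenna at a time and
# return the first antenna whose jamming raises the device's retransmission
# rate; None means every antenna was tried and the device looks inside.

def find_offending_antenna(antennas, probe):
    """`probe(antenna)` jams from `antenna` and returns True if the
    retransmission rate increased while that antenna was active."""
    for antenna in antennas:     # select the next jammer antenna (step S414)
        if probe(antenna):       # jam, re-measure, compare (steps S416-S418)
            return antenna       # device is outside; this antenna detected it
    return None                  # all antennas tried (step S424): device inside
```

Returning the specific antenna, rather than a bare yes/no, is what makes the location estimate possible: the device is likely inside that antenna's interference zone.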
Further, in one embodiment, it is possible to identify the individual jammer antenna 20 that directly caused the increase in retransmission rate. Based on such information, the location of a mobile device outside the intended boundary of the wireless network service may be estimated. For example, in FIG. 1, the mobile device 32 is located in front of the jammer antenna 20 at the left-hand side of the entrance door 110. Thus, when the jamming signal is transmitted from that jammer antenna 20, it is highly possible that the retransmission rate increases on a connection to the mobile device 32. Whereas, when the jamming signal is transmitted from the jammer antenna at the right-hand side of the entrance door 110, it is less possible that the retransmission rate increases. The location of the mobile device 32 is estimated to be somewhere inside the artificial electromagnetic interference zone formed in front of the jammer antenna 20 that directly caused the increase in the retransmission rate.
The jammer antenna 20 that caused the increase in retransmission rate may be kept turned on for a preset period to prevent any additional attempts at out-of-boundary access from the same area. Further, this information may be collected over a preset period to identify an area or areas where out-of-boundary access occurs frequently. This makes it possible to improve the security of the wireless network service.
Referring to FIG. 5 and FIG. 6, a wireless network system according to another embodiment is described. The present embodiment differs from the foregoing embodiment of FIG. 1 and FIG. 2 in that a wireless network extent limiting device 230A of the present embodiment further includes a plurality of monitoring antennas 500A to 500D for monitoring both the strengths of jamming signals transmitted from a plurality of jammer antennas 20A to 20D and wireless network signals in use for the wireless network service.
FIG. 5 is a plan view of the building 100 in which the wireless network system according to the present embodiment is installed. The plurality of jammer antennas 20A to 20D are directional antennas and are fixed to respective walls of the building 100 in such a way that their antenna directions are substantially parallel to their closest walls of the building 100, which form an intended boundary of the wireless network service. The plurality of monitoring antennas 500A to 500D are fixed to respective walls of the building 100, and measure the strengths of the jamming signals and the wireless network signals at their locations.
FIG. 6 is an exemplary block diagram of the wireless network system according to the present embodiment. The wireless network system includes the wireless access point 10, the network controller 210, and the network 220 for providing users a wireless network service. The wireless network system further includes a wireless network extent-limiting device 230A.
The wireless network extent-limiting device 230A includes the jammer antennas 20A to 20D, the monitoring antennas 500A to 500D, an SNR detector 600, a jamming signal controller 2310A, and a jamming signal generator 2320A.
The SNR detector 600 receives signals from the monitoring antennas 500A to 500D, calculates the values of SNR at the respective monitoring antenna locations, and outputs the calculated values of SNR to the jamming signal controller 2310A. The jamming signal controller 2310A receives the calculated values of SNR from the SNR detector 600, and controls the jamming signal generator 2320A based on the calculated values of SNR so as to prevent an out-of-boundary connection to a mobile device located outside the intended boundary of the wireless network service.
The jamming signal controller 2310A may control the jamming signal generator 2320A to drive the respective jammer antennas 20A to 20D so that the artificial electromagnetic interference zones formed in front of the jammer antennas 20A to 20D substantially surround the building 100. Specifically, the jamming signal controller 2310A may control the jamming signal generator 2320A to drive each jammer antenna so that the value of SNR at the location of the corresponding monitoring antenna fixed on the same wall is equal to one or less.
This makes it possible to ensure that the value of SNR outside the intended boundary of the wireless network service is sufficiently low to prevent a connection from outside the building 100.
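One way to realize the "SNR of one or less at the paired monitoring antenna" rule described above is to step the jamming power on each antenna upward until the measured SNR reaches the target. The sketch below is only illustrative: `measure_snr` and the discrete power range are assumed interfaces, not part of the disclosed system.

```python
# Illustrative sketch (assumed interfaces): raise the jamming power on one
# antenna until the SNR measured at its paired monitoring antenna drops to the
# target value (<= 1), i.e. the jamming signal matches the network signal.

def tune_jamming_power(measure_snr, p_min=1, p_max=100, target_snr=1.0):
    """Return the lowest power in [p_min, p_max] whose measured SNR is at or
    below target_snr, or None if the target cannot be reached."""
    for power in range(p_min, p_max + 1):
        if measure_snr(power) <= target_snr:
            return power
    return None
```

Using the lowest power that meets the target keeps the interference zone as small as possible while still sealing the boundary.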
Alternatively, the jamming signal controller 2310 may perform the jamming signal control process as illustrated in FIG. 3 or FIG. 4 to identify a mobile device outside the intended boundary of the wireless network service and prevent connection thereto.
Next, a hardware description of the jamming signal controller 2310 and/or jamming signal controller 2310A, hereafter "signal controller," according to exemplary embodiments is described with reference to FIG. 7. In FIG. 7, the signal controller includes a CPU 700 which performs the processes described above. The process data and instructions may be stored in memory 702. These processes and instructions may also be stored on a storage medium disk 704 such as a hard drive (HDD) or portable storage medium, or may be stored remotely. Further, the claimed advancements are not limited by the form of the computer-readable media on which the instructions of the inventive process are stored. For example, the instructions may be stored on CDs, DVDs, in FLASH memory, RAM, ROM, PROM, EPROM, EEPROM, hard disk or any other information processing device with which the signal controller communicates, such as a server or computer.
Further, the claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 700 and an operating system such as Microsoft Windows 7, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art.
CPU 700 may be a Xeon or Core processor from Intel of America or an Opteron processor from AMD of America, or may be another processor type that would be recognized by one of ordinary skill in the art. Alternatively, the CPU 700 may be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 700 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
The signal controller in FIG. 7 may also include a network controller 706, such as an Intel Ethernet PRO network interface card from Intel Corporation of America, for interfacing with network 220 or network controller 210. As can be appreciated, the network 220 can be a public network, such as the Internet, or a private network such as a LAN or WAN network, or any combination thereof, and can also include PSTN or ISDN sub-networks. The network 220 can also be wired, such as an Ethernet network, or can be wireless such as a cellular network including EDGE, 3G and 4G wireless cellular systems. The wireless network can also be WiFi, Bluetooth, or any other wireless form of communication that is known.
The general purpose storage controller 724 connects the storage medium disk 704 with communication bus 726, which may be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the signal controller.
The foregoing discussion discloses and describes merely exemplary embodiments of the present invention. As will be understood by those skilled in the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting of the scope of the invention, as well as other claims. The disclosure, including any readily discernible variants of the teachings herein, define, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
FIG. 1 is a plan view of a building in which a wireless network system according to one embodiment is installed;
FIG. 2 is an exemplary block diagram of the wireless network system according to one embodiment;
FIG. 3 is an algorithmic flow chart of a jamming signal control process according to one embodiment;
FIG. 4 is an algorithmic flow chart of a jamming signal control process according to another embodiment;
FIG. 5 is a plan view of a building in which a wireless network system according to another embodiment is installed;
FIG. 6 is an exemplary block diagram of the wireless network system according to another embodiment; and
FIG. 7 is a hardware diagram of a signal controller 721 according to an embodiment.
Welcome to ASHI's Volunteer Portal where you can volunteer for projects that fit your interests, expertise and schedule. ASHI offers a variety of ways to get involved, from short-term volunteer projects to committee, council, and board service. We encourage you to deepen your engagement with ASHI and your HLA peers by volunteering today!
1. Complete the online volunteer application. Volunteer applications are accepted year round. Volunteers must be active ASHI members with no outstanding membership dues. To complete the volunteer application, please visit the Important Documents and Links section at the bottom of this page.
2. An ASHI staff member will submit your application to the committee chair. If there is no availability on the committee of your first choice, your application will be passed on the next committee in the order of your stated preference.
3. You will receive an email with further instructions. If you are accepted onto a committee, you will be contacted via the email address on your volunteer application with important committee details including committee chairs' contact information, recent meeting minutes to review and an invitation to participate in the next committee conference call. You will also be notified if a committee is at capacity and will receive instructions on how to reapply once there is availability.
Why should I volunteer? Volunteering is a great way to enhance your CV. It also allows you to connect with your colleagues in HLA while maintaining flexibility in your busy schedule.
Do I have to be an ASHI Member to volunteer? Yes, volunteers must be active ASHI members for committee positions and some micro-volunteer opportunities. International Associate member types may not serve on ASHI committees.
Is there a list of all ASHI committees and the members who serve on each committee? Yes! Please visit the Important Documents and Links section at the bottom of this page.
Which committee should I choose? ASHI committee responsibilities vary in nature. Each committee is responsible for furthering ASHI’s mission and strategic planning initiatives in a distinct way. Nonetheless, all positions are important. Duties may include writing an article for the ASHI Quarterly, posting to the online forums, helping at the annual meeting or promoting the value of ASHI membership on social media. Members should consider their personal preference and availability when selecting a committee.
What is the time commitment? All ASHI committees hold conference calls throughout the year to meet the objectives outlined in ASHI’s Strategic Plan. However, some committees require a bit more work throughout the year than others. Visit the Important Documents and Links section at the bottom of this page to learn more about each committee and its roles and responsibilities.
How long can I serve on an ASHI Committee? Committee positions are available for a term of three years and are usually filled two months prior to the annual meeting each year and on a rolling basis as needed. Should you desire, you can choose more than one committee.
This committee has two (2) open committee seats. Volunteers must have a PhD and be a Lab Director.
This committee is looking for international members to serve on the committee.
Publications Committee: This committee is looking for volunteers to assist in a series of ASHI-U module reviews before they are released to members.
Bylaws Committee: This committee has one (1) open committee seat.
Technologists' Affairs Committee: This committee has one (1) open committee seat.
Education Committee: The Committee Chair seat is available, as well as the Educational Initiatives Committee Vice-Chair*. The committee is also looking for committee members.
*Education Chair and Vice-Chair applicants should have prior experience serving on an ASHI committee and/or significant participation in educational activities in ASHI or elsewhere. Applicants must be interested in serving in other leadership positions on the Education Committee after their term as Vice-Chair. Applicants should provide a CV and a short paragraph on their vision and plans for this subcommittee. Applications should be sent to Shena Seppanen at [email protected] no later than Wednesday, May 1st for consideration.
The ASHI Directors' Mentor / Mentee Program is dedicated to helping new Directors advance in their careers. The program pairs new Directors with more experienced Directors on an informal basis for the purpose of providing one-on-one guidance and advice to the newer Directors (mentees). An experienced Director will volunteer to serve as a mentor to a less experienced Director for at least a period of one year, during which time the mentor and mentee are encouraged to communicate on a monthly basis for discussions on mutually agreed topics.
The mentor will be encouraged to share experiences, feedback, and provide insights related to career guidance or career advancement. The mentee should enter into the mentor relationship with the goal of career enhancement and professional development.
To complete a Mentor application, please click here.
To complete a Mentee application, please click here.
Customer experience plays a more critical role in healthcare than in any other industry.
Couple sky-high patient expectations with the complexities involved with rapidly growing health systems, serving an aging population, regulation, billing, and electronic health records, and the situation becomes daunting. More so than any other business or organization, health systems and providers are constantly measured against the experience and quality of outcomes they deliver.
We view this complexity as a tremendous opportunity for organizations to differentiate by focusing on digital customer experience and improving patient satisfaction. Hero Digital’s specialized teams work with organizations across the healthcare landscape, helping them to conceive and build personalized patient experiences to reduce friction and serve critical needs. We combine institutional knowledge with insights gleaned during discussions with patients, practitioners, and administrators. We then design new models to engage, convert, and communicate, all enabled by technology that paves the way for future enhancement while ensuring the strongest levels of security.
Linking technologies and social environment is the central aim of the Research Unit Active & Assisted Living at Carinthia University of Applied Sciences. The unit develops concepts, products and services in the areas of smart home, smart health and smart interaction to improve the quality of life of older adults. It uses an innovative participatory Living Lab methodology and has proven excellence in intelligent sensor technology, interoperable interfaces, ADL algorithms, user interfaces, and socio-technical evaluation.
Our research is based on a participatory, user-centered approach. The entire R&D process, from the initial idea through testing to market launch, follows the Living Lab methodology, which involves all relevant stakeholders and includes infrastructure, appropriate tools, methods, and processes. We develop solutions together with future end users, including the views of all dimensions of the quadruple helix: government, industry, academia, and civil participants. Our cooperation with national and international research and business partners allows us to put our ideas into practice and make them available to interested end users.
Our vision is to enable a longer, happier and more self-determined life for older adults within their own homes. This is why we cooperate with future end users to develop new technological solutions. This not only reduces the burdens on our health care systems but more importantly increases the well-being of older adults.
FAT president Pol Gen Somyot Poompanmuang last week confirmed that the first trial of VAR use will be held at the Thailand Champions Cup game between Thai League 1 champions Buriram United and FA Cup winners Chiang Rai United early next year.
Somyot last Thursday (Dec 21) attended a presentation of the technology at the FAT head office in Bangkok.
The FAT is expected to introduce the technology in next year's Thai League 1, which will kick off on Feb 9.
The plan to implement the system will be reviewed by the FAT at the Thailand Champions Cup clash on Jan 19 at Supachalasai Stadium before a final decision is taken.
Somyot said the national governing body needs to study the effectiveness of using the technology in depth first.
It will also take into consideration readiness of each Thai League 1 team's home ground and the costs to be incurred by its implementation, he said.
The system has been in use in some big leagues including Germany’s Bundesliga and Italy’s Serie A.
Fifa has been testing the technology, which allows doubtful decisions to be reviewed by a video referee, for some time with mixed results.
It has not yet decided whether to use the system at next year’s World Cup.
The FAT has faced increasing calls to improve the standard of its referees.
The FAT recently accused 12 people – including five players, two referees and a club director – of their involvement in match-fixing.
FRANKLIN, Tenn. -- Construction crews digging a sewer line made a historic discovery in Franklin on Thursday.
While crews were digging near a Burger King restaurant at the corner of Columbia Pike and Southeast Parkway, the body of a Civil War Union soldier was uncovered. The remains of the soldier were found scattered in a 2-foot grave. Curators and historians from the Carter House and Lotz House arrived at the location to analyze the body. Bones and well-preserved buttons were recovered from the site.
"He likely would have been killed on the retreat out of Franklin. He was buried in a hurry, likely buried by his own troops about 2 feet down in the ground," said J.T. Thompson, Lotz House Curator. State archaeologists will take the remains to a lab and then have them reburied. Archaeologists are looking for more remains and artifacts at the site and have stopped the building project.
So, I have this theory. And the reason I have it is because I’m a part of several (and even run my own) autoimmune support groups and communities, where I’ve been privileged enough to connect with such amazing people and hear so many of my fellow autoimmune warriors’ stories.
It’s through these stories and connections that have led me to begin noticing patterns. These patterns have nothing really to do with the condition or disease itself though; instead, these patterns have everything to do with common characteristics that keep coming up amongst us within the community.
EVOLUTION & BACKGROUND OF THE THEORY
I think these common characteristics are part of why I think all of you are my spirit animals (aka my tribe) – because we have so much in common and can relate to how each other are wired. But it also got me thinking: are these common personality traits amongst those of us who struggle with autoimmunity more than just coincidence? Could the way we’re wired actually contribute to our conditions? My gut/intuition tells me yes, and there are studies out there (<<just some Google examples) that corroborate my thinking.
In fact, there is even a relatively new study of science that is gaining traction and attention called Psychoneuroimmunology (PNI), which is defined by Professor Kavita Vedhara, world-renowned expert in PNI as, “the science of the connections between the mind and the body.” Another definition of PNI that Professor Vedhara gives that taps into the relationship to autoimmunity is “the study of the interaction between psychological processes and the nervous and immune systems of the human body.”
By now, I think that this is something we all are at least semi-aware of – that stress and our mental states can, and do, affect the progression (and existence) of our conditions. But what I don’t see is a lot of discussion about what those certain personality traits, characteristics and belief sets actually are. That’s why I want to start this discussion, because I believe that understanding ourselves is the first step in being able to unravel some of the puzzle that is chronic illness.
FULL DISCLOSURE…
Before I dive into giving the 15 commonalities I believe may exist amongst those of us with autoimmunity conditions, I need to make three disclosures (please don’t skip over these):
- I am not a doctor, a social scientist (although I suppose I do have a college “minor” in Sociology…hehe), or a psychologist, and I make no claims to be one. This is not an official theory or one that I’ve tested or proven or anything like that. It’s just simply something I’ve been thinking about for a while and wanted to bring into the limelight. So, basically what I’m saying is that I have no credentials to support this theory whatsoever, except my own curiosity and tendency to over-analyze the $hit out of most things, while being really observant and sharply inclined to pick up on the idiosyncrasies of most people I come into contact with. 😉
- Sort of ties in with #1 but again – this is not a tried and tested theory. I don’t even know if you all will find ONE of these true for you, let alone all of them. So, take it with a grain of salt. If I do happen to be super spot-on, then that’s great, but don’t overthink it. All of these traits are amazing qualities to embody and make us all freakin’ superheros! It’s just that when applied to our physical systems, my theory is that these idiosyncrasies may have a rougher time being “digested” by our bodies. In other words, this list is not meant to invoke more anxiety or additional feelings of shame, insecurity or self-blame; it’s just meant to start to unveil some commonalities that we all share. Through this connection and self-empowerment through awareness, I’m hoping that we can all help each other go a little bit deeper into ourselves and use this as a jumping-off point for more advanced healing.
- This list is also not meant to say if you do exhibit some of these traits, that you DO or WILL (or that your children, family members, loved ones, etc. do or will) have an autoimmune condition or chronic illness. Now, I suppose my theory is meant to say if you have MOST of these traits, then yes maybe you’d be more inclined towards those conditions (otherwise why would we be talking about this right now? :)) But still, this is not intended to diagnose, pre-diagnose, or lead to any self-fulfilling prophecies. 🙂
Without further ado, here are the traits/characteristics/beliefs that form the backbone of what I’m calling the “Autoimmunity + Personality” theory. I may add or subtract to this list in the future, but this is just where my head’s at right now.
THE AUTOIMMUNE + PERSONALITY THEORY: COMMON THREADS
I believe that people with autoimmunity or chronic illness may possess a number of the following characteristics or belief systems:
- A tendency towards perfectionist tendencies
- Are Highly Sensitive Persons (HSPs)
- Feel more comfortable “playing it safe”
- Lean toward being “Type A” personalities
- Have an ongoing fear of letting people down, or being wrong
- Have a tendency toward introversion (still may be social introverts though)
- Are “Maximizers” or over-achievers
- Have trouble saying “no”
- Tend to be very hard on themselves; expect and push themselves to be able to meet their own high self-expectations
- Self-conscious about who they are; afraid to show people who they really are or let people in
- More prone to anxiety and worry
- Inclined to be more open to having spiritual and “alternative” healing experiences
- Dislike feeling out of control
- Have spent a majority of their life feeling like they “don’t quite fit in” or like they are “misunderstood”
- Have endured some sort of memorable traumatic event or upbringing (remember, “trauma” is different to the beholder; some may experience trauma going to war, some may classify trauma as being in a car wreck, while others may internalize trauma from growing up around alcoholic parents, being bullied at school, or falling down the stairs in front of a large group of people. Chances are, if the event or situation is seared into your memory and can still elicit a psycho-physical reaction in you, then your body and mind may be holding onto it as a traumatic event)
FEEDBACK & FINAL WORDS
So, what do you think? Did any of these resonate with you? Could you relate to any of these? I’d truly love to hear your thoughts about my list, as well as the general topic of how (or if) common personality traits and experiences correlate to a propensity toward chronic illnesses like autoimmunity, chronic fatigue syndrome, etc.
I truly think that if we can continue to get to know ourselves, explore the relationship between personality and disease, and find the links within this “psychoneuroimmunology” area of study, that we are going to be able to take our awareness and understanding of autoimmunity and how to heal it to a whole new level.
And my hope is that by gaining more awareness about the characteristics that make up who we are, we can start to love the heck out of them and find more productive outlets for some of these traits. In this way, I believe we can help our traits and experiences to actually serve our bodies, instead of causing the body to fight against itself.
At 83% in 2018, Sacramento State Athletics had the highest percentage of its budget come from allocated funds of any athletic department in the Big Sky Conference.
Allocated funds include state and university contributions as well as student fees. Student fees made up 31% of the athletic department’s overall budget.
This means that revenue generated by Sac State Athletics itself (ticket sales, sponsorships, donations, etc.) accounted for 17% of its overall budget.
Athletics has relied heavily on campus operating funds in past years and during budget meetings last spring, the University Budget Advisory Committee (UBAC) pushed for the department to generate more revenue to offset its deficit and become more self-supported.
The UBAC said in the 2019 spring budget report that Athletics was carrying a $2.6 million deficit that needed attention.
The concern the UBAC addressed in their recommendation was that the deficit had been ongoing and that Athletics was using money from the campus operating fund to maintain operations, reducing funding available for additional courses, emergencies, infrastructure and other campus needs.
Athletic Director Mark Orr said his goal is to generate more revenue and depend less on campus funds, but that the department’s current financial standing is not unique among Division I athletic programs.
“None of them are self-supported, everybody’s depending on the university to supplement the programs,” Orr said.
According to Associate Vice President for Budget Planning and Administration Rose McAuliffe, it is the expectation of all departments to stay within their authorized budgets. McAuliffe said the UBAC is currently working with departments and looking at “how (they) can help optimize that.”
Orr told UBAC in April that an outside consulting company, College Sports Solutions, had been hired to research the department’s budget and provide solutions for its deficit. Orr said that an analysis would be made available by Oct. 31.
Assistant Athletic Director Brian Berger said last Tuesday that the report was not ready for release and that the department was providing College Sports Solutions with more information.
Orr said that he is optimistic about the future of Sac State Athletics.
“We do have some plans already in place for this school year that (are) helping (to) close the gap,” Orr said. “One is a successful football program.”
Sac State football tied for the Big Sky Conference championship and will begin play in the Football Championship Subdivision playoffs Saturday at home.
Orr confirmed that Sac State’s 1996 entrance into the Big Sky Conference resulted in elevated expenses, requiring all but a few Sac State teams to regularly travel outside of California to play conference opponents. However, Orr is not considering withdrawing any Sac State teams from the conference.
“We’re committed to being in the Big Sky,” Orr said.
Orr said the revenue streams that Athletics depends on are corporate sponsorships, fundraising and ticket sales. Revenues from concessions, merchandise and parking don’t support Athletics.
“Currently, the way we’re set up as a university, Athletics (doesn’t see) any of the concessions revenue, it does not impact Athletics’ budget,” Orr said. “University Enterprises runs those operations.”
This year’s Causeway Classic football game against UC Davis drew 19,000 fans, more than double the average football attendance of 7,000 to 8,000, according to Berger.
McAuliffe said all CSUs have non-profit auxiliaries that run concessions and merchandising and that at Sac State, the revenues support the university and students as well as departments, including Athletics.
Orr said the venues need to improve in order to help increase attendance at games and corporate sponsorships.
In 2017, ticket sales accounted for 0.72% of Athletics’ income, and corporate sponsorship accounted for 0.92%, according to an analysis done by the Knight Commission.
The analysis also said “Institutional/Government Support” accounted for 52% of the department’s income while student fees accounted for 29% in 2017.
“Our venues aren’t the best for generating revenue,” Orr said. “We really need to get an event center built, our basketball facility is the smallest in Division I.”
The Nest, which holds a little over 1,000 people, is home to the men’s and women’s basketball teams as well as volleyball and gymnastics. The Nest is the smallest basketball venue in the Big Sky, but the smallest in Division I play is actually the G. B. Hodge Center in Spartanburg, South Carolina at 818 seats, according to sports news site Sportrige.
“I feel like (The Nest) is just really old, so a lot of people don’t come to the games,” said Brenda Osavande, a Sac State psychology major. “I feel like maybe if we had a newer stadium that it would be something that people could look forward to coming to.”
Orr’s goal of increased ticket sales looks promising, with ticket revenue jumping from $220,000 in 2017 to $369,000 in 2018 following Orr’s hiring in 2016. There is also discussion in the department about ways to get funding for a new event center.
However, McAuliffe said increased ticket sales would be a minor portion of the new revenue needed to balance the Athletics budget and that the priority for the school is projects involving safety and more classrooms.
For now, Athletics is expected to continue to find ways to become more solvent in the coming year and UBAC is waiting for a report on Athletics soon.
“Once a plan and timeline is implemented, there needs to be some sort of accountability and reporting measures to ensure progress is made towards eliminating this deficit,” UBAC said in their annual report. | |
The COVID-19 pandemic has taken a toll on the world economy. However, it’s not all bad news when looking at the world of gaming and esports. Many figures show that the gaming industry may actually be booming in these troubled times.
According to a dossier prepared by statista.com, online viewership of the ESL Pro League (a Counter Strike: Global Offensive event) has increased significantly compared with 2019.
In 2019, viewership on Twitch stood at 115,000, while in 2020 it reached 146,000. This is an increase of 31,000 viewers, or roughly 27 per cent.
The dossier states this increase is partially a result of people being forced to stay at home due to the pandemic.
Similarly, day 2 of this event saw a record 1.4 million hours of watchtime. However, the report did predict a drop in revenue for the Modern Times Group’s gaming vertical for the first two quarters of 2020. This drop comes due to losses from events that are being converted to online only or are being cancelled and/or postponed. But this decline is slightly lower than expected.
According to Verizon, Americans are now spending more time gaming when compared to pre-coronavirus times. The peak usage is up 71 per cent for gaming. The usage for downloads as well as for video has also increased by 56 per cent and 26 per cent respectively. This is a consistent change with little variance between weeks during the pandemic.
As per a report by Google-KPMG, the online gaming segment is projected to reach $1.1 billion by 2021. India is among the top five mobile gaming markets, with around 300 million gamers.
A recent report in the Economic Times stated that Paytm First Games, the gaming arm of the digital payments platform Paytm, has experienced a 200 per cent increase since the second half of March.
COO for Paytm First Games, Sudhanshu Gupta was quoted as saying, “Our app sees more than half a million daily active gamers on the platform spending anywhere between 30 to 45 minutes,” adding that the platform has seen 75,000 new users in the weeks leading up to the second half of April.
Another article in the Economic Times stated that Mr. Pavaan Nanda, Co-Founder of Winzo Games, had seen a 3x increase in games played and a 30 per cent increase in traffic on March 15. This growth in traffic became an hourly occurrence once the country entered lockdown.
Mr. Dayanidhi M.G, founder of nCORE games (LiveOps partner for Vainglory) told The Federal, “Many industries have witnessed massive tail winds that have changed the landscape of business in those industries. Online education and Gaming are among the very few that got benefitted in these unprecedented times of the last couple of months.”
“In gaming, particularly, the industry witnessed encouraging key performance indicators such as increased daily active users, better session lengths, spike in installs and increased spending in some genres. In some genres, users have gone into grinding as a result of more time on hand than in-app purchases. But overall, it has been very encouraging,” he said.
“As a bigger take away for the gaming industry, this will result in a habit formation for occasional gamers to play frequently, explore different games and build their own universes in the game or social circles to stay connected, but apart,” he added.
Dayanidhi said the statistics are still evolving. But as illustrated in the infographics above, India’s own Ludo King has smashed all records to become fifth in all-time downloads on mobile platforms.
“The graph in March and April says it all,” he said.
Key Performance Indicators (KPIs) are metrics that help measure the performance of games, products or even a company. In the case of gaming, they help measure the success of a game, give a broad understanding of its current situation in its market, and track progress towards the key goals set by the company.
Daily Active Users (DAU) is one of the main KPIs used by gaming companies to measure the performance of a game. Other common KPIs include monthly active users, cost per installation, lifetime value, return on investment, average revenue per user, average revenue per paying user, and average revenue per daily active user.
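As a rough sketch of how two of these KPIs are computed in practice, the Python example below derives DAU and average revenue per daily active user from a toy session log. The data, field layout, and function names are all hypothetical, for illustration only:

```python
from collections import defaultdict
from datetime import date

# Hypothetical session log: (user_id, day, revenue from in-app purchases)
sessions = [
    ("u1", date(2020, 4, 1), 0.0),
    ("u2", date(2020, 4, 1), 1.99),
    ("u1", date(2020, 4, 1), 0.99),
    ("u1", date(2020, 4, 2), 0.0),
    ("u3", date(2020, 4, 2), 0.0),
]

def daily_active_users(sessions):
    """DAU: count of distinct users seen on each day."""
    users_by_day = defaultdict(set)
    for user, day, _ in sessions:
        users_by_day[day].add(user)
    return {day: len(users) for day, users in users_by_day.items()}

def arpdau(sessions):
    """Average revenue per daily active user, per day."""
    dau = daily_active_users(sessions)
    revenue_by_day = defaultdict(float)
    for _, day, revenue in sessions:
        revenue_by_day[day] += revenue
    return {day: revenue_by_day[day] / dau[day] for day in dau}
```

With this sample log, 1 April 2020 has two distinct users and 2.98 in purchases, giving an ARPDAU of 1.49.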
However, the choice of the indicators used varies based on the needs of the organisation and the specific game in question.
So far, the pandemic has benefitted the gaming industry as a general trend. However, as we head towards the end of the fourth stage of the lockdown in India, we still need to wait to see its final impact on gaming.
The month of April brings good news and bad news for stargazers in the northern hemisphere. The good news is that our weather is getting warmer. The bad news is that our nights are getting shorter. Nevertheless, there’s still plenty to see in the night skies of April. Here are the highlights.
Planets
April 3rd will be your last chance to use the Moon to locate Uranus. The planet will be at the lower right side of a thin waxing crescent Moon during the evening hours. You’ll need binoculars or a small telescope to observe the planet.
On April 15 there will be four planets visible in the morning sky: Jupiter, Venus, Mars, and Saturn. Look for them near the southeastern horizon, along the same path followed by the Sun.
Finally, on April 29, Mercury (the most elusive planet) will be at its greatest apparent distance from the Sun, making it much easier to observe. Look for it low above the western horizon just after sunset.
Moon
There will be two new Moons this month: the first on April 1 and the second on April 30. The full Moon will be on April 16. The Moon will be at apogee (i.e., farthest from the Earth) on April 7, when it will be roughly 251,283 miles away from us. The Moon will be at perigee (i.e., nearest to Earth) on April 19, when its distance from our planet will be 226,863 miles.
For those in the southern hemisphere, a partial solar eclipse will occur on April 30.
Stars
The most notable stars this month are Arcturus and Spica. Locating these stars is fairly simple. First find the Big Dipper, which is high in the northern sky. Follow the curve of its handle as you “arc to Arcturus.” This is the fourth brightest star observable from Earth, having a magnitude of -0.1. It has about the same mass as the Sun and is located just 37 light years from our solar system. It’s the brightest star in the constellation Boötes.
Continue to follow the same path as you “spike to Spica” in the constellation Virgo. Spica is a magnitude 1 binary (i.e., double) star located 250 light years from Earth. The star’s brightness increases and decreases every four days. It’s extremely volatile and, along with Betelgeuse in the Orion constellation, is a prime candidate for the next supernova. Thankfully, Earth is outside the 50-light-year danger zone if such an event occurred.
Constellations and Galaxies
At least three remarkable constellations will be high in the sky this month. The first is Leo, located between Virgo and Cancer. This constellation has two pairs of spiral galaxies that can be seen through binoculars or a telescope. The first pair, M65 and M66, is located near Leo’s back leg. The other pair, M95 and M96, are found around the lion’s chest or stomach. The constellation is also home to the star Regulus, one of the brightest in the sky, and a dwarf star named Wolf 359, which is the third closest to us at a distance of just 7.8 light years.
Next is Coma Berenices, a small but bright constellation that represents the hair of the Egyptian Queen Berenice. The main attractions in this constellation are the “Coma Star Cluster”, composed of at least 50 stars about 285 light years from us, and the “Coma Cluster”, which has a massive true diameter of 20 million light years. You can also find M64, the “Black Eye Galaxy”, inside Coma Berenices.
Lastly, Hercules, the fifth largest constellation, will be high overhead all month. Look for the M13 globular cluster located around the right side of Hercules’ torso. This dense ball of several hundred thousand stars is roughly 22,000 light years away, and its true diameter is about 145 light years.
Meteor Shower
The Lyrid meteor shower will be active this month from April 16-25. Its peak will be from April 22 to April 23 (though some sources say April 21 to 22). Most meteors will come from the area of the sky between Vega and Hercules. As with all meteor showers, the Lyrids are best observed after midnight and before dawn. Although the Lyrid meteor shower is not prolific (it averages 10 to 15 meteors per hour), it does produce some of the brightest and fastest streaks across the sky. The meteors are associated with Comet Thatcher and the shower is the oldest one ever recorded, having been documented by Chinese astronomers in 687 BC.
The titer of the scrapie agent was determined by measurements of time intervals from inoculation to onset of illness and from inoculation to death. Both intervals were found to be inversely proportional to the size of the dose injected intracerebrally into random-bred weanling Syrian hamsters. The logarithms of the time intervals minus a time factor were linear functions of the logarithm of the inoculum size. The time factors were determined by regression analysis in order to maximize these linear relationships. An equation relating the titer of the inoculum to the dilution of the sample and the length of the time intervals was developed. This equation facilitates the use of a computerized data base. Validation of these relationships was provided by comparing samples for which the agent was measured both by end-point titration and by time interval assay. Agreement between the two methods was generally within +/-0.5 log10 median infective dose units. No differences between the molecular properties of the agents from hamster and murine sources were observed using primarily the incubation time interval method with the former and end-point titration with the latter. The advantages of this new approach based on time interval measurements are considerable with respect to time and resources.
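As a numerical sketch of the relationship described above, the following Python code fits the log-linear model log10(interval − time factor) = a + b·log10(dose) and then inverts it to estimate a sample’s log titer from its incubation interval and dilution. This is an illustrative reconstruction, not the authors’ actual equation or code; the time factor is assumed to be already known here, whereas the study determines it by regression analysis.

```python
import numpy as np

def fit_incubation_model(log_dose, interval_days, time_factor):
    """Fit log10(interval - time_factor) = a + b * log10(dose).

    Returns intercept a and slope b (b is negative: larger doses
    give shorter incubation intervals)."""
    y = np.log10(np.asarray(interval_days) - time_factor)
    b, a = np.polyfit(log_dose, y, 1)  # polyfit returns highest degree first
    return a, b

def estimate_log_titer(interval_days, a, b, time_factor, log_dilution):
    """Invert the fitted line for one animal's interval, then correct
    for the dilution at which the sample was inoculated."""
    log_dose = (np.log10(interval_days - time_factor) - a) / b
    return log_dose - log_dilution
```

Given a calibration curve fitted on reference inocula, a single measured interval and the sample’s dilution are enough to read off an estimated log titer, which is the time-saving property the abstract describes.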
Introduction {#sec1-0300060520922649}
============
Diabetes mellitus is a common metabolic disorder linked to different complications in various organs, such as the heart, eyes, lower-limb blood vessels, lungs, and brain.^[@bibr1-0300060520922649]^ Persistent hyperglycaemia in patients with diabetes mellitus always promotes increased oxidative stress that is characterized by cognitive impairment and memory loss.^[@bibr2-0300060520922649]^ This may be linked to the fact that brain tissues are extremely vulnerable to oxidative damage, possibly due to the high oxygen consumption rate of 20%, the presence of abundant polyunsaturated fatty acids in cell membranes, and the high iron (Fe) content, as well as low enzymatic activity of antioxidants.^[@bibr2-0300060520922649]^ Therefore, diabetes mellitus is a crucial risk factor for cognitive dysfunction.
Several drugs are useful in the management of diabetes mellitus and its related complications, but they are known to be associated with various side effects.^[@bibr3-0300060520922649]^ As a result, a number of herbs are used to manage diabetes mellitus and its associated complications, particularly neuropathy, including an African eggplant (*Solanum macrocarpon*) that is normally consumed as a vegetable, especially in Nigeria.^[@bibr4-0300060520922649]^ Few studies have been performed with this plant, and there is currently little or no information regarding the effect of *S. macrocarpon* on the brain of rats with alloxan-induced diabetes. Thus, the aim of the present study was to evaluate the protective effect of *S. macrocarpon* Linn leaf aqueous extract in the brain of an alloxan-induced rat model of diabetes.
Materials and methods {#sec2-0300060520922649}
=====================
Plant collection and identification {#sec3-0300060520922649}
-----------------------------------
The *S. macrocarpon* leaves were purchased from the Oleh market in Delta State, Nigeria, and were then authenticated at the Forestry Research Institute of Nigeria (FRIN), Ibadan, Nigeria (voucher number FHI: 111316).
Extract preparation {#sec4-0300060520922649}
-------------------
The *S. macrocarpon* leaves were dried at room temperature for 4 weeks and processed to powder form using an electric blender. Thereafter, a known weight of the powdered sample was soaked in distilled water (1:10 w/v) for 72 h. The solution was then filtered and the obtained filtrate was freeze-dried. In order to use an ethnobotanical dose in this study, an equivalent of the cup normally used in the home was freeze-dried separately to obtain the yield, which was then used to calculate three different doses.
Experimental animals and induction of diabetes mellitus {#sec5-0300060520922649}
-------------------------------------------------------
A total of 36 Wistar albino rats, aged between 7 and 8 weeks (weight range, 130--140 g), were obtained from the Animal House of Afe Babalola University, Ado-Ekiti, Ekiti, Nigeria. The animals were acclimatised for 2 weeks at room temperature with free access to feed and water, with a 12-h light/12-h dark cycle. This work was approved by the Animal Ethical Committee of Afe Babalola University, Ado-Ekiti, Ekiti State, Nigeria (Ethics approval number ABUAD/SCI/19/016), and was performed according to the Committee's ethical standards.
Diabetes mellitus was induced in 30 of the experimental animals by a single intraperitoneal injection of 150 mg/kg body weight of alloxan monohydrate (Sigma-Aldrich; St Louis, MO, USA). At 48 h following alloxan-induction, the fasting blood glucose level of each animal was checked using an Accu-check glucometer (OneTouch Glucometer, supplied by Central Diagnostic Laboratory, Ilorin, Kwara State, Nigeria) to confirm that fasting blood glucose levels were ≥250 mg/dl, as previously described.^[@bibr5-0300060520922649]^
Experimental design {#sec6-0300060520922649}
-------------------
The animals were divided into 6 groups (*n* = 6) as follows:

- Group 1: normal control
- Group 2: diabetes control
- Group 3: rats with diabetes administered 5 mg/kg metformin (Sigma-Aldrich)
- Group 4: rats with diabetes administered 12.45 mg/kg body weight of *S. macrocarpon* leaf aqueous extract
- Group 5: rats with diabetes administered 24.9 mg/kg body weight of *S. macrocarpon* leaf aqueous extract
- Group 6: rats with diabetes administered 49.8 mg/kg body weight of *S. macrocarpon* leaf aqueous extract
The animals were sacrificed on day 14 by cervical dislocation and the brain of each rat was quickly excised, homogenized using Tris-HCl buffer (Sigma-Aldrich) and centrifuged at 4 000 *g* for 15 min at 24°C to obtain a clear supernatant for use in different biochemical analyses, as previously described.^[@bibr6-0300060520922649]^ Samples were then deep frozen for storage prior to analysis.
Fasting blood glucose {#sec7-0300060520922649}
---------------------
Fasting blood glucose levels were determined at baseline, at 48 h following alloxan induction of diabetes, and at day 14 of treatment, using an Accu-chek® glucometer, by placing a drop of tail vein blood onto the glucose strip, as previously described.^[@bibr7-0300060520922649]^
Oxidative stress biomarkers {#sec8-0300060520922649}
---------------------------
The level of malondialdehyde (MDA) and activities of superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx) in brain homogenates were determined using an RX Daytona automated analyser (Randox; County Antrim, UK) according to the manufacturer's instructions.
Neurotransmitter levels {#sec9-0300060520922649}
-----------------------
Levels of epinephrine, norepinephrine, dopamine, and serotonin in rat brain homogenates were determined using commercial enzyme-linked immunosorbent assay (ELISA) kits (Cusabio; Houston, TX, USA) according to the manufacturer's instructions.
Determination of cholinesterase activity {#sec10-0300060520922649}
----------------------------------------
Acetylcholinesterase (AChE) and butyrylcholinesterase (BChE) activity levels in the rat brain on experimental day 14 were measured as follows. Briefly, 50 ml of brain homogenate, 50 ml of 2-nitrobenzene acid (Sigma-Aldrich) and 175 ml of 0.1 mol/l phosphate buffered saline (pH 8.0; Sigma-Aldrich) were mixed together and incubated for 20 min at 25°C. Thereafter, 25 ml of both acetylthiocholine iodide and butyrylthiocholine iodide solution was added to the solution. The absorbance was measured at 412 nm using a Randox microplate reader, as previously described.^[@bibr8-0300060520922649]^
Determination of other biochemical parameters {#sec11-0300060520922649}
---------------------------------------------
Cyclooxygenase (COX)-2 and nitric oxide (NO) levels in the rat brain on experimental day 14 were determined using commercial ELISA kits (Merck, Darmstadt, Germany), according to the manufacturer's instructions.
Statistical analyses {#sec12-0300060520922649}
--------------------
Data are reported as mean ± SD of six replicates, and were statistically analysed using GraphPad Prism 5 software (GraphPad Software, San Diego, CA, USA). Between-group and within-group differences were assessed using one-way analysis of variance (ANOVA) followed by Tukey's post-hoc test. Statistical significance was set at *P* \< 0.05.
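The comparison described above can be sketched programmatically. The authors used GraphPad Prism; the hypothetical Python snippet below only shows how the one-way ANOVA F statistic is formed from group data (Tukey's post-hoc test, which requires the studentized range distribution, is omitted):

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of 1-D samples:
    the ratio of between-group to within-group mean squares."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    k, n = len(groups), all_data.size
    # Between-group sum of squares: group means vs the grand mean
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: observations vs their own group mean
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F relative to the F distribution with (k − 1, n − k) degrees of freedom corresponds to *P* \< 0.05 in the analysis reported here.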
Results {#sec13-0300060520922649}
=======
At the end of the experimental period (day 14), fasting blood glucose levels were significantly higher in diabetes control rats compared with normal controls (*P* \< 0.05). However, a statistically significant (*P* \< 0.05) reduction in fasting blood glucose levels at day 14 was observed in rats with diabetes administered different doses of *S. macrocarpon* leaf aqueous extract, as well as 5 mg/kg body weight of metformin (*P* \< 0.05 versus diabetes control rats at day 14 and versus their own group at 48 h of diabetes induction; [Figure 1](#fig1-0300060520922649){ref-type="fig"}).
{#fig1-0300060520922649}
At experimental day 14, levels of MDA in control diabetes rats were significantly higher compared with all other groups (*P* \< 0.05; [Figure 2](#fig2-0300060520922649){ref-type="fig"}). There were no statistically significant differences in levels of MDA between normal control rats, and rats with diabetes administered 12.45, 24.9 or 49.8 mg/kg body weight of *S. macrocarpon* leaf aqueous extract or 5 mg/kg body weight of metformin (*P* \> 0.05; [Figure 2](#fig2-0300060520922649){ref-type="fig"}).
{#fig2-0300060520922649}
Rats with diabetes administered 12.45, 24.9 and 49.8 mg/kg body weight of *S. macrocarpon* leaf aqueous extract revealed significantly increased SOD activity in brain tissue at experimental day 14 compared with control diabetes rats (*P* \< 0.05). In addition, SOD activity in rats with diabetes administered 49.8 mg/kg body weight of *S. macrocarpon* leaf aqueous extract was not significantly different from normal control rats (*P* \> 0.05). SOD activity in rats with diabetes administered 12.45 mg/kg body weight of *S. macrocarpon* leaf aqueous extract showed no significant difference compared with diabetes rats administered 5 mg/kg body weight of metformin (*P* \> 0.05; [Figure 2](#fig2-0300060520922649){ref-type="fig"}).
On experimental day 14, brain tissue in diabetes control rats showed significantly lower CAT and GPx activity versus normal control rats (*P* \< 0.05), but there were no statistically significant differences in the activities of CAT and GPx in the brains of rats with diabetes administered 49.8 mg/kg body weight of *S. macrocarpon* leaf aqueous extract compared with normal controls (*P* \> 0.05). In addition, diabetes control rats showed significantly decreased CAT and GPx activity compared with all of the treated groups (*P* \< 0.05; [Figure 2](#fig2-0300060520922649){ref-type="fig"}). There was no significant difference in GPx activity in the brain tissue of diabetes rats administered 12.45 and 24.9 mg/kg body weight of *S. macrocarpon* leaf aqueous extract compared with 5 mg/kg body weight of metformin (*P* \> 0.05; [Figure 2](#fig2-0300060520922649){ref-type="fig"}).
Brain tissue levels of epinephrine, norepinephrine, dopamine, and serotonin were significantly increased in diabetes control rats compared with diabetes rats administered 12.45, 24.9 and 49.8 mg/kg body weight of *S. macrocarpon* leaf aqueous extract, as well as those administered 5 mg/kg body weight of metformin (*P* \< 0.05; [Figure 3](#fig3-0300060520922649){ref-type="fig"}). In addition, diabetes rats administered different doses of *S. macrocarpon* leaf aqueous extract exhibited no significant difference in levels of these neurotransmitters compared with normal control rats (*P* \> 0.05). There was no significant difference in epinephrine and norepinephrine levels in diabetes rats administered different doses of *S. macrocarpon* leaf aqueous extract and diabetes rats administered 5 mg/kg body weight of metformin (*P* \> 0.05). But levels of dopamine and serotonin were significantly different between diabetes rats administered different doses of *S. macrocarpon* leaf aqueous extract and diabetes rats administered 5 mg/kg body weight of metformin (*P* \< 0.05; [Figure 3](#fig3-0300060520922649){ref-type="fig"}).
{#fig3-0300060520922649}
Levels of AChE and BChE were significantly increased in the brain of diabetes control rats compared with other groups (*P* \< 0.05; [Figure 4](#fig4-0300060520922649){ref-type="fig"}). However, on day 14 of the experiment, diabetes rats administered different doses of *S. macrocarpon* leaf aqueous extract revealed significantly decreased AChE levels compared with diabetes controls (*P* \< 0.05). Also, there was no significant difference in AChE levels between diabetes rats administered 24.9 and 49.8 mg/kg body weight of *S. macrocarpon* leaf aqueous extract and normal control rats (*P* \> 0.05). Diabetes rats administered 12.45 mg/kg body weight of *S. macrocarpon* leaf aqueous extract showed significantly lower AChE levels versus diabetes rats administered 5 mg/kg body weight of metformin (*P* \< 0.05; [Figure 4](#fig4-0300060520922649){ref-type="fig"}). In addition, there were no significant differences in BChE levels between normal control rats and diabetes rats administered 12.45, 24.9 and 49.8 mg/kg body weight of *S. macrocarpon* leaf aqueous extract, or diabetes rats administered 5 mg/kg body weight of metformin (*P* \> 0.05; [Figure 4](#fig4-0300060520922649){ref-type="fig"}).
{#fig4-0300060520922649}
Finally, diabetes control rat brain tissue demonstrated significantly increased COX-2 activity and NO level compared with normal controls and all of the treated diabetes groups (*P* \< 0.05; [Figure 5](#fig5-0300060520922649){ref-type="fig"}). On experimental day 14, diabetes rats administered 12.45, 24.9 and 49.8 mg/kg body weight of *S. macrocarpon* leaf aqueous extract, as well as those administered 5 mg/kg body weight of metformin, exhibited no significant difference versus normal control rats in COX-2 activity and NO level.
{#fig5-0300060520922649}
Discussion {#sec14-0300060520922649}
==========
Diabetes mellitus is associated with hyperglycaemia,^[@bibr3-0300060520922649]^ the persistence of which can affect different organelles in the body system, including the brain, leading to neuropathy. Hyperglycaemia was also observed in the present rat model of diabetes, and the normoglycaemia results observed in the diabetes rats administered different doses of *S. macrocarpon* leaf aqueous extract ([Figure 1](#fig1-0300060520922649){ref-type="fig"}) support the local usage of this plant in the management of diabetes mellitus, and may be attributed to phenolic compounds present in the plant as reported previously.^[@bibr5-0300060520922649],[@bibr9-0300060520922649]^
Oxidative stress plays an important role in cellular injury due to persistent hyperglycaemia, stimulating free radical production and thereby weakening the immune system of such individuals, who become unable to counteract the increased reactive oxygen species (ROS) generation leading to oxidative stress.^[@bibr10-0300060520922649]^ Lipids are reported as one of the primary targets of ROS, especially in the brain due to its high lipid content, and this is probably responsible for the high level of lipid peroxidation in the brain of rats with induced diabetes. The increased MDA level in the present rat model of diabetes may be linked to a decline in defence mechanisms of antioxidant enzymes ([Figure 2](#fig2-0300060520922649){ref-type="fig"}).^[@bibr11-0300060520922649]^ This may be responsible for the reduction in activities of SOD, CAT, and GPx in the brain of diabetes rats, as shown in the current study. SOD is an important enzyme against cellular damage produced by ROS, and it promotes the conversion of superoxide radicals into hydrogen peroxide. CAT encourages the conversion of hydrogen peroxide into water,^[@bibr12-0300060520922649]^ and this is supported by the cytosolic enzyme GPx. The rapid decrease in activities of these antioxidant enzymes could be attributed to their combatting effect on free radicals.^[@bibr13-0300060520922649]^ The ability of aqueous extract of *S. macrocarpon* leaf to boost the activities of SOD, CAT, and GPx with a corresponding decrease in MDA ([Figure 2](#fig2-0300060520922649){ref-type="fig"}) may be attributed to the antioxidant nature of the extract supporting the anti-neuropathy effects. This may be the main mechanism of action of aqueous extract of *S. macrocarpon* leaf as an anti-neuropathy agent.
Neurotransmitters are endogenous substances that mediate neurotransmission, transmitting signals across a chemical synapse, such as a neuromuscular junction, from one neuron to another target neuron, muscle cell, or gland cell.^[@bibr14-0300060520922649]^ The persistent hyperglycaemia of the diabetes mellitus state, which triggers an increase in ROS production, may be responsible for the abnormal increase in levels of all neurotransmitters measured in the present study (epinephrine, norepinephrine, dopamine, and serotonin; [Figure 3](#fig3-0300060520922649){ref-type="fig"}). The abnormal increase in levels of epinephrine, norepinephrine, dopamine, and serotonin supports the neuropathy complication of diabetes mellitus, and the aqueous extract of *S. macrocarpon* leaf was able to reduce the levels of these neurotransmitters in the present study, supporting its anti-neuropathy effects, probably through its ability to boost antioxidant enzyme activities.
Cholinesterases (AChE and BChE) are important enzymes related to memory and cognitive functions, and persistent hyperglycaemia may trigger memory loss, particularly in type II diabetes mellitus.^[@bibr2-0300060520922649]^ Also, AChE and BChE are crucial enzymes in the management of neurodegenerative diseases, such as Alzheimer's disease,^[@bibr15-0300060520922649]^ another secondary complication of diabetes mellitus. Increased activities of both AChE and BChE in the present study ([Figure 4](#fig4-0300060520922649){ref-type="fig"}) suggest that memory loss and neurodegenerative diseases may be present in diabetes control rats. It is noteworthy that rats with diabetes administered different doses of *S. macrocarpon* leaf aqueous extract showed inhibited AChE and BChE activity, suggesting a reduction in the hydrolysis of acetylcholine, as well as amelioration of neuronal damage, which may be associated with correction of memory loss.
The inflammatory enzyme COX-2 plays an important role in the pathogenesis of diabetes mellitus.^[@bibr16-0300060520922649]^ Inhibition of this enzyme plays a crucial role in protecting patients with diabetes mellitus from inflammation. An increase in COX-2 activity in diabetes control rats in the present study ([Figure 5](#fig5-0300060520922649){ref-type="fig"}) suggests a high level of inflammation in their brain. Hence, the ability of *S. macrocarpon* leaf aqueous extract to inhibit the activity of COX-2 further supports its anti-neuropathy effects, and this may be linked to the different bioactive compounds present in the extract, as reported previously.^[@bibr9-0300060520922649]^ In addition, the increased NO level arising from diabetic pathophysiology may be responsible for both cell apoptosis and necrosis, as the reaction of NO with superoxide anion produces peroxynitrite,^[@bibr17-0300060520922649]^ as observed in the present study ([Figure 5](#fig5-0300060520922649){ref-type="fig"}), and this may also be associated with the increased lipid peroxidation observed ([Figure 2](#fig2-0300060520922649){ref-type="fig"}). The aqueous extract of *S. macrocarpon* leaf administered at different doses showed potential to reduce the level of NO in the brain of animals with diabetes, further supporting the anti-neuropathy effects linked to the antioxidant activity of the extract, as well as possible synergistic reactions of the different bioactive compounds reported in the extract.^[@bibr9-0300060520922649]^
In conclusion, the aqueous extract of *S. macrocarpon* leaf demonstrated the ability to reduce lipid peroxidation, and boost the brain tissue activities of CAT, SOD, and GPx. The anti-neuropathy effects of the extract were also substantiated by the amelioration of epinephrine, norepinephrine, dopamine and serotonin levels, cholinesterase activities, and COX-2 and NO levels.
We are very grateful to Miss Sonia for purchasing the *S. macrocarpon* leaves, and to all the technologists who were involved in this experiment in the Biochemistry laboratory of Afe Babalola University, Ado-Ekiti, Ekiti State, Nigeria.
Declaration of conflicting interest {#sec15-0300060520922649}
===================================
The authors declare that there is no conflict of interest.
Funding {#sec16-0300060520922649}
=======
This work was financially assisted by the South African Medical Research Council (SAMRC) through funding received from the South African National Treasury. However, the content of this manuscript is the view of the authors.
ORCID iD
========
Basiru O. Ajiboye <https://orcid.org/0000-0001-5982-2322>
I had not heard of the term Rembrandt Lighting until today. It is a lighting technique used for portraits.
I wish I had known about the technique before I took a couple of photographs of my boss for his Facebook profile page. The photographs were to replace his iPhone self-portrait, which made him look like a serial killer. The entire shoot took less than 5 minutes.
I will re-shoot with Rembrandt Lighting if I have another chance.
If you're wondering why I shot portraits in Landscape mode, it's because I didn't have my grip installed on the camera; I have been shooting "light" since the summer hiatus.
Fri Oct 02 08:20:58 2009: A note on the lighting setup: I switched off all the overhead lights and used the ambient light from an adjacent room to light the subject. Since the image was going to be reduced to a postage-stamp-sized thumbnail, I didn't care about sharpness or noise. The important thing was contrast between the subject and the background.
Troyal Garth Brooks is an American country singer and songwriter.
His integration of rock and roll elements into the country genre has earned him immense popularity in the United States. Brooks has had great success in the country single and album charts, with multi-platinum recordings and record-breaking live performances, while also crossing over into the mainstream pop arena.
According to the RIAA, he is the best-selling solo albums artist in the United States with 148 million domestic units sold, ahead of Elvis Presley, and is second only to The Beatles in total album sales overall. He is also one of the world's best-selling artists of all time, having sold more than 160 million records.
(As of September 23, 2016) Brooks is now the only artist in music history to have released seven albums that achieved diamond status in the U.S.: Garth Brooks (10× platinum), No Fences (17× platinum), Ropin' the Wind (14× platinum), The Hits (10× platinum), Sevens (10× platinum), Double Live (21× platinum), and The Ultimate Hits (10× platinum). Since 1989, Brooks has released 21 records in all, including 12 studio albums, 1 live album, 3 compilation albums, 3 Christmas albums and 4 box sets, along with 77 singles. He has won several awards in his career, including 2 Grammy Awards, 17 American Music Awards (including Artist of the '90s) and the RIAA Award for best-selling solo albums artist of the century in the U.S.
Source: Wikipedia
Popular Garth Brooks albums
- The Hits
- Fresh Horses
- Sevens
- The Ultimate Hits
- Scarecrow
In November, Professor Rae Cooper from the University of Sydney Business School, and colleagues from Australian National University, published a report titled “Pandemic Pressures: Job Security and Customer Relations for Retail Workers”. Their survey of 1160 retail, fast-food, and distribution workers in Australia found more than half had experienced customer abuse during the pandemic. Similarly, a retail workers’ union survey conducted in December 2020 reported 88 per cent of retail workers had experienced abuse in the previous 12 months. In the UK, this figure was 90 per cent, which prompted the Scottish Government to enact laws protecting retail workers.
Last month, in an open letter to Victorian Premier Daniel Andrews, Australian Retailers Association CEO Paul Zahra highlighted several examples of customer aggression, including physical assaults and verbal harassment. Associations are now calling on governments for greater protection of retail and frontline service workers as businesses re-open for the fully vaxxed. While many have pointed directly to the pandemic as a trigger for abuse and aggression – there has certainly been a spike – we argue that such behaviour has been present for decades. For example, prior to the pandemic, the SDA retail workers union found 80 per cent of workers had experienced abuse in the past year. There are many academic studies that detail abuse and aggression in call centres, airlines and other service sectors, years before Covid.
The psychology behind abusive customer behaviours
Increases in customer aggression and abuse can be explained using three broad approaches: a psychological approach, a contingency approach, and a sociological approach.
The classic psychological approach suggests abuse and aggression are forms of ‘deviant’ behaviour and therefore treats the deviant individual as the key unit of analysis. With deviant individuals identified, psychologists then proceed to analyse the characteristics of these individuals, seeking to identify key traits they have in common. These traits may include dysfunctionality, anti-social behaviours, extreme egocentricity or extreme narcissism, and they are then identified as the primary cause of the deviant behaviour leading to abuse. While this approach may explain a small proportion of customers who walk through the door, it cannot reasonably account for the 80-90 per cent of workers reporting abuse.
The second approach draws on contingency theory to identify contextual and situational factors that act as triggers of abusive and aggressive customer behaviour. This approach may explain the rise in abuse due to changes within the servicescape. Such changes may include layout and design changes, environmental shifts, process changes, and the exterior service environment. This approach certainly does explain recent reports of abuse and aggression directed at retail workers due to new check-in protocols and health directives. While logically this approach implies that there is latent customer abuse and aggression bottled up, waiting to be released at an innocent and unsuspecting retail worker, it fails to address the crucial question of why this might be so – why customer abuse is systemically present within the retail and service sector.
We suggest that to address this question, researchers must turn to the sociological approach. Sociological approaches to abusive and aggressive customer behaviours centre on the idea that such behaviours are often deeply linked to prevailing social norms. We argue that the social creation of customer abuse lies within the fabric of the service economy.
The myth of customer sovereignty
Our key argument advanced here is that customer abuse systemically arises when the myth of customer sovereignty, which retailers and service organisations advance, breaks down. The notion that the customer is always right predetermines a fragile relationship hierarchy that quickly dissolves when the rule of customer sovereignty is broken. For example, upon entering a store, customers are regularly welcomed warmly and the employee offers to be of service – get another size, call another store, arrange for a free delivery, or make a cup of coffee. The relationship is essentially one of master and servant.
Customer abuse arises at the point where customer enchantment turns to disillusionment. The point upon which the servant requests to see the master’s vaccination certificate, or proof of purchase. Led into assuming the trappings of authority, customers then feel powerless and offended when the myth of sovereignty dissolves. The hierarchy shifts and the fragile relationship fails, and it is this approach that leads to the systemic presence of customer abuse in the retail and service economy.
Status shields and anonymous relationships
Intensifying these abusive behaviours are two other factors – a perceived lack of a status shield and anonymous, disconnected relationships. Traditionally, retail and service frontline workers tend to be younger, female, or from migrant minorities – groups often more exposed to abuse and aggression. There is also a perception that much of this work is undertaken by low-skilled, low-paid employees. Hence they lack a ‘status shield’, which is afforded to other professionals, such as doctors and lawyers. Employees with low status shields are considered easy targets, with little power. Adding to this are the multiple disconnected and anonymous interactions customers have with certain retail and service workers. We don’t know the young team member who just packed our bags poorly at the supermarket or the fast-food worker who just got our order wrong. Therefore, we are more inclined to snap at them than our hairdresser, whom we have known for years. Overall, the notion that the customer is always right, and therefore superior to workers, and the fact that there are few penalties for abusing retail workers, have contributed to the problem.
Underpinning this argument is Displaced Aggression Theory, which predicts an individual treated unfairly (can’t enter without proof of a vaccine) may behave aggressively toward a third party (retail or service worker) because the source (health directive) is too powerful and may exert retaliation (a fine, or penalty). Displaced Aggression Theory is often referred to anecdotally as ‘Kicking the (Barking) Dog Effect’.
The ‘self’-centred solution
Sadly, when an abusive incident occurs, retail and service workers often feel very isolated, vulnerable, and helpless. The innocent bystander effect explains that when a large group of customers are shopping in a store, individuals are less likely to intervene when a retail worker is being subjected to aggressive or abusive behaviour. The other customers assume that someone else will speak up on the worker’s behalf. This is particularly prevalent during one-off or infrequent encounters with retail staff, such as in large supermarkets, as opposed to longer-term customer relationships and repeat interactions, which often happen in smaller stores.
The traditional approach to reducing abuse of retail workers tends to focus on the behaviour and reminding the customer that it is wrong and won’t be tolerated. Such tactics are often supported with visible security guards, resilience training, in-store signage, and surveillance cameras. However, burly security guards, CCTV and signage detract from a positive shopping experience and resilience training doesn’t fix the problem of abuse, only compensates for it by preparing team members after the fact.
It is suggested retailers adopt a self-surveillance strategy combining two tactics. The first is a traditional external motivation to do the right thing, such as amplifying the spotlight effect with an overt reminder that we are being watched. The second is to evoke self-reflection and self-regulation. Research shows the effectiveness of cues that cause us to self-focus and self-regulate; they are part of an evolutionary instinct to focus on one’s self. University of East Anglia researcher Rose Meleady and colleagues demonstrated this with experiments using signs to encourage drivers to turn off their engines at a busy rail crossing with a two-minute average wait.
After an experiment just using a “watching eyes” image (with no discernible effect), they tried two signs. One with a set of human eyes and the words “When barriers are down, switch off your engine.” The other with just the words: “Think of yourself: When barriers are down, switch off your engine.” With no sign, 20 per cent of drivers switched off their engines. With the watching eyes sign, 30 per cent switched off. With the “think of yourself” sign, 51 per cent did so.
Ultimately, this problem needs to be tackled from multiple sides – business, society, and government. Retailers’ associations have developed a suite of resources for retail workers, including training and posters. More recently, Woolworths introduced worker badges.
The SDA’s ‘No-one deserves a serve’ campaign has leveraged personalisation. It re-focuses the abusive customer on the person (someone’s son, daughter or mother) and not the ‘brand’. We suggest a federally funded, nationwide campaign reminding the community of the importance of essential workers. In addition, greater penalties should be legislated (as they have been in Scotland with the 2018 Protection of Workers Bill), in a similar way to how police, paramedics, corrective services officers, and other frontline emergency service workers are protected at work.
PRIOR RELATED APPLICATIONS
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional application Ser. No. 60/002,953, filed Aug. 30, 1995.
BACKGROUND OF THE INVENTION
It is well known that Bacillus thuringiensis, which accounts for the majority of all biopesticides used today, produces a crystalline inclusion during sporulation. This crystalline inclusion is composed of one or more δ-endotoxins and is responsible for toxicity to a wide range of insect hosts, including larvae of lepidopteran, coleopteran, and dipteran insects.
Due to considerable interest in the use of B. thuringiensis as a biological pesticide, numerous studies have been done on identification of the insecticidal proteins and genes encoding these proteins in B. thuringiensis strains. However, less is known about the genes involved in the sporulation pathway. The sporulation genes are not only responsible for sporulation but also are associated with crystal production. Most insecticidal crystal proteins are expressed only during sporulation. Therefore, the identification and characterization of the genes involved in the sporulation pathway would enhance our knowledge and ability to manipulate crystal production and the sporulation cycle, and further would potentially increase the effectiveness of B. thuringiensis as a biological pesticide.
The production of viable spores by recombinant B. thuringiensis strains can be a disadvantage with respect to the use of B. thuringiensis products. All commercial B. thuringiensis strains form spores which are released into the surrounding environment in combination with the insecticidal proteins and while the spores may provide some insecticidal advantage, they are highly durable structures and can survive in the soil under extreme weather conditions.
The use of asporogenic or oligosporogenic B. thuringiensis in biopesticides could prevent or diminish the release of these viable spores, making the use of B. thuringiensis biopesticides an even more attractive alternative to the use of conventional pesticides. Moreover, documentation in the literature suggests that overexpression of certain cry genes can occur in asporogenic strains. For example, expression of the cryIIIA gene has been observed to increase in sporulation mutants of Bacillus strains.
The morphological and physiological changes that occur during sporulation have been studied extensively in Bacillus subtilis. In general once sporulation is initiated, the cells undergo a number of morphological stages and sporulation involves a radical change of the biosynthetic activity of the bacterium. As sporulation begins, the chromosome condenses. At stage II, cell division occurs producing sister cells that are different in size. The smaller cell is the daughter cell, also known as the forespore or prespore. The larger cell is designated the mother cell. During stage III, the mother cell engulfs the forespore resulting in the formation of a double-membrane around the forespore inside the mother cell. A modified form of a cell wall known as cortex is synthesized between the inner and outer membranes of the prespore during stage IV followed by spore coat deposition on the outer membrane of the prespore during stage V. Stage VI is defined as the complete maturation of the spore. At this stage the spore develops its characteristic properties of resistance to radiation, heat, lysozyme, and organic solvents. Finally, the mother cell lyses and the mature spore is released in stage VII. The free spore is refractile and can be easily observed using light microscopy.
Genes that are needed for sporulation can be recognized by creating mutations which permit normal vegetative growth, but block sporulation. In Bacillus subtilis at least 100 sporulation genes have been identified which are involved in the sporulation process. The genes are designated as spo0, spoII, spoIII, etc., depending upon the stage in which sporulation is blocked. Genes involved in later events of sporulation in Bacillus subtilis have been identified as spoV genes, and spoV A, B, D, Ea, Eb, G, Id, J and R have been identified in databases. The capital letters indicate loci containing mutations conferring similar phenotypes but mapping at distinct chromosomal positions. Sporulation and gene expression and control in Bacillus subtilis is further discussed in Errington, Jeffrey, Bacillus subtilis Sporulation: Regulation of Gene Expression and Control of Morphogenesis, Microbiol. Rev. 57:1-33, which is hereby incorporated by reference.
The present invention is the first known isolation of a stage V sporulation gene from a host B. thuringiensis strain. The present gene, designated spoVBt1, is about 65.5% homologous to the B. subtilis spoVJ gene at the nucleotide level, and the transposon Tn917 was used as a tool for the identification of spoVBt1.
The novel spoVBt1 gene and related spoV DNA sequences (as defined below) are used in a method of stably introducing exogenous DNA into bacteria. The spoV sequences of the invention are substantially homologous to a fragment of a sporulation gene located on a bacterial chromosome. The bacterial fragment comprising a sporulation gene serves as a site for chromosomal integration of the exogenous DNA and spoV sequence.
Surprisingly, it has also been found that if the spoVBt1 gene and other spoV DNA sequences are mutated by, for example, point mutations, not only will exogenous DNA be incorporated into the bacterial chromosome, but the recipient bacteria will also form mutated spores.
SUMMARY OF THE INVENTION
The present invention relates to an isolated spoVBt1 gene having the nucleotide sequence as shown in SEQ. ID NO.1. This invention further relates to an isolated spoV DNA sequence selected from the group consisting of i) the above-identified isolated spoVBt1 gene; ii) a nucleotide sequence encoding a Bacillus thuringiensis sporulation protein as depicted in SEQ. ID NO:2; iii) a nucleotide sequence encoding a Bacillus thuringiensis sporulation protein substantially similar to the protein depicted in SEQ. ID NO.2; iv) a nucleotide sequence which hybridizes to a complementary strand of a sequence of i), ii) or iii), under stringent hybridization conditions and v) a truncated nucleotide sequence of i), ii), iii) or iv) above wherein said truncated sequence includes at least 300 nucleotides and more preferably at least 500 nucleotides.
The invention further relates to a DNA segment comprising the spoV DNA sequence defined above linked to a DNA sequence encoding at least one insecticidal crystal protein wherein codons of said spoV DNA sequence comprises nucleotide sequences substantially homologous to sequences present in Bacillus thuringiensis chromosomal DNA and which allows for recombination. This DNA segment may be chromosomally integrated into a host Bacillus thuringiensis. The B. thuringiensis chromosomal fragment which is substantially homologous to the spoV DNA sequence serves as an integration site for the DNA segment. In this manner the invention includes an increase in the crystal gene content of a bacterium.
The invention also comprises a DNA segment comprising a mutated spoV DNA sequence (defined herein below) operably linked to a DNA sequence encoding at least one insecticidal crystal toxin protein wherein codons of said mutated spoV DNA sequence comprise nucleotide sequences substantially homologous to sporulation gene sequences present in Bacillus thuringiensis chromosomal DNA so that the DNA segment is capable of being inserted into the bacterial chromosomal sporulation gene locus and replicated and further the insecticidal crystal toxin protein is capable of being expressed in a Bacillus host wherein said host is rendered asporogenic or oligosporogenic.
The invention has particular relevance to recombinant B. thuringiensis strains wherein toxic crystal proteins are expressed by a transformed host but wherein spores are released into the environment. Therefore, in addition, the invention concerns a method of preparing asporogenic or oligosporogenic insecticidal crystal protein producing Bacillus thuringiensis strains comprising a) obtaining a DNA segment which includes a mutated spoV DNA sequence operably linked to at least one and no more than three insecticidal crystal protein encoding sequences; b) introducing said segment into a Bacillus thuringiensis host capable of sporulation; c) allowing homologous recombination to occur between the DNA segment and a substantially homologous nucleotide fragment of a sporulation gene in the host Bacillus thuringiensis chromosome wherein said DNA segment is stably integrated into the Bacillus thuringiensis chromosome and disrupts the sporulation process and; d) isolating a stably transformed asporogenic or oligosporogenic Bacillus thuringiensis host transformant wherein said stably transformed host is capable of expressing the introduced insecticidal crystal protein sequences.
A further object of the invention includes the transduction of a transformed Bacillus thuringiensis host comprising exposing the transformed host to a transducing phage; allowing said phage to replicate in said host wherein one to three exogenous insecticidal crystal protein encoding DNA sequences integrated into said Bacillus thuringiensis host chromosome are incorporated into the phage and introducing the insecticidal crystal protein encoding DNA sequence from said phage into a recipient Bacillus thuringiensis wherein said introduced exogenous crystal protein encoding DNA sequence is stably incorporated into said chromosome of the recipient and expressed in said recipient. The recipient Bacillus thuringiensis may or may not be rendered asporogenic or oligosporogenic depending on the DNA segment.
In this regard the invention includes a method of using a Bacillus thuringiensis chromosomal sporulation gene fragment as a locus for chromosomal integration of a DNA segment, the DNA segment comprising at least one insecticidal crystal protein encoding gene wherein said gene is stably integrated into the Bacillus thuringiensis chromosome.
The invention further relates to a broad spectrum insecticidal composition comprising an insecticidally effective amount of a transformed B. thuringiensis according to the invention and an acceptable carrier thereof.
Another objective of the present invention is the genetic engineering of a B. thuringiensis host whereby use of said host for the control of pathogenic insects provides an environmentally safer biopesticide wherein viable spores are not released into the environment.
A further object of the invention includes a method of protecting crop plants comprising applying to the locus where control is desired a composition of the invention.
Other aspects of the present invention will become apparent to those skilled in the art from the following description and figures.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 illustrates the predicted amino acid sequence of the isolated spoVBt1 gene as illustrated in SEQ ID NO.1 and corresponds to SEQ. ID NO. 2.
FIG. 2 illustrates the spoVBt1 gene interrupted with Tn917 and the location of oligonucleotides used for sequencing.
FIG. 3 illustrates the plasmids, pSB901, pBR322 and pSB140.
FIG. 4 illustrates plasmid pSB210.
FIG. 5 illustrates plasmid pSB1207.
FIG. 6 illustrates plasmid pSB1209.
FIG. 7 illustrates plasmid pSB1218.
FIG. 8 illustrates plasmid pSB1219.
FIG. 9 illustrates plasmid pSB1220.
FIG. 10 illustrates plasmid pSB139.
FIG. 11 illustrates plasmid pSB210.1.
FIG. 12 illustrates plasmid pSB32.
FIG. 13 illustrates plasmid pSB219.
FIG. 14 illustrates plasmid pSB458.
FIG. 15 illustrates plasmid pSB1221.
DETAILED DESCRIPTION OF THE INVENTION
The isolation and purification of a sporulation gene from B. thuringiensis HD1Mit9::Tn917 is described at length in Example 1. The nucleotide sequence is shown in SEQ. ID NO.1. The molecular weight of the putative protein product is calculated as 36.7 kDa. The sporulation gene is designated spoVBt1. The predicted amino acid sequence is shown under the nucleotide sequence. The putative ribosome binding site includes nucleotides 459 through 470 of SEQ ID NO.1 and the predicted transcription terminator stemloop includes nucleotides 1415 through 1424 and 1431 through 1440.
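The 36.7 kDa figure quoted above is the kind of value obtained by summing average amino acid residue masses over a predicted protein sequence and adding one water molecule. The sketch below is illustrative only and is not the patent's calculation; the example peptide is arbitrary, and the residue masses are standard average values rounded to two decimals.

```python
RESIDUE_MASS = {  # average residue masses in daltons (amino acid minus water)
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
    "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
    "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
}
WATER = 18.02  # one water is added back for the terminal H and OH

def protein_mw(seq: str) -> float:
    """Average molecular weight of a peptide: sum of residue masses plus one water."""
    return sum(RESIDUE_MASS[aa] for aa in seq.upper()) + WATER

# Hypothetical short peptide, just to show the arithmetic.
print(round(protein_mw("MKVLH"), 1))  # 626.8
```

Applied to the full ~330-residue sequence of SEQ ID NO.2, a calculation of this form would yield a value in the tens of kilodaltons, consistent with the 36.7 kDa stated in the text.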
The invention also includes those nucleotide sequences which encode the protein of SEQ. ID NO.2. It will be appreciated by those skilled in the art that an amino acid is frequently encoded by two or more codons, for example the amino acid leucine is encoded by the nucleic acid sequences of the following codons, TTA, TTG, CTT, CTA, CTG and CTC. Codons which code for the same amino acid are considered synonymous codons.
The invention still further embodies nucleotide sequences which encode sporulation proteins that are substantially similar to the protein depicted in SEQ. ID NO.2. The term substantially similar to the protein depicted in SEQ. ID NO.2 means that the proteins are stage V sporulation proteins and the degree of similarity of the amino acid sequences is preferred to be at least 80%, more preferred the degree of similarity is at least 85%, and most preferred the degree of similarity is 95% to SEQ. ID No:2. A nucleotide sequence encoding a stage V sporulation gene includes those genes involved in late events of the sporulation process. For example those genes involved in deposition of spore coat protein, development of germination processes and progressive acquisition of resistance to organic solvent, heat and lysozyme to name a few.
In the context of the present invention, two amino acids sequences with at least 85% similarity to each other have at least 85% identical or conservatively replaced amino acid residues.
For the purpose of the present invention conservative replacements may be made between amino acids within the following groups: (i) alanine, serine and threonine; (ii) glutamic acid and aspartic acid; (iii) arginine and lysine; (iv) asparagine and glutamine; (v) isoleucine, leucine, valine and methionine; and (vi) phenylalanine, tyrosine and tryptophan.
The invention still further includes nucleic acid sequences which are complementary to one which hybridizes under stringent conditions with any of the above disclosed nucleic acid sequences. A first nucleotide sequence which "hybridizes under stringent hybridization conditions" to a second nucleotide sequence cannot be substantially separated from the second sequence when the second sequence has been bound to a support and the first and second sequences have been incubated together at 65°C in 2× standard saline citrate containing 0.1% (w/v) sodium dodecyl sulphate, the thus-hybridized sequences then being washed at 50°C with 0.5× standard saline citrate containing 0.1% (w/v) sodium dodecyl sulphate.
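Hybridization in this sense is annealing between a probe and the complementary strand of its target. As a computational aside (an illustration of the complementarity concept, not part of the claimed invention), the reverse complement of a DNA string can be sketched as:

```python
# A→T, C→G and vice versa; the strand is then reversed because the two
# strands of a DNA duplex run antiparallel.
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("ATGCGT"))  # ACGCAT
```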
Specifically, the spoVBt1 gene depicted in SEQ. ID NO.1 is a gene that is associated with later stages of the sporulation process. In this regard the nucleotide sequences of the invention are referred to under the general heading of spoV DNA sequences and more specifically are identified as i) a spoVBt1 gene having the nucleotide sequence shown in SEQ. ID NO.1; ii) a nucleotide sequence encoding a Bacillus thuringiensis sporulation protein as depicted in SEQ. ID NO.2; iii) a nucleotide sequence encoding a Bacillus thuringiensis sporulation protein substantially similar to the protein depicted in SEQ. ID NO.2; iv) a nucleotide sequence which hybridizes to a complementary strand of i), ii) or iii) under stringent hybridization conditions; and v) a truncated nucleotide sequence of i), ii), iii) or iv) above wherein the truncated sequence includes at least 300 nucleotides and more preferably at least 500 nucleotides.
While a truncated spoV DNA sequence is most preferably at least 500 nucleotides, a particularly preferred sequence is base pair 488 to 1404, inclusive, of SEQ ID NO.1. This particular sequence is referred to herein as (t)spoVBt1-1.
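As a quick arithmetic check (an illustration, not part of the claims), the inclusive span of the preferred fragment (t)spoVBt1-1 can be computed directly:

```python
# Inclusive length of the (t)spoVBt1-1 fragment, base pairs 488 through
# 1404 of SEQ ID NO.1.
start, end = 488, 1404
length = end - start + 1   # inclusive span
print(length)              # 917, above the 500-nucleotide minimum
```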
Accordingly, the invention provides a DNA segment comprising a spoV DNA sequence as defined above or a mutated spoV DNA sequence as defined herein below operably linked to other DNA sequences wherein the DNA sequences encode exogenous or foreign proteins. For example, in a preferred embodiment the DNA segment may include, in addition to the spoV DNA sequence, insecticidal crystal protein encoding DNA sequences. Preferably, the DNA segment will include one to three insecticidal protein encoding genes. Such sequences include but are not limited to cryIA(a), cryIA(b), cryIA(c), cryIB, cryIC, cryIC(b), cryID, cryIE, cryIF, cryIG, cryIH, cryIIA, cryIIB, cryIIIA, cryIIIB, cryIIIC, cryIVA, cryIVB, cryIVC, cryIVD, cryV genes, mixtures thereof and sequences constructed from parts of these cry genes. In particular, the crystal protein encoding DNA sequences include cryIA(b), cryIA(c), cryIC, cryIIA, and cryIE. Sequences constructed from parts of these genes include hybrid crystal protein encoding genes wherein domains of two or three different crystal toxin encoding genes are included. These hybrid genes are known in the art and may include, for example, domain I and domain II of one crystal encoding gene and domain III of another crystal toxin encoding gene. In particular, domain III of cryIC is preferred. The hybrid G27 is one such example wherein the gene includes domains I and II of cryIE and domain III of cryIC. The protein G27 is further described in Bosch et al., Biotechnology 12:915-918 (1994), the contents of which are hereby incorporated by reference. However, one skilled in the art can envisage various combinations of toxins comprising a hybrid toxin encoding gene and these combinations are incorporated into the invention. The terms foreign and exogenous protein or gene are used in the art to denote a gene which has been transferred to a host cell from a source other than the host cell.
According to this invention, the most preferred hosts include B. thuringiensis subspecies, and particularly subspecies thuringiensis, kurstaki, dendrolimus, galleriae, entomocidus, aizawai, morrisoni, tolworthi and israelensis, and most particularly B. thuringiensis kurstaki.
The DNA segment may further comprise an origin of replication for a gram-negative bacterium. Any origin of replication capable of functioning in one or more gram-negative bacterial species or strains of Enterobacter, Nitrosomonas, Pseudomonas, Serratia, Rhizobium, and Azotobacter genera among others may be used. After cloning the DNA segment in a gram-negative bacterium such as E. coli and transforming a Bacillus thuringiensis, the only remaining exogenous insecticidal DNA sequences will be those integrated into the host's chromosome. Since the gram-negative origin of replication will not function in a Bacillus thuringiensis host, a host transformed with the DNA segment will neither replicate nor express the crystal toxin encoding genes unless the DNA segment becomes integrated into the host chromosome.
The DNA segment may further comprise other nucleic acid sequences including selectable markers. In general, selectable markers for drug resistance, chemical resistance, amino acid auxotrophy or prototrophy or other phenotypic variations useful in the selection or detection of mutant or recombinant organisms can be used.
Other sequences may also be incorporated into the DNA segment including but not limited to regulatory sequences capable of directing transcription and translation of the crystal toxin encoding sequences within the host cell, such as promoters, operators, repressors, enhancer sequences, ribosome binding sites, transcription initiation and termination sites and the like. Specific examples include the CryIC promoter, CryIA(c) terminator and ermC promoter. Additionally, sequences adjacent to the claimed spoVBt1 gene may be included. These sequences comprise promoter sequences, downstream enhancer sequences and the like.
The spoV DNA sequence suitable for use in the invention is substantially homologous to a nucleotide fragment of the B. thuringiensis chromosome. This fragment will generally be part of a sporulation gene and it serves as an integration site for the DNA segment of the invention into the host DNA by homologous recombination thereof with the bacterial DNA. The DNA segment of the invention may be provided as either a circular, closed DNA segment wherein homologous recombination occurs by means of a single cross over event or as a linear DNA segment wherein homologous recombination occurs by means of a double-cross over event. Thus the substantially homologous DNA sequences may be as one or two flanking DNA sequences. The spoV DNA sequences are homologous to a fragment of the bacterial chromosome in the range of about 15-1600 nucleotide bases and more preferably 200-1200. One skilled in the art will also recognize that at the integration site, multiple insertions can occur.
The term homologous as used herein in the context of nucleotide sequences means the degree of similarity between the sequences in different nucleotide molecules. Therefore two nucleic acid molecules which are 100% homologous have identical sequences of nucleotides. A substantially homologous nucleotide sequence or fragment is one wherein the sequences of the fragment are at least 90% and preferably 95% identical. Homologous recombination is defined as general recombination which occurs between two sequences which have fairly extensive regions of homology; the sequences may be in different molecules.
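The identity thresholds in this definition can be scored mechanically. The helper below is an illustration only (it assumes the two sequences are already aligned to equal length, which is not stated in the specification):

```python
def percent_identity(seq1: str, seq2: str) -> float:
    """Percent identity between two equal-length, pre-aligned sequences."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

def is_substantially_homologous(seq1: str, seq2: str,
                                threshold: float = 90.0) -> bool:
    # The specification requires at least 90% (preferably 95%) identity.
    return percent_identity(seq1, seq2) >= threshold
```

For example, two 10-nucleotide fragments differing at a single position are 90% identical and therefore substantially homologous at the 90% threshold, but not at the preferred 95% threshold.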
The DNA segment of the invention may be carried on a phage or a vector; a preferred vector is a plasmid, and in particular the plasmids disclosed herein and in PCT International application WO 9425611, published Mar. 19, 1995, which is hereby incorporated by reference in its entirety. Additionally, the DNA segment may be carried on a hybrid shuttle vector for gram-positive bacteria. Appropriate vectors include any vector capable of self-replication in gram-negative bacteria, yeasts or any monocellular host in addition to gram-positive bacteria. Such shuttle vectors are known in the art.
Transformation, the process in which exogenous DNA is taken up by a recipient B. thuringiensis, may be conducted by techniques known in the art and includes transfection, electroporation, transduction, or conjugation. Particularly preferred methods include electroporation and transduction. Host isolation may be conducted by selecting for the selectable marker on the transformed host. Transformed hosts may then be amplified by known techniques.
Therefore a preferred embodiment of the present invention is a method of preparing a transformed Bacillus thuringiensis host expressing exogenous insecticidal crystal proteins comprising
a) obtaining a DNA segment comprising
1) an origin of replication from a gram negative bacterium;
2) a spoV DNA sequence selected from the group consisting of
i) a spoVBt1 gene having the nucleotide sequence shown in SEQ. ID NO.1;
ii) a nucleotide sequence encoding the protein depicted in SEQ. ID NO. 2,
iii) a nucleotide sequence encoding a Bacillus thuringiensis sporulation protein substantially similar to the protein depicted in SEQ. ID NO.2;
iv) a nucleotide sequence which hybridizes to a complementary strand of i), ii) or iii) under stringent hybridization conditions; and
v) a truncated nucleotide sequence of i), ii), iii) or iv) above wherein the truncated sequence includes at least 300 nucleotides; and
3) a DNA sequence encoding one to three insecticidal crystal proteins;
b) introducing said segment into a Bacillus thuringiensis host;
c) allowing homologous recombination between the DNA segment and a substantially homologous nucleotide fragment of a sporulation gene in the host Bacillus thuringiensis chromosome wherein the DNA segment is stably integrated into the Bacillus thuringiensis host chromosome; and
d) isolating stably transformed Bacillus thuringiensis transformants wherein said stable transformed Bacillus thuringiensis is capable of producing the exogenous insecticidal crystal proteins.
Also included in the invention is the transformed Bacillus host and progeny thereof formed by amplification of said transformant.
Mutation of a sporulation gene or genes may cause the formation of mutant spores. Mutant spores as used herein include spores from oligosporogenic and asporogenic strains. Asporogenic B. thuringiensis strains are those wherein spores are not formed because the strain is not capable of forming spores. Alternatively, oligosporogenic B. thuringiensis strains are those wherein spores are formed however, the spores may not be viable for a variety of reasons or the spores are viable but they are sensitive to heat, cold or organic solvents and rendered nonviable upon exposure thereto. Frequently, oligosporogenic B. thuringiensis produce what is known in the art as phase grey spores.
Mutation of a gene may occur in a number of ways well known to those in the art and include chemical mutagenesis, point mutations, deletions, insertional mutations, including use of transposons, and the like.
One embodiment of the present invention is a method of using a DNA segment of the invention in a manner to interrupt the chromosomal DNA encoding for sporulation genes. In this respect the DNA segment includes a mutated spoV DNA sequence.
A mutated spoV DNA sequence is a spoV DNA sequence of the invention wherein the sequence is altered with point mutations, deletions, or inserts. Point mutations are generally understood to mean any mutation involving a single nucleotide including the gain or loss of a nucleotide resulting in a frame shift mutation as well as transition and transversion mutations. The point mutations can occur at various codons. Preferred point mutations are used to create stop codons and may be used to destroy the ribosome binding site and methionine start codons. These stop codons can occur anywhere throughout the gene, however, they do not interrupt the process of homologous recombination between the DNA segment according to the invention and the substantially homologous chromosomal sporulation gene locus.
A mutated spoV DNA sequence may include 1 to about 25 stop codons, although a greater or lesser number can be used. A specific mutated spoV DNA sequence of the invention is part of the spoVBt1 sequence of SEQ. ID NO.1 including nucleotide sequence 465 to 1256 inclusive wherein the nucleotides illustrated in Table 1 have been altered. In general, stop codons should be engineered before nucleotide 1404 of SEQ ID No.1, or a related spoV DNA sequence, to prevent reversion to wild-type spores in the recipient host cells. Additionally, the peptide encoded by a mutated spoV DNA sequence should be less than 306 amino acids. Most preferably, stop codons should be engineered before nucleotide 1256 of SEQ ID No.1 or a related spoV DNA sequence.
TABLE 1
______________________________________
Nucleotide #  Original nucleotide  Altered to
______________________________________
 465          G                    T
 475          T                    A
 487          T                    A
 492          A                    T
 873          G                    T
 881          C                    A
1243          T                    A
1254          A                    T
______________________________________
This specific mutated spoV DNA sequence is designated (m)spoVBt1-8.
A mutated spoV DNA sequence may include the point mutations described above or a sequence substantially homologous to the non-mutated codons of sequence 465 to 1254.
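Applied programmatically, the substitutions of Table 1 amount to replacing single bases at fixed 1-based positions. The sketch below is illustrative only; since SEQ ID NO.1 itself is not reproduced here, the demonstration uses a placeholder string of sufficient length:

```python
# The eight substitutions of Table 1, keyed by 1-based position in
# SEQ ID NO.1.
TABLE_1 = {465: "T", 475: "A", 487: "A", 492: "T",
           873: "T", 881: "A", 1243: "A", 1254: "T"}

def apply_point_mutations(seq: str, mutations: dict) -> str:
    """Return seq with each 1-based position replaced by its new base."""
    bases = list(seq)
    for pos, new_base in mutations.items():
        bases[pos - 1] = new_base   # convert 1-based position to 0-based index
    return "".join(bases)

# All Table 1 positions fall before nucleotide 1404, consistent with the
# requirement that stop codons be engineered before that point.
assert all(pos < 1404 for pos in TABLE_1)
```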
The mutated spoV DNA sequence also includes a spoV DNA sequence which has exogenous inserts of nucleic acid sequences; for example, the inserts may comprise 2 to 15 nucleotides, although more nucleotides could be used.
Therefore another preferred embodiment of the present invention is a method of preparing a transformed oligosporogenic or asporogenic Bacillus thuringiensis host expressing one to three exogenous insecticidal crystal proteins comprising
a) obtaining a DNA segment comprising
1) an origin of replication from a gram negative bacterium;
2) a mutated spoV DNA sequence selected from the group consisting of
i) a spoVBt1 gene having the nucleotide sequence shown in SEQ. ID NO.1;
ii) a nucleotide sequence encoding the protein depicted in SEQ. ID NO. 2;
iii) a nucleotide sequence encoding a sporulation protein substantially similar to the protein depicted in SEQ. ID NO:2;
iv) a nucleotide sequence which hybridizes to a complementary strand of i), ii) or iii) under stringent hybridization conditions; and
v) a truncated nucleotide sequence of i), ii), iii) or iv) above wherein said truncated sequence includes at least 300 nucleotides;
wherein the nucleotide sequence of i), ii), iii), iv) or v) above has one or more point mutations, inserts or deletions; and
3) a DNA sequence encoding one to three insecticidal crystal toxin proteins;
b) introducing said segment into a sporulating Bacillus thuringiensis host;
c) allowing homologous recombination between the DNA segment and a substantially homologous sporulation gene fragment of the host Bacillus thuringiensis chromosome wherein the DNA segment including the mutated spoV DNA sequence is stably integrated into the Bacillus thuringiensis host chromosome; and
d) isolating stably transformed Bacillus thuringiensis transformants wherein said stable transformed Bacillus thuringiensis is capable of producing the exogenous crystal toxin and is oligosporogenic or asporogenic.
The method further comprises employing the transformed oligosporogenic or asporogenic host.
Transduction is a virus-mediated transfer of host DNA from one host cell (a donor) to another cell (a recipient). When a phage replicates in a donor cell, a few progeny virions encapsidate pieces of the host DNA in addition to phage DNA. These virions can adsorb to a new host cell and introduce their DNA in the usual way. In this invention, the host strain which is transformed with a DNA segment of the invention can be further transduced with a phage. The host (donor) DNA which is incorporated into the phage undergoes recombination with a homologous region of a recipient's chromosome so that the genes can be stably inherited. This is generally referred to by those skilled in the art as generalized transduction. Phages are known by those skilled in the art and include all phages capable of infecting B. thuringiensis strains, for example CP-51 and CP-51ts45 and all derivations thereof. In the present invention, the preferred recipient cell is from a strain of Bt kurstaki.
Therefore, in another aspect the invention is a method of transducing a transformed Bacillus thuringiensis comprising
a) exposing a Bacillus thuringiensis host of the invention to a transducing phage;
b) allowing the phage to replicate in the host wherein one to three exogenous crystal protein encoding genes integrated into the host chromosome are incorporated into the phage; and
c) introducing the exogenous crystal protein encoding sequences from the phage into a recipient Bacillus thuringiensis strain wherein said introduced exogenous crystal protein encoding sequences are stably incorporated into the recipient Bacillus thuringiensis chromosome and expressed in said recipient.
The recipient Bacillus thuringiensis may be rendered oligosporogenic or asporogenic depending on the spoV DNA sequence used in the DNA segment introduced into the transformed host. A most preferred locus for chromosomal integration is the spoVBt1 nucleotide fragment of the recipient Bacillus thuringiensis strain. However, other sporulation gene fragments may equally serve as a chromosomal locus. Preferred sporulation gene fragments include those substantially homologous with a spoV DNA sequence.
The stable incorporation of the DNA segment according to the invention into a host chromosome is defined as the maintenance of the DNA segment within the host chromosome through many generations.
The invention further relates to pesticidal compositions wherein the transformed Bacillus thuringiensis, or protein derived from said Bacillus, is the active ingredient. The compositions of the invention include an asporogenic or oligosporogenic Bacillus thuringiensis encoding one or more insecticidal Cry proteins and are applied in an insecticidally effective amount. An insecticidally effective amount is defined as the amount of an active ingredient which causes substantial mortality of an insect to be controlled, and the amount will vary depending on such factors as the specific Cry protein, the specific insects to be controlled, the specific plant to be treated, and the method of applying the insecticidally active compositions.
The compositions of the invention may contain about 10^6 to about 10^13 microorganisms per gram of carrier. The pesticidal concentration will vary depending on the carrier of the particular formulation. The compositions contain from 0.1 to 99% of the transformed host or progeny thereof and 0 to 99.9% of a solid or liquid carrier.
The insecticidal compositions of the invention may be formulated with an agriculturally acceptable carrier. The formulated compositions may be in the form of dusts, granular material, suspensions in oil or water, emulsions or wettable powders. Suitable agricultural carriers may be solids or liquids and are well known to those in the art. Agriculturally acceptable carriers as used herein include all adjuvants such as wetting agents, spreaders, emulsifiers, dispersing agents, foaming agents, foam suppressants, penetrants, surfactants, solvents, solubilizers, buffering agents, stickers, etc., that are ordinarily used in insecticide formulation technology. These are well known to those skilled in the art of insecticide formulation.
The formulations comprising the asporogenic or oligosporogenic Bacillus thuringiensis strains and one or more liquid or solid adjuvants are prepared in a manner known to those in the art.
The compositions of this invention are applied to the locus where control is desired, typically onto the foliage of a plant to be protected, by conventional methods. These application procedures are well known in the art. The formulations of the present composition may be applied by spreading about 10^8 to about 10^16 spores per acre. With oligosporogenic or asporogenic compositions the spores may be present but are either immature or non-viable. The compositions are best applied as sprays to plants with subsequent reapplication. Plants to be protected within the scope of the invention include but are not limited to cereals, fruits, leguminous plants, oil plants, vegetable plants, deciduous and conifer trees, beet plants and ornamentals. The compositions may be effective against pests of the orders Coleoptera, Lepidoptera and Diptera.
The methods of the present invention make use of techniques of genetic engineering and molecular cloning that are known to those skilled in the art using commercially available equipment and are included in Maniatis, et al. Molecular Cloning: A Laboratory Manual, Cold Spring Harbor Laboratory (1991).
The present invention will now be described in more detail with reference to the following specific, non-limiting examples.
EXAMPLES
Example 1
Identification and Cloning of B. thuringiensis spoVBt1 gene:
A. Preparation of transposon Tn917-bearing plasmid pLTV1.
Plasmid pLTV1 is isolated from the B. subtilis strain PY1177. The strain is grown overnight (18-20 hours) on TBAB plates (3.3% Difco Tryptose Blood Agar Base) containing 0.5% glucose and Tet^10 (10 µg/ml). Cells from single colonies are used to inoculate 10 ml LB (1% Bacto Tryptone, 0.5% Bacto Yeast Extract, 0.5% NaCl, pH 7.0) containing 0.5% glucose and Tet^10 (10 µg/ml). The cells are incubated for 5 hours with shaking (300 rpm) at 37° C., centrifuged at 18,500×g and 4° C. for 10 minutes, then washed once in 10 ml of SET buffer [20% sucrose, 50 mM disodium ethylenediaminetetraacetic acid (EDTA), 50 mM Tris-HCl pH 8.0]. The pellet is resuspended in 500 µl SET solution containing 2 mg/ml of lysozyme and 0.4 mg/ml RNase A (Boehringer Mannheim Biochemicals, Indianapolis, Ind.). The cell suspension is incubated at 37° C. for 10 minutes and 1 ml of the lysis mixture [1% sodium dodecyl sulfate (SDS), 200 mM NaOH] is added, followed by 725 µl of prechilled neutralization buffer (1.5M potassium acetate pH 4.8). The mixture is then incubated on ice for 20 minutes; centrifuged at 18,500×g and 4° C. for 10 minutes; and the supernatant is transferred to a fresh tube. Plasmid DNA is then isolated using a Mini Qiagen Plasmid Kit (Qiagen Inc., Chatsworth, Calif.).
B. Transfer of pLTV1 to E. coli GM2163.
Unless otherwise indicated, E. coli and B. thuringiensis strains are grown at 37° C. and 30° C., respectively.
Plasmid pLTV1 requires conditioning in a dcm(-) host cell prior to transformation of B. thuringiensis. This is accomplished by transfer of pLTV1 into dcm(-) E. coli GM2163 (New England Biolabs, Inc., Beverly, Mass.). Competent E. coli GM2163 cells are prepared by inoculating a single colony into 30 ml SOB medium (2% Bacto Tryptone, 0.5% Bacto Yeast Extract, 0.06% NaCl, 0.05% KCl, 10 mM MgCl2, 10 mM MgSO4). The cells are incubated overnight at 300 rpm and 37° C. Two hundred ml of SOB medium, in a 2 L flask, is inoculated with 8 ml of the overnight culture, and incubated at 37° C. and 300 rpm to an OD550 of 0.3. The culture is placed on ice for 15 min., centrifuged at 4,000×g and 4° C. for 5 minutes, and the pellet gently resuspended in 64 ml of transformation buffer 1 (1.2% RbCl, 0.99% MnCl2·4H2O, 30 mM potassium acetate pH 5.8, 0.25% CaCl2·2H2O, 15% glycerol). After a 15 minute incubation on ice, the cells are again centrifuged at 4,000×g and finally resuspended in 16 ml of transformation buffer 2 (10 mM MOPS pH 7.0, 0.12% RbCl, 1.1% CaCl2·2H2O, 15% glycerol). Approximately 50 µl of competent cells and 4 µl DNA are mixed in a 1.5 ml Eppendorf tube and incubated on ice for 1 minute. The mixture of cells and DNA (pLTV1 isolated from B. subtilis strain PY1177) is transferred to a prechilled 0.2 cm gap electrode cuvette and pulsed using the high voltage Gene Pulser electroporation apparatus. The electroporation conditions were 25 µF, 2.5 kV, and 200 Ω. Cells are immediately transferred to 1 ml SOC medium (2% Bacto Tryptone, 0.5% Bacto Yeast Extract, 0.06% NaCl, 0.05% KCl, 20 mM glucose) and incubated at 37° C. and 225 rpm for 1 hr. The cells are plated on LB agar containing Amp^75 (75 µg/ml) and incubated overnight at 37° C. Plasmid pLTV1 is isolated from transformed E. coli GM2163 cells using the Mini Qiagen Plasmid Kit.
C. Transfer of pLTV1 to B. thuringiensis Cry^-B.
The B. thuringiensis strain HD1Mit9 was used for transposon mutagenesis. This strain was obtained from Dr. Arthur I. Aronson at Purdue University. It is an acrystalliferous derivative of B. thuringiensis subspecies kurstaki HD1 and contains only one 4-MDa plasmid.
Plasmid pLTV1 isolated from E. coli is unstable in HD1Mit9. As a result, the plasmid DNA from GM2163 is transformed into Cry^-B, a plasmid-cured crystal-minus strain of B. thuringiensis (Stahly, D. P., Dingmann, D. W., Bulla, L. A. and Aronson, A. I., Biochem. Biophys. Res. Com. 84:581-588, 1978). To prepare competent cells, Cry^-B is grown overnight on an LB plate. Individual colonies are used to inoculate 100 ml of BHIS medium (3.7% Brain Heart Infusion, 0.5M sucrose) in a 1 L flask. The culture is incubated at 37° C. with shaking, until an OD600 of 0.2-0.3. The cells are transferred to a prechilled 250 ml bottle and centrifuged for 7 minutes at 6,500×g and 4° C. The pellet is washed once in 100 ml and twice in 10 ml of ice cold HEPES (N-[2-hydroxyethyl]piperazine-N'-[2-ethanesulfonic acid])/sucrose wash solution (5 mM HEPES, pH 7.0, 0.5M sucrose). Cells are then resuspended in a solution containing 10 ml of HEPES/sucrose solution and 2.5 ml of 50% glycerol. Competent B. thuringiensis cells (200 µl) are mixed with 10 µl of plasmid DNA pLTV1 (1-5 µg) in a prechilled 0.2 cm gap electrode Gene Pulser Cuvette, and exposed to an electrical current in the Gene Pulser electroporation apparatus (Bio-Rad Laboratories, Richmond, Calif.). The parameters for the electroporation of Cry^-B are 1.05 kV, 25 µF, Ω = ∞. Following electroporation the cells are immediately transferred to 5 ml BHIS in a 125 ml flask and grown at 30° C. and 250 rpm. After three hours of growth, the cells are transferred to LB agar plates containing Tet^10. The plates are incubated overnight at 30° C. and the transformants, designated Cry^-B(pLTV1), are restreaked onto fresh LB Tet^10 plates.
D. Isolation of Plasmid pLTV1 from B. thuringiensis Strain Cry^-B.
Cry^-B(pLTV1) is streaked onto LB Tet^10 plates. The culture is grown overnight. A single colony is restreaked onto an SA plate (1× Spizizen's salts, 1% casamino acids, 0.5% glucose, 0.005 mM MnSO4·H2O, 1.5% Bacto agar) and incubated for 3-4 hours at 37° C. The 1× Spizizen's salts contain 0.2% (NH4)2SO4, 1.4% K2HPO4, 0.6% KH2PO4, 0.1% sodium citrate·2H2O, and 0.02% MgSO4·7H2O (Anagnostopoulos and Spizizen, 1961). The grown cells are removed from the plate, resuspended in 100 µl TESL (100 mM Tris-HCl pH 8.0, 10 mM EDTA, 20% sucrose, 2 mg/ml lysozyme) and incubated at 37° C. for 30-60 minutes. Two hundred microliters of lysis solution (200 mM NaOH, 1% SDS) is added to the tube followed by a 5 minute incubation at room temperature. After addition of 150 µl ice-cold 3M potassium acetate pH 4.8, the suspension is microcentrifuged for 20 minutes at 18,500×g and 4° C. The supernatant is transferred to a fresh tube and mixed with 1 ml of 100% ethanol. This suspension is left at -20° C. for 1 hour and centrifuged at 18,500×g and 4° C. for 30 minutes. The plasmid DNA is washed with 70% ethanol, dried under vacuum, and resuspended in 20 µl of TE.
E. Transfer of pLTV1 to B. thuringiensis Strain HD1Mit9.
Plasmid pLTV1 isolated from Cry^-B(pLTV1) is introduced into strain HD1Mit9. B. thuringiensis HD1Mit9 cells (Dr. Arthur I. Aronson, Purdue University) are made competent and transformed using the same procedure described above for Cry^-B. The plasmid is isolated from HD1Mit9(pLTV1) using the same protocol used for Cry^-B(pLTV1). However, the parameters for the electroporation of HD1Mit9 are 1.2 kV, 3 µF, Ω = ∞. The presence of plasmid pLTV1 in the HD1Mit9 transformants was confirmed by restriction enzyme digestion and polymerase chain reaction (PCR). The results of each experiment are analyzed by agarose gel electrophoresis. The pLTV1 DNA (1 µg), isolated from HD1Mit9, is digested with EcoRI (Pharmacia Biotechnology, Piscataway, N.J.) under the conditions described by the manufacturer. Primers LacNHS1, SEQ ID NO.3, and LacNHS2, SEQ ID NO.4, used for the PCR reactions, are synthesized on a PCR-Mate DNA synthesizer model 391 (Applied Biosystems, Foster City, Calif.).
______________________________________
Primer    Sequence (5'-3')                  SEQ ID NO.
______________________________________
LacNHS1   GGCTTTCGCTACCTGGAGAGACGCGCCCGC    3
LacNHS2   CCAGACCAACTGGTAATGGTAGCGACCGGC    4
______________________________________
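Since both primer sequences are given in full, simple properties such as G+C content can be checked directly. The helper below is illustrative only and is not part of the protocol:

```python
def gc_content(primer: str) -> float:
    """Percent G+C of a primer sequence."""
    gc = sum(base in "GC" for base in primer.upper())
    return 100.0 * gc / len(primer)

lac_nhs1 = "GGCTTTCGCTACCTGGAGAGACGCGCCCGC"  # SEQ ID NO.3
lac_nhs2 = "CCAGACCAACTGGTAATGGTAGCGACCGGC"  # SEQ ID NO.4
print(gc_content(lac_nhs1), gc_content(lac_nhs2))  # 70.0 60.0
```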
A single B. thuringiensis colony is resuspended in 15 µl sterile water, placed in a boiling water bath for 10 minutes, and then centrifuged for 5 minutes. Each PCR reaction contained 2 µl of the supernatant from the boiled cells, 1× PCR buffer [100 mM Tris-HCl pH 8.3, 500 mM KCl, 15 mM MgCl2, 0.1% (wt/vol) gelatin], 200 µM deoxyribonucleoside triphosphates (dNTPs; 1.25 mM of dATP, dCTP, dGTP, and dTTP), 1 µM LacNHS1 primer, 1 µM LacNHS2 primer, and 2.5 units AmpliTaq DNA polymerase (Perkin-Elmer Cetus, Norwalk, Conn.). The reactions are run in a DNA Thermal Cycler (Perkin-Elmer Cetus) for 25 cycles. Each cycle consists of 1 minute at 94° C. (denaturation), 2 minutes at 55° C. (hybridization), and 3 minutes at 72° C. (extension). The PCR products are analyzed by agarose gel electrophoresis. Synthesis of a 1.5 kb PCR-generated fragment indicates that the clone carried the pLTV1 plasmid.
F. Preparation of Tn917 Insertion Libraries.
To prepare the library, HD1Mit9(pLTV1) is grown on an LB plate containing Tet^10. Two to three colonies are used to inoculate 10 ml Penassay broth (1.75% Difco Antibiotic Medium 3) containing Ery^0.15. After 90 minutes of growth at 30° C. and 300 rpm, the concentrations of antibiotics in the culture are increased to Ery^1 and Lm^25. When the culture reaches an OD595 of 0.5, a 100 µl portion is added to 10 ml fresh Penassay broth containing Ery^1 and Lm^25. After 16 hours of growth at 39° C. and 300 rpm, the culture is diluted 1:15 with 10 ml Penassay broth containing Ery^1 and Lm^25 and grown with moderate shaking at 39° C. until an OD595 of 2.0. The cells are pelleted by centrifugation (4,000×g, 4° C.), resuspended in 500 µl of Penassay broth containing Ery^1 and 30% glycerol, and frozen on dry ice. Dilutions (10^-3 and 10^-4) of the library are plated onto Penassay Ery^5 plates and incubated at 39° C. for 16 hours. Individual colonies are patched onto LB plates each containing a different antibiotic (Tet^10, Lm^25, or Ery^5) and grown overnight at 30° C. Those colonies that grow only on LB Lm^25 and LB Ery^5 but not LB Tet^10 contained Tn917 insertions. These colonies are grown on CYS plates (1% Casitone, 0.5% glucose, 0.2% Bacto Yeast Extract, 0.1% KH2PO4, 0.5 mM MgCl2, 0.05 mM MnCl2, 0.05 mM ZnSO4, 0.05 mM FeCl2, 0.2 mM CaCl2, 1.5% Bacto agar) and examined microscopically for their ability to sporulate. Over 1×10^4 colonies are screened for transposon insertions. Sixty-three colonies containing a chromosomal insertion are obtained. Only three colonies (5% of the insertion mutants) are sporulation-defective. The sporulation mutants are further analyzed.
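The 5% figure quoted above follows directly from the reported counts (3 sporulation-defective clones out of 63 chromosomal insertions), as this small check shows:

```python
# Screening yield reported above: 63 chromosomal insertions obtained,
# 3 of them sporulation-defective.
insertions = 63
spo_defective = 3
fraction = 100 * spo_defective / insertions
print(round(fraction, 1))   # 4.8, i.e. roughly the 5% quoted in the text
```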
To identify auxotrophs and citric acid cycle-defective colonies, the B. thuringiensis sporulation mutants are grown on glucose minimal and lactate minimal plates for 1-2 days, and the growth is compared to that of HD1Mit9. Glucose minimal agar is made of 1× salt solution [per liter: 5.6 g K2HPO4, 2.4 g KH2PO4, 1.2 g (NH4)2SO4, 0.4 g sodium citrate, pH 7.0], 0.005 mM FeCl2, 0.25 mM MgCl2, 0.96% glucose, 0.0002 mM MnCl2, 0.00012% thiamine-B1, and 1.5% Bacto agar. Lactate minimal medium contains 0.2% lactate and 25 mM glutamate instead of glucose and sodium citrate. The three sporulation mutants grow well on glucose and lactate minimal media, indicating that they are not auxotrophs or defective in citric acid cycle enzymes.
G. Isolation of DNA from the B. thuringiensis Sporulation Mutant.
The sporulation mutant strain, HD1Mit9::Tn917, is grown in 2 ml LB overnight at 200 rpm. The overnight culture is used to inoculate 100 ml LB (1% inoculation). The cells are grown at 300 rpm to an OD600 of 0.7-1.0, collected by centrifugation (3,840×g, 5 minutes, 4° C.), and resuspended in 5 ml TES (25 mM Tris-HCl pH 8.0, 25 mM EDTA, 25% sucrose). The cell suspension is mixed with 0.55 ml of 10 mg/ml lysozyme in TES solution and incubated at 37° C. for 60 minutes. SDS is added to 2% weight-to-volume, followed by a 15 minute incubation at 50° C. The suspension is mixed with 1.52 ml of 5M NaCl and incubated at 50° C. for 5 minutes. The sample is incubated at 0° C. for 1 hour, centrifuged for 10 minutes at 18,500×g and 4° C., and the supernatant is transferred to a new tube. To purify the DNA from the protein, the supernatant is treated with phenol and chloroform. First, it is mixed with 5 ml TE-equilibrated phenol followed by a 15 minute incubation at 50° C., then with 5 ml phenol/chloroform (1:1), and finally with 5 ml chloroform. The aqueous phase is separated from the organic phase by centrifugation (4,000×g, 5 minutes) at each step. The DNA is precipitated by the addition of 4.6 ml isopropanol and centrifugation (18,500×g, 30 minutes, 4° C.). The air-dried DNA pellet was dissolved in 2 ml of TE and stored at -20° C.
H. Cloning the Chromosomal DNA Adjacent to Transposon Insertions.
B. thuringiensis chromosomal DNA is cloned and maintained in E. coli DH5α (Gibco BRL, Grand Island, N.Y.) or HB101 (New England Biolabs, Inc.) strains. Restriction enzymes, T4 DNA Ligase, and reaction buffers are purchased from New England Biolabs, Inc., Pharmacia Biotechnology (Piscataway, N.J.), or Gibco BRL. Competent E. coli cells are prepared as follows. The strains are incubated on LB agar plates for 16-18 hours. Several colonies are used to inoculate 100 ml LB in 1 L Erlenmeyer flasks. Liquid cultures are grown at 37° C. and 300 rpm. When the cultures reach an OD.sub.600 of 0.2, they are placed on ice for 15 minutes and then centrifuged for 10 minutes at 5,500×g and 4° C. The cells are resuspended in 50 ml (1/2 volume) 0.05M CaCl.sub.2, incubated on ice for 20 minutes, and centrifuged as described above. The competent cells are resuspended in 5 ml (1/20 volume) 50 mM CaCl.sub.2 containing 20% glycerol and placed into microfuge tubes (100 μl per tube).
B. thuringiensis chromosomal DNA (11-14 μg) is digested with a restriction enzyme as follows. Reactions contain 30-50 units of enzyme in a 1× reaction buffer and are incubated overnight at 37° C. The 10× reaction buffers include REact 2 (500 mM Tris-HCl pH 8.0, 100 mM MgCl.sub.2, 500 mM NaCl), REact 3 (500 mM Tris-HCl pH 8.0, 100 mM MgCl.sub.2, 1M NaCl), and NEB 3 [500 mM Tris-HCl pH 8.0, 100 mM MgCl.sub.2, 10 mM dithiothreitol (DTT), 1M NaCl]. The REact 2, REact 3, and NEB 3 buffers are used for the XbaI, EcoRI, and BspEI enzymes, respectively. The enzymes are inactivated by heating at 70° C. for 40 minutes. Digested DNA is ligated in a 100 μl volume using 16 units of T4 DNA Ligase and 20 μl of 5× Ligase Reaction Buffer [50 mM MgCl.sub.2, 25% (wt/vol) polyethylene glycol 8000, 5 mM ATP, 5 mM DTT, 250 mM Tris-HCl pH 7.6] and incubated overnight at 16° C. The ligated mixtures are used to transform 100 μl E. coli HB101 competent cells. Ampicillin-resistant transformed colonies are isolated after 16 hours of growth on LB agar containing Amp.sup.75. Plasmid DNA is extracted from the HB101 transformants, transferred to E. coli DH5α or GM2163, and then analyzed by restriction endonuclease digestion.
I. Determination of the Nucleotide Sequence of spoVBt1.
Both strands of the cloned gene are sequenced using the primers listed in Table 2, which are synthesized on a PCR-Mate DNA synthesizer model 391. The positions of the oligonucleotides are shown in FIG. 2. A Sequenase Version 2.0 DNA Sequencing Kit (United States Biochemical, Cleveland, Ohio) is used to sequence the cloned B. thuringiensis chromosomal fragments as described by the manufacturer. The sequence of the spoVBt1 gene is shown in SEQ ID NO.1. The molecular mass of the spoVBt1 gene product is 36.7 kDa.
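As a rough cross-check of the 36.7 kDa figure above, a protein's approximate molecular mass can be estimated from its amino-acid sequence using average residue masses. The sketch below is illustrative only; the mass table and function name are our own, not part of the patent.

```python
# Average residue masses in daltons (one water per residue already removed).
AVG_RESIDUE_MASS = {
    "A": 71.08, "R": 156.19, "N": 114.10, "D": 115.09, "C": 103.14,
    "E": 129.12, "Q": 128.13, "G": 57.05, "H": 137.14, "I": 113.16,
    "L": 113.16, "K": 128.17, "M": 131.19, "F": 147.18, "P": 97.12,
    "S": 87.08, "T": 101.10, "W": 186.21, "Y": 163.18, "V": 99.13,
}
WATER_DA = 18.02  # one water molecule per intact peptide chain

def protein_mass_kda(seq: str) -> float:
    """Estimate protein molecular mass in kDa from a one-letter sequence."""
    return (sum(AVG_RESIDUE_MASS[aa] for aa in seq) + WATER_DA) / 1000.0
```

A 318-residue protein (the length of SEQ ID NO:2) averages roughly 110 Da per residue, so a value in the mid-30 kDa range is consistent with the mass stated above.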
TABLE 2
______________________________________
Primers used for sequencing the B. thuringiensis spoVBt1 gene
Primer Sequence (5'-3') SEQ ID NO.
______________________________________
MS1 GAGAGATGTCACCGTCAAG 5
MS2 CCCTGTACCTGGATTCCC 6
MS3 GGGAATCCAGGTACAGGG 7
MS4 CCATCCCAACAAGCTTCCC 8
MS5 GGGAAGCTTGTTGGGATGG 9
MS6 CCTGTCCCCCTTGTAAATGC 10
MS7 GCATTTACAAGGGGGACAGG 11
MS8 CGCCGTCTACTTACAAGCAGC 12
MS9 GGTGGTGGGACTATGGAG 13
MS10 CTCCATAGTCCCACCACC 14
MS11A CGAGGAGGAGAGAAGGAC 15
MS11B GTCCTTCTCTCCTCCTC 16
MS12A CGAAGTGTACGGTCTGG 17
MS12B CACGATGCATCG 18
MS13A CGAAAGAGGCTGAATGG 19
MS13B GGGCGGTATGTACGG 20
MS13C CCGTACATACCGCCC 21
MS14 GCATCAAATCCATACTCGATATTCC 22
MS14A CGAGTATGGATTTGATGCTCG 23
MS16 GGACACGATCCTAATTCAGC 24
MS16A GCTGAATTAGGATCGTGTCC 25
______________________________________
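Several of the sequencing primers in Table 2 are opposite-strand pairs (for example MS2/MS3 and MS6/MS7). As a hedged illustration of how such pairs can be verified computationally (the helper name is ours, not the patent's):

```python
_COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of an upper-case DNA sequence."""
    return seq.translate(_COMPLEMENT)[::-1]

# MS2 and MS3 from Table 2 prime opposite strands of the same site.
ms2 = "CCCTGTACCTGGATTCCC"
ms3 = "GGGAATCCAGGTACAGGG"
assert reverse_complement(ms2) == ms3
```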
J. Characterization of Spores from the sporulation mutant strain HD1Mit9::Tn917.
The resistance properties of the mutant and the wild type spores of HD1Mit9 against heat, lysozyme, and organic solvents are compared. The wild type and mutant strains are grown in 100 ml CYS medium. The cultures are harvested 48 hours after they reach stationary phase. The sporulated cultures are then exposed to various treatments, and serial dilutions in 0.1% peptone are plated onto LB agar and incubated overnight at 30° C. Colonies arising from germinated spores are counted. Resistance of spores to heat, lysozyme, and organic solvents is determined. Heat Treatment: Cultures are diluted 1:10 in 0.1% sterile peptone and divided into two equal parts. One part is incubated at 55° C. and the other at 65° C. for 45 minutes with occasional mixing. Lysozyme Treatment: Cultures are diluted 1:100 in 0.1% peptone containing 250 μg/ml lysozyme and incubated at 37° C. for 15 minutes. Organic Solvent Treatment: Samples are treated with toluene, 1-octanol, and chloroform using the following protocol. One milliliter of the cultures is mixed with 7 ml of 0.1% peptone and 2 ml of organic solvent. The mixtures are then vortexed for 1 minute. For the acetone treatment, the cultures are diluted 1:10 in acetone and incubated at room temperature for 15 minutes. The mutant spores are sensitive to heat and organic solvents and resistant to lysozyme. The relative resistance of spores produced by the sporulation mutant strain, from the most resistant to the least resistant, is as follows: lysozyme, heat, toluene, chloroform, acetone, and 1-octanol.
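Colony counts from the serial dilutions above are converted back to viable spores per milliliter by dividing by the dilution factor and the volume plated. A minimal sketch; the function name and example numbers are hypothetical, not taken from the patent:

```python
def cfu_per_ml(colonies: int, dilution_factor: float, volume_plated_ml: float) -> float:
    """Back-calculate viable counts (CFU/ml) in the undiluted culture."""
    return colonies / (dilution_factor * volume_plated_ml)

# e.g. 150 colonies from plating 0.1 ml of a 10^-5 dilution
survivors = cfu_per_ml(150, 1e-5, 0.1)  # about 1.5 x 10^8 CFU/ml
```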
Example 2
Chromosomal Integration
A. Preparation of the spoVBt1 DNA sequence and a mutated spoV DNA sequence.
The spoVBt1 DNA sequence is prepared as described above in Example 1. Primers MS18, MS19A, MS19B and MS20 are used to PCR-amplify a mutated spoV gene using the native spoVBt1 as a template. The primers are designed to contain several point mutations such that the putative ribosomal binding site and the start codon are destroyed and multiple stop codons are introduced throughout the spoVBt1 sequence. This mutated spoV DNA sequence corresponds to nucleotides 465 to 1256 of SEQ ID NO.1 with alterations as indicated in Table 1. The specific mutated spoV DNA sequence is designated (m)spoVBt1-8.
______________________________________
Primer Sequence (5'-3') SEQ ID NO.
______________________________________
MS18 GATGTGATTGTAAGGAACAATCGAAGCGATAGAAAAAC 26
MS19A GATCTTGTATGAGAGTAAATCGGCCATACAGC 27
MS19B GCTGTATGGCCGATTTACTCTCATACAAGCTC 28
MS20 CTATACAGCATGTTAATGATCCC 29
______________________________________
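Since the (m)spoVBt1-8 design relies on stop codons introduced throughout the reading frame, a candidate mutant sequence can be screened in silico for in-frame stops. The sketch below uses a toy sequence and our own function name, not the actual mutated gene:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def in_frame_stops(seq: str, frame: int = 0) -> list:
    """Return 0-based start positions of stop codons in the chosen reading frame."""
    return [i for i in range(frame, len(seq) - 2, 3) if seq[i:i + 3] in STOP_CODONS]

# toy sequence: ATG GAA TAA GGG TGA -> stops at the third and fifth codons
assert in_frame_stops("ATGGAATAAGGGTGA") == [6, 12]
```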
To obtain the (m)spoVBt1-8 gene, two PCR reactions are performed. The two halves of the gene, the 5' and 3' fragments, which contain a 32 bp overlap corresponding to MS19A and MS19B on each DNA strand, are amplified. The 5' half is amplified using primers MS18 and MS19B, and the 3' half is amplified using primers MS19A and MS20. The two fragments are mixed, denatured, and annealed. The entire (m)spoVBt1-8 gene is then amplified using primers MS18 and MS20.
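The two-step amplification above is classic overlap-extension PCR: the shared 32 bp segment lets the half-products anneal and prime each other. The joining step can be modeled as finding the longest suffix/prefix match between the two fragments. The sequences below are toy fragments for illustration, not the patent's sequences:

```python
def fuse_by_overlap(five_half: str, three_half: str, min_overlap: int = 12) -> str:
    """Fuse two fragments whose ends share an exact overlap, longest match first."""
    for k in range(min(len(five_half), len(three_half)), min_overlap - 1, -1):
        if five_half[-k:] == three_half[:k]:
            return five_half + three_half[k:]
    raise ValueError("fragments share no overlap of at least min_overlap bases")

# toy halves sharing a 16-base junction
left = "TTTTTTTT" + "GGATCCGAATTCAAGC"
right = "GGATCCGAATTCAAGC" + "AAAAAAAA"
assert fuse_by_overlap(left, right) == "TTTTTTTTGGATCCGAATTCAAGCAAAAAAAA"
```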
B. Chromosomal Integration of crystal genes at the spoVBt1 site with the (t)spoVBt1-1 fragment.
Crystal genes are integrated into the B. thuringiensis chromosome using the pSB1209 plasmid (FIG. 6).
The plasmid pSB901 (FIG. 3) is constructed to provide an erythromycin resistance gene, ermC. The ermC gene is isolated as a HindIII/ClaI fragment from the pIM13 Bacillus subtilis plasmid described by Monod et al. [Monod et al., J. Bacteriol. 167:138-147 (1986)]. The ermC HindIII/ClaI fragment is ligated to pUC18 cut with HindIII and AccI. To replace the tetracycline resistance gene (tet.sup.r) in pBR322 (FIG. 3) with the ermC gene from pSB901, pBR322 is digested with AvaI and the linearized vector is treated with the Klenow fragment of E. coli DNA polymerase I to generate a blunt end. Following Klenow treatment, pBR322 is digested with HindIII and the large fragment is purified away from the tet.sup.r gene fragment. Plasmid pSB901 is digested with SmaI followed by HindIII, and the SmaI-HindIII fragment carrying ermC is purified. The ermC gene is ligated into the pBR322 HindIII large fragment to generate pSB140 (FIG. 3).
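Constructions like the one above depend on knowing where recognition sites fall; for in-silico checking, a simple scanner over the sequence suffices. The example finds the HindIII site (AAGCTT) inside primer MS5 from Table 2; the function name is ours:

```python
def find_sites(seq: str, site: str) -> list:
    """Return 0-based start positions of every occurrence of a recognition site."""
    hits, i = [], seq.find(site)
    while i != -1:
        hits.append(i)
        i = seq.find(site, i + 1)
    return hits

# The HindIII recognition sequence AAGCTT occurs once in MS5 (Table 2).
assert find_sites("GGGAAGCTTGTTGGGATGG", "AAGCTT") == [3]
```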
______________________________________
Primer Sequence (5'-3') SEQ ID NO.
______________________________________
KK14 AGCTTGCGGCCGCGTCGACCCCGGGCCATGGGGGCCCG 30
KK14B AATTCGGGCCCCCATGGCCCGGGGTCGACGCGGCCGCA 31
MS17 GCGAAAGAAAAACAACAATC 32
______________________________________
Using PCR primers MS14 (SEQ ID NO.22) and MS17 (SEQ ID NO.32), the (t)spoVBt1-1 gene is amplified from the B. thuringiensis HD73 strain. The 916 bp PCR product is blunted at both ends using the DNA polymerase I Klenow fragment and cloned into plasmid pUC18 at the SmaI site. The (t)spoVBt1-1 corresponds to base pairs 488 through 1404 of SEQ ID NO.1. The resulting plasmid is called pSB1207 (FIG. 5).
The (t)spoVBt1-1 gene is isolated from pSB1207 using EcoRI and HincII restriction enzymes, and the ends are blunted with the Klenow fragment. Plasmid pSB210 is linearized using the HindIII enzyme, blunted with Klenow, and dephosphorylated using calf intestinal alkaline phosphatase (CIP). The isolated (t)spoVBt1-1 gene is then ligated into the linearized pSB210 plasmid. The resulting plasmid is called pSB1209 (FIG. 6). Various crystal genes are cloned at the ApaI and NotI sites of pSB1209 and integrated into the B. thuringiensis chromosome at the spoVBt1 site.
C. Chromosomal Integration of crystal genes at the spoVBt1 site using the (m)spoVBt1-8 fragment.
The (m)spoVBt1-8 fragment is amplified from the B. thuringiensis HD73 strain using the PCR technique. The 0.8 kb PCR product is blunted at both ends using the Klenow fragment, and cloned into plasmid pUC19 at the SmaI site. The resulting plasmid is called pSB1218 (FIG. 7).
The (m)spoVBt1-8 fragment is isolated from pSB1218 at the KpnI and BamHI sites. This fragment is blunted at both ends using T4 DNA polymerase and cloned into plasmid pSB210 at the MscI site. The resulting plasmids are pSB1219 (FIG. 8) and pSB1220 (FIG. 9). The cloned (m)spoVBt1-8 fragment is either in the same orientation as the ermC gene in pSB210 (pSB1219) or in the opposite orientation to the ermC open reading frame (pSB1220). The G27 gene, encoding a CryIC/CryIE hybrid crystal protein, is cloned in pSB1219 using the following steps:
To construct the pSB210.1 plasmid (FIG. 11), the phospholipase C (plc) gene is added to pSB210. The DNA sequence of the plc region from B. thuringiensis strain ATCC 10792 is obtained from GenBank (Accession number X14178) and is described by Lechner et al. [Lechner, M., et al., Mol. Microbiol. 3:621-626 (1989)]. The plc region is amplified from HD73 total DNA by PCR using primers Phos1 and Phos4.
______________________________________
Primer Sequence (5'-3') SEQ ID NO.
______________________________________
Phos1 GGAACGCTACATACTAGTGATAGAGTAG
33
Phos4 GCTTGTACACCGCAACTGTTTTCGCATG
34
______________________________________
The PCR product is cloned into the SmaI site of pUC18 to construct pSB139 (FIG. 10). The plc target region is isolated on a 2.2 kb blunted KpnI/BamHI fragment from pSB139, gel-purified, and ligated into pSB210, which has been digested with MscI and BamHI and purified using the Geneclean Kit (Bio101), following the manufacturer's directions. The resulting plasmid is designated pSB210.1 (FIG. 11).
The plasmid pSB32 (FIG. 12), carrying the holotype cryIA(b) gene from B. thuringiensis aizawai, is cut with ApaI and NotI to release the 4.2 kb fragment containing the cryIA(b) gene. This plasmid also contains pBlueScript KS.sup.+ and the cryIC promoter and cryIA(c) terminator, which control the expression of the cryIA(b) gene. The isolated cryIA(b) fragment is ligated into pSB210.1 cut with ApaI and NotI to generate pSB219 (FIG. 13), containing the cryIA(b), plc, and ermC genes.
A 3.9 kb ApaI/NotI fragment containing a G27 toxin coding region is ligated to the 6.3 kb ApaI/NotI fragment from pSB219. The resulting plasmid is called pSB458 (FIG. 14).
The G27 gene is isolated from pSB458 using ApaI and NotI digests and ligated to pSB1219 at the ApaI/NotI sites. The resulting plasmid is called pSB1221 (FIG. 15). This plasmid is used, by the transformation process described below, to integrate G27 into the B. thuringiensis chromosome at the homologous spoVBt1 region while creating a mutation at that site.
Other crystal genes are also cloned at the ApaI and NotI sites of pSB1219 and pSB1220 and then integrated into the B. thuringiensis chromosome at the homologous spoVBt1 region while creating a mutation at that site.
D. B. thuringiensis Transformation.
To prepare competent HD73 and W4D23 B. thuringiensis cells, strains are grown in 50 ml BHIS medium (50% brain heart infusion broth, 50% 1M sucrose) at 37° C. and 300 rpm until they reach an OD.sub.600 of 0.2-0.3. W4D23 is a crystal-minus derivative of HD73. The cells are washed successively in one volume, 1/2 volume, and 1/4 volume of ice-cold HEPES/sucrose solution. The cells are finally resuspended in 1/20 volume of HEPES/sucrose solution. B. thuringiensis competent cells (40 μl for a 0.1 cm cuvette and 200 μl for a 0.2 cm cuvette) are mixed with 5-20 μl of DNA (2-10 μg) in a prechilled Gene Pulser electrode cuvette and exposed to the electrical current in the Gene Pulser electroporation apparatus. The parameters for the electroporation are 0.9 kV, 3 μF, and R=600 Ω for the 0.1 cm cuvette and 1.25 kV, 3 μF, and Ω=600 for the 0.2 cm cuvette. The cells are immediately transferred to 400 μl or 1.8 ml BHIS and grown at 37° C. for 4 hours at 250 rpm. During this period, the vector pSB1221 inserts into the chromosome via homologous recombination (a single cross-over) between the homologous spoVBt1 sequences on the bacterial chromosome and the integration vector. This results in the formation of two spoVBt1 genes, one on each side of the integrated DNA segment. After four hours of growth, the cells are transferred to LB agar plates containing the appropriate antibiotic. The plates are incubated overnight at 30° C. and the transformants are restreaked onto fresh plates. The colonies are screened by PCR to confirm the presence of the ermC gene, using the primers PG2 and PG4 to amplify a 0.3 kb fragment. Integration of pSB1221 into the spoVBt1 chromosomal locus is proven by PCR amplification of a 1261 bp product between the upstream portion of spoVBt1 not included in the pSB1221 integration vector, using primer MS12A, and the pBR322 portion of pSB1221, using primer pBR4. Results indicate that crystal protein-encoding genes are incorporated in the B. thuringiensis chromosome.
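The pulse settings above imply an RC decay time constant of R × C and a nominal field strength of the voltage divided by the cuvette gap. A quick hedged calculation; the helper names are ours:

```python
def time_constant_ms(resistance_ohm: float, capacitance_uf: float) -> float:
    """RC time constant of an exponential-decay pulse, in milliseconds."""
    return resistance_ohm * (capacitance_uf * 1e-6) * 1e3

def field_kv_per_cm(voltage_kv: float, gap_cm: float) -> float:
    """Nominal field strength across the cuvette gap."""
    return voltage_kv / gap_cm

# 600 ohm with 3 uF gives a 1.8 ms time constant;
# 0.9 kV across a 0.1 cm gap gives 9 kV/cm.
tau = time_constant_ms(600, 3)
field = field_kv_per_cm(0.9, 0.1)
```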
______________________________________
Primer Sequence (5'-3') SEQ ID NO.
______________________________________
PG2 GAAATCGGCTCAGGAAAAGG 35
PG4 AGCAAACCCGTATTCCACG 36
PBR4 GCACGATCATGCGCACCC 37
______________________________________
Mutant spores resulting from integration of pSB1221 did not revert to wild type and spontaneously degraded within two weeks after growth when stored in liquid or on solid bacterial media.
Example 3
Generalized Transduction to Move the Integrated DNA to Alternative B. thuringiensis Hosts.
Integration occurs through homologous recombination (a double cross-over event) between DNA segments on both sides of the integrated vector in the donor strain and their homologous regions on the chromosome of the recipient strain. The donor strain containing the integrated vector is grown in 10 ml LB Ery.sup.5 at 30° C. overnight (16-18 hours). Approximately one hundred microliters of the overnight culture are used to inoculate 10 ml LB containing 0.4% glycerol. The culture is then incubated at 30° C. and 300 rpm to an OD.sub.600 of 1-2. To infect the cells with the phage CP-51ts45 (obtained from Dr. Curtis B. Thorne, University of Massachusetts at Amherst), different amounts of the phage lysate, 1×10.sup.5 to 5×10.sup.6 plaque forming units (PFU), are added to 2×10.sup.7 cells in 3 ml Phage Assay (PA) soft agar (0.8% nutrient broth, 0.5% NaCl, 0.02% MgSO.sub.4.7H.sub.2 O, 0.005% MnSO.sub.4.H.sub.2 O, 0.015% CaCl.sub.2.2H.sub.2 O, pH 6.0, 0.5% Bacto Agar) which has previously been equilibrated at 50° C. The mixtures are then poured onto PA plates (PA medium containing 1.5% Bacto Agar), allowed to solidify, and incubated at 30° C. for 16 hours. The top agar, which contains hundreds of plaques in a lawn of cells, is collected in 3 to 6 ml of PA medium. The phage lysate is maintained at 16° C. for 3-4 hours, centrifuged (4,000×g, 5 minutes, 16° C.), and the supernatant is filter-sterilized using a 0.45 μm filter (VWR Scientific). The phage lysate is stored at 16° C. for long term storage. The titer of the phage lysate is approximately 1×10.sup.9 to 1×10.sup.10 PFU/ml.
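The infection step above mixes a known number of PFU with a known number of cells; their ratio is the multiplicity of infection (MOI). A minimal sketch using the figures quoted in this example (the function name is ours):

```python
def multiplicity_of_infection(pfu: float, cells: float) -> float:
    """Phage-to-cell ratio at the moment of infection."""
    return pfu / cells

# The plate infections above span roughly 0.005 to 0.25 phage per cell.
low = multiplicity_of_infection(1e5, 2e7)   # 0.005
high = multiplicity_of_infection(5e6, 2e7)  # 0.25
```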
For generalized transduction, the G27 insecticidal gene, the erythromycin marker gene, and (m)spoVBt1-8 are moved into the unmutated spoVBt1 chromosomal locus of a B. thuringiensis kurstaki strain which normally contains cryIA(c), IA(b) and IA(c) genes. The recipient strain is grown in 10 ml LB at 30° C. for 16-18 hours. Two hundred milliliters of LB are inoculated with 2-3 ml of the overnight culture and grown at 30° C. and 300 rpm to an OD.sub.600 of 1. The cells are centrifuged (7,520×g, 4° C., 15 minutes) and resuspended in LB at a concentration of approximately 2×10.sup.9 colony forming units (CFU)/ml (approximately 100-fold concentrated). The transduction mixture, which contains 8×10.sup.8 recipient cells and 4.9×10.sup.8 to 8×10.sup.8 PFU from the phage lysate, is incubated at 37° C. and 250 rpm for 30 minutes. The cell/phage suspension is spread on HA Millipore membranes (Millipore Corporation, Bedford, Mass.), placed on LB plates containing Ery.sup.0.15, and incubated at 37° C. for 3-4 hours. The membranes are then transferred to LB plates containing Ery.sup.5 and incubated at 37° C. for 18-20 hours.
The transduced isolates are confirmed using PCR as described above and by microscopic viewing of the altered spore phenotype. The amount of protein in each isolate is determined using SDS-PAGE as well as by bioassay against T. ni and S. exigua. Isolates showed production of the 135 kDa insecticidal crystal protein.
__________________________________________________________________________
SEQUENCE LISTING
(1) GENERAL INFORMATION:
(iii) NUMBER OF SEQUENCES: 37
(2) INFORMATION FOR SEQ ID NO:1:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 1662 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: double
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: DNA (genomic)
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(ix) FEATURE:
(A) NAME/KEY: CDS
(B) LOCATION: 474..1427
(D) OTHER INFORMATION: /codon.sub.-- start= 474
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:1:
CAGGTGAAATGAAATCTTCGTTACGAAGTGTACGGTCTGGTTGAATAGATATCTCCATAT60
TTTTCAATGGATTAGGAATGTTTAGAAAATGATGCATTCTATTTAGTACAATAAATACAC120
GATGCATCGTTTTTTCTGAGTAATGTCGATTCGTTTTAAATCGGAAAAGTAATCTTCGTA180
GTCTTTTGTACAAAGTGTAGCCCATATATTACTGGAAGGGAGCTTTTTGTTTTTTTCTAA240
CCAATGTCCGAAGTCTTCAACGTCATAAACATAACGTTTAATAGTTGAGGGTTTTCGGCC300
TTTATTCAATAAAAAAATAGAAAAGGCTTGTATTGTATCATGGAATTCCGTTGTCTCCAT360
AGTCCCACCACCTTAATTATTTCTTATATTATAGCAAACTTTTCTGAAAATAGGCATTTA420
CAAGGGGGACAGGAATAATAATATTTGGTGAGTGGATAAAATGAGGTGATTGTATG476
Met
GAACAATCGATGCGAAAGAAAAACAACAATCAAATTAATATTGTGTTA524
GluGlnSerMetArgLysLysAsnAsnAsnGlnIleAsnIleValLeu
51015
AACCATCGAAAGAAAATTTCTTTGCCAGCCGCAGAAAATAAAACGGTA572
AsnHisArgLysLysIleSerLeuProAlaAlaGluAsnLysThrVal
202530
ATTTCAAATGAAACTGCAATTAAACATGAAATGCTGCAGAGAATTGAA620
IleSerAsnGluThrAlaIleLysHisGluMetLeuGlnArgIleGlu
354045
GAAGAGATGGGGAAGCTTGTTGGGATGGATGATATAAAAAAGATAATA668
GluGluMetGlyLysLeuValGlyMetAspAspIleLysLysIleIle
50556065
AAAGAAATATATGCTTGGATTTATGTGAATAAAAAAAGACAAGAGAAG716
LysGluIleTyrAlaTrpIleTyrValAsnLysLysArgGlnGluLys
707580
GGATTGAAGTCAGAGAAGCAAGTACTTCATATGCTGTTTAAAGGGAAT764
GlyLeuLysSerGluLysGlnValLeuHisMetLeuPheLysGlyAsn
859095
CCAGGTACAGGGAAGACAACTGTTGCTAGAATGATAGGGAAATTGCTG812
ProGlyThrGlyLysThrThrValAlaArgMetIleGlyLysLeuLeu
100105110
TTTGAGATGAATATTCTATCGAAAGGCCACTTGGTTGAAGCTGAACGT860
PheGluMetAsnIleLeuSerLysGlyHisLeuValGluAlaGluArg
115120125
GCTGATCTTGTAGGAGAGTACATCGGCCATACAGCTCAAAAAACAAGA908
AlaAspLeuValGlyGluTyrIleGlyHisThrAlaGlnLysThrArg
130135140145
GACTTAATAAAAAAAGCAATGGGAGGTATTTTGTTTATTGATGAGGCG956
AspLeuIleLysLysAlaMetGlyGlyIleLeuPheIleAspGluAla
150155160
TATTCTTTAGCTCGAGGAGGAGAGAAGGACTTTGGAAAAGAAGCAATT1004
TyrSerLeuAlaArgGlyGlyGluLysAspPheGlyLysGluAlaIle
165170175
GATACGCTTGTAAAACATATGGAAGATAAACAACATGGTTTTGTATTG1052
AspThrLeuValLysHisMetGluAspLysGlnHisGlyPheValLeu
180185190
ATTTTAGCTGGATATTCAAGAGAGATGAATCACTTTCTTTCATTAAAT1100
IleLeuAlaGlyTyrSerArgGluMetAsnHisPheLeuSerLeuAsn
195200205
CCAGGGCTGCAATCTCGTTTTCCATTTATTATTGAATTTGCGGATTAC1148
ProGlyLeuGlnSerArgPheProPheIleIleGluPheAlaAspTyr
210215220225
TCGGTAAATCAGTTGTTGGAAATTGGGAAAAGAATGTATGAAGATCGT1196
SerValAsnGlnLeuLeuGluIleGlyLysArgMetTyrGluAspArg
230235240
GAATATCAGTTATCGAAAGAGGCTGAATGGAAATTTAGGGATCATTTA1244
GluTyrGlnLeuSerLysGluAlaGluTrpLysPheArgAspHisLeu
245250255
CATGCTGTAAAGTATTCGTCGCAAATTACATCGTTTAGTAATGGGCGG1292
HisAlaValLysTyrSerSerGlnIleThrSerPheSerAsnGlyArg
260265270
TATGTACGGAATATTGTTGAAAAATCAATTCGTACACAGGCGATGCGG1340
TyrValArgAsnIleValGluLysSerIleArgThrGlnAlaMetArg
275280285
TTGTTGCAAGAAGATGCCTATGATAAAAATGATTTAATTGGAATATCG1388
LeuLeuGlnGluAspAlaTyrAspLysAsnAspLeuIleGlyIleSer
290295300305
AGTATGGATTTGATGCTCGAAGAGGAGACGCACAGTACATAAACTGTGC1437
SerMetAspLeuMetLeuGluGluGluThrHisSerThr
310315
GTCGATTTTTGTGTATAAGTTCGTTTACTCTTTTTTTTCTTTTTCTTGGTGTACTTCATG1497
GAAGTGTTCCATTTTAGCGCTCTTTTCGTGTGCTGAATTAGGATCGTGTCCAAATTGATT1557
TACTGAGCTTTTTTGAGCTCCTTTATTAACGTGGTTTGTCATTTGTATTCACCTCACTTT1617
AAAAATTAGTATAAACATTATATAAAGAAAAAATCGTTAGAAAGA1662
(2) INFORMATION FOR SEQ ID NO:2:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 318 amino acids
(B) TYPE: amino acid
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: protein
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:2:
MetGluGlnSerMetArgLysLysAsnAsnAsnGlnIleAsnIleVal
151015
LeuAsnHisArgLysLysIleSerLeuProAlaAlaGluAsnLysThr
202530
ValIleSerAsnGluThrAlaIleLysHisGluMetLeuGlnArgIle
354045
GluGluGluMetGlyLysLeuValGlyMetAspAspIleLysLysIle
505560
IleLysGluIleTyrAlaTrpIleTyrValAsnLysLysArgGlnGlu
65707580
LysGlyLeuLysSerGluLysGlnValLeuHisMetLeuPheLysGly
859095
AsnProGlyThrGlyLysThrThrValAlaArgMetIleGlyLysLeu
100105110
LeuPheGluMetAsnIleLeuSerLysGlyHisLeuValGluAlaGlu
115120125
ArgAlaAspLeuValGlyGluTyrIleGlyHisThrAlaGlnLysThr
130135140
ArgAspLeuIleLysLysAlaMetGlyGlyIleLeuPheIleAspGlu
145150155160
AlaTyrSerLeuAlaArgGlyGlyGluLysAspPheGlyLysGluAla
165170175
IleAspThrLeuValLysHisMetGluAspLysGlnHisGlyPheVal
180185190
LeuIleLeuAlaGlyTyrSerArgGluMetAsnHisPheLeuSerLeu
195200205
AsnProGlyLeuGlnSerArgPheProPheIleIleGluPheAlaAsp
210215220
TyrSerValAsnGlnLeuLeuGluIleGlyLysArgMetTyrGluAsp
225230235240
ArgGluTyrGlnLeuSerLysGluAlaGluTrpLysPheArgAspHis
245250255
LeuHisAlaValLysTyrSerSerGlnIleThrSerPheSerAsnGly
260265270
ArgTyrValArgAsnIleValGluLysSerIleArgThrGlnAlaMet
275280285
ArgLeuLeuGlnGluAspAlaTyrAspLysAsnAspLeuIleGlyIle
290295300
SerSerMetAspLeuMetLeuGluGluGluThrHisSerThr
305310315
(2) INFORMATION FOR SEQ ID NO:3:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 30 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:3:
GGCTTTCGCTACCTGGAGAGACGCGCCCGC30
(2) INFORMATION FOR SEQ ID NO:4:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 30 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:4:
CCAGACCAACTGGTAATGGTAGCGACCGGC30
(2) INFORMATION FOR SEQ ID NO:5:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 19 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:5:
GAGAGATGTCACCGTCAAG19
(2) INFORMATION FOR SEQ ID NO:6:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 18 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:6:
CCCTGTACCTGGATTCCC18
(2) INFORMATION FOR SEQ ID NO:7:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 18 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:7:
GGGAATCCAGGTACAGGG18
(2) INFORMATION FOR SEQ ID NO:8:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 19 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:8:
CCATCCCAACAAGCTTCCC19
(2) INFORMATION FOR SEQ ID NO:9:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 19 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:9:
GGGAAGCTTGTTGGGATGG19
(2) INFORMATION FOR SEQ ID NO:10:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 20 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:10:
CCTGTCCCCCTTGTAAATGC20
(2) INFORMATION FOR SEQ ID NO:11:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 20 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:11:
GCATTTACAAGGGGGACAGG20
(2) INFORMATION FOR SEQ ID NO:12:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 21 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:12:
CGCCGTCTACTTACAAGCAGC21
(2) INFORMATION FOR SEQ ID NO:13:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 18 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:13:
GGTGGTGGGACTATGGAG18
(2) INFORMATION FOR SEQ ID NO:14:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 18 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:14:
CTCCATAGTCCCACCACC18
(2) INFORMATION FOR SEQ ID NO:15:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 18 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:15:
CGAGGAGGAGAGAAGGAC18
(2) INFORMATION FOR SEQ ID NO:16:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 17 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:16:
GTCCTTCTCTCCTCCTC17
(2) INFORMATION FOR SEQ ID NO:17:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 12 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:17:
CACGATGCATCG12
(2) INFORMATION FOR SEQ ID NO:18:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 12 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:18:
CACGATGCATCG12
(2) INFORMATION FOR SEQ ID NO:19:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 17 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:19:
CGAAAGAGGCTGAATGG17
(2) INFORMATION FOR SEQ ID NO:20:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 15 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:20:
GGGCGGTATGTACGG15
(2) INFORMATION FOR SEQ ID NO:21:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 15 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:21:
CCGTACATACCGCCC15
(2) INFORMATION FOR SEQ ID NO:22:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 25 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:22:
GCATCAAATCCATACTCGATATTCC25
(2) INFORMATION FOR SEQ ID NO:23:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 21 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:23:
CGAGTATGGATTTGATGCTCG21
(2) INFORMATION FOR SEQ ID NO:24:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 20 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:24:
GGACACGATCCTAATTCAGC20
(2) INFORMATION FOR SEQ ID NO:25:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 20 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:25:
GCTGAATTAGGATCGTGTCC20
(2) INFORMATION FOR SEQ ID NO:26:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 38 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:26:
GATGTGATTGTAAGGAACAATCGAAGCGATAGAAAAAC38
(2) INFORMATION FOR SEQ ID NO:27:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 32 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:27:
GATCTTGTATGAGAGTAAATCGGCCATACAGC32
(2) INFORMATION FOR SEQ ID NO:28:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 32 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:28:
GCTGTATGGCCGATTTACTCTCATACAAGCTC32
(2) INFORMATION FOR SEQ ID NO:29:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 23 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:29:
CTATACAGCATGTTAATGATCCC23
(2) INFORMATION FOR SEQ ID NO:30:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 38 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:30:
AGCTTGCGGCCGCGTCGACCCCGGGCCATGGGGGCCCG38
(2) INFORMATION FOR SEQ ID NO:31:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 38 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:31:
AATTCGGGCCCCCATGGCCCGGGGTCGACGCGGCCGCA38
(2) INFORMATION FOR SEQ ID NO:32:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 20 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:32:
GCGAAAGAAAAACAACAATC20
(2) INFORMATION FOR SEQ ID NO:33:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 28 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:33:
GGAACGCTACATACTAGTGATAGAGTAG 28
(2) INFORMATION FOR SEQ ID NO:34:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 28 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:34:
GCTTGTACACCGCAACTGTTTTCGCATG 28
(2) INFORMATION FOR SEQ ID NO:35:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 20 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:35:
GAAATCGGCTCAGGAAAAGG 20
(2) INFORMATION FOR SEQ ID NO:36:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 19 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:36:
AGCAAACCCGTATTCCACG 19
(2) INFORMATION FOR SEQ ID NO:37:
(i) SEQUENCE CHARACTERISTICS:
(A) LENGTH: 18 base pairs
(B) TYPE: nucleic acid
(C) STRANDEDNESS: single
(D) TOPOLOGY: linear
(ii) MOLECULE TYPE: cDNA
(iii) HYPOTHETICAL: NO
(iv) ANTI-SENSE: NO
(xi) SEQUENCE DESCRIPTION: SEQ ID NO:37:
GCACGATCATGCGCACCC 18
Dear mummy, our travels took us somewhere new last weekend. As we were driving back from London and the traffic was bad on the M25 we decided to take a quick detour via Leatherhead towards a little village called Bookham, in between Dorking and Guildford.
We had been told that Polesden Lacey, a National Trust Estate, was just round the corner. As we are members of the National Trust, it was a good opportunity to stop in and visit.
It’s another fine day and the skies are blue. We drive up the long road through an impressive gate and up to the estate. The car park is a stone’s throw from the entrance to Polesden Lacey and it’s not too busy today.
We arrive at 2ish, grab our picnic blanket and get the sun cream out of the car. A quick nappy change in the toilets, which are cool and clean, and I feel refreshed after my long car journey. A quick runaround is just what I need to stretch my little legs!
This place is beautiful! It’s surrounded by rolling hills and stunning scenery, and people are sat in deck chairs soaking up the sunshine.
Polesden Lacey is an Edwardian house and estate located on the North Downs. It is owned and run by the National Trust and is very popular. The house was originally owned by Margaret Greville, a well-known Edwardian hostess who entertained royalty and the privileged. She was a close friend of Queen Mary and bequeathed all her jewels to Elizabeth the Queen Mother, including a diamond necklace belonging to Marie Antoinette!
She was named a Dame Commander of the Order of the British Empire in 1922 and her estate was bequeathed to the National Trust.
The grounds of the estate are extensive and we head off to see the house and take in the views. It is spectacular and you can see why King George VI and Queen Elizabeth spent part of their honeymoon here! Mrs Greville’s collection of fine paintings and porcelain is displayed for visitors to see in the house. My mummy ducks in and has a quick look while daddy and I play on the lawn. The estate has a regal feel about it and we imagine young royals playing on the lawn as children.
We explore the walled garden next; as it’s July the roses are still in bloom and they look fabulous. Lavender lines the paved paths around the gardens and you can hear busy bees flying from one flower to another.
It was so peaceful here and we had the whole place to ourselves. Our favourite blooms were the snow white ‘Iceberg’ roses and the beautiful dip dyed yellow and pink ‘You’re Beautiful’ blooms. Both looked stunning against the deep blue sky.
Stone sculptures littered the formal gardens, some scary and some angelic. I was mesmerised by daddy blowing bubbles and followed them around the gardens, chasing them with my hands. It was lovely to spend quality time with my daddy and we both lay down on the picnic blanket and stared at the sky. I nestled my head under his arm and we both chilled out.
But not for long! Mummy had brought my ball with her and we played piggy in the middle, while I chased it on the grass in my bare feet. I loved the feeling of grass in-between my toes. Daddy carried me through the trees and I giggled in delight as we ducked and dived through the leaves as they brushed across my body.
On the way back we walked through the pleasure grounds and watched staff set up for a wedding, a lovely spot for one. It’s still very warm, so before we head back to the car we stop by the cafe opposite reception and have an ice cream.
My mummy goes to get my National Trust passport stamped (a collection of little stamps we’ve been acquiring on our trips around National Trust estates) and we sit and watch the world go by.
A lovely short break in-between car journeys and we could have spent the whole day there! So much more to explore and we will be back. | https://dearmummyblog.com/2014/07/24/polesden-lacey-national-trust/ |
Privacy Enhancing Protocols using Pairing Based Cryptography.
PhD thesis, Dublin City University.
This thesis presents privacy enhanced cryptographic constructions,
consisting of formal definitions, algorithms and motivating
applications. The contributions are a step towards the development of
cryptosystems which, from the design phase, incorporate privacy as a
primary goal. Privacy offers a form of protection over personal and
other sensitive data to individuals, and has been the subject of much
study in recent years.
Our constructions are based on a special type of algebraic group called
bilinear groups. We present existing cryptographic constructions which
use bilinear pairings, namely Identity-Based Encryption (IBE). We define
a desirable property of digital signatures, blindness, and present new
IBE constructions which incorporate this property.
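As background (our addition, not part of the abstract): a bilinear pairing is a map $e \colon G_1 \times G_2 \to G_T$ between groups of prime order $q$, and the property that IBE and related constructions exploit is bilinearity:

```latex
e(aP, bQ) = e(P, Q)^{ab} \qquad \text{for all } P \in G_1,\; Q \in G_2,\; a, b \in \mathbb{Z}_q
```

together with non-degeneracy (the pairing is not identically one) and efficient computability.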
Blindness is a desirable feature from a privacy perspective as it allows
an individual to obscure elements such as personal details in the data
it presents to a third party. In IBE, blinding focuses on obscuring
elements of the identity string which an individual presents to the key
generation centre. This protects an individual's privacy in a direct
manner by allowing her to blind sensitive elements of the identity
string and also prevents a key generation centre from subsequently
producing decryption keys using her full identity string. Using blinding
techniques, the key generation centre does not learn the full identity
string.
In this thesis, we study selected provably-secure cryptographic
constructions. Our contribution is to reconsider the design of such
constructions with a view to incorporating privacy. We present the new,
privacy-enhanced cryptographic protocols using these constructions as
primitives. We refine useful existing security notions and present
feasible security definitions and proofs for these constructions. | https://doras.dcu.ie/15368/ |
Crazy Cat's Eye had a really bad season during Sand Marble Rally 2016. With two DNFs and only 8 points, Crazy Cat's Eye was eliminated after Race 6 at 30th in the standings.
2017
Crazy Cat's Eye qualified for Sand Marble Rally 2017 by finishing first in their Qualifying Race. Given their awful performance in the last season, Crazy Cat's Eye entered the 2017 season as an underdog, but they showed that they had improved much by finishing first in Race 1.
They failed to finish the second race in an incident involving H2 Blue, Slimer, Blizzard Blaster, Ghost Plasma and El Capitan, who blamed Deep Ocean for causing the six to get stuck on the plinko board mid-race. They then finished eleventh in Race 3, ninth in Race 4, and fifteenth in Race 5. They had a good 4th-place finish in Race 6, and another eleventh in Race 7. They finished 6th in Race 8, and 8th in the ninth race. Crazy Cat's Eye proceeded to keep up with their performance, with a third-place finish in Race 10. Then they finished fifteenth in Race 11, and twelfth in Race 12. Finally, they finished fourteenth in the final race. They ended the season in 8th, signifying that they had indeed improved from the last season.
2018
Crazy Cat's Eye finished in 8th place at Sand Marble Rally 2018 (losing the tiebreaker to Dragon's Egg, 1 point ahead of Big Pearl, and 3 points behind Quicksilver). Crazy Cat's Eye had good midfield consistency, but their form seemed to have dipped since their Round 1 victory in 2017.
2019
Crazy Cat's Eye qualified for the Marble Rally 2019.
Their season started on a high with a convincing win in Race 1, with only a brief challenge from Blizzard Blaster mid-race. However, two poor races followed, including, in Race 3, their first DNF since Race 2 of 2017. Their performances picked up and they were able to get a podium in Race 5, a race which Red Number 3 dominated, finishing behind Cool Moody. Race 6 was another low for Crazy Cat's Eye with an 18th place, but then they bounced back with a win in Race 7, not losing the lead once. Race 8, the final race, was also a decent race for them, coming 7th and competing with the leaders all race long. They finished 4th in the overall standings, the highest championship standing of their career so far. Surprisingly, despite having the most medals this season, they could not break into the final championship podium.
Season 5
Crazy Cat's Eye was automatically qualified for Marble Rally Season 5 due to placing in the top 12 in 2019 and was invited when the system changed late on.
Race 1 for Crazy Cat's Eye started poorly despite a front-row start, but after a section where six marbles crashed, they leaped up into 5th, helped by going slightly off-track on the next turn. They then fought up to second and, after a long battle with Lollipop, took the lead, which they kept despite a challenge by Cool Moody at the end. However, the rest of the season was difficult for them: despite several other top 5 finishes, Crazy Cat's Eye recorded 3 DNFs and a 17th, as well as several lower points finishes, which left them down in 11th place, their lowest since 2016.
Racing profile and analysis
Crazy Cat's Eye exploits good starts to their maximum potential, taking convincing wins in races 1 and 7 of 2019 thanks to their ability to retain good track position. They have breakneck speed, and their pace seems to get even faster when they have an open track ahead of them. Because of this trait, Crazy Cat's Eye has DNF'd multiple times in their career, the most recent being their Race 8 crash in Season 5. Despite this, they do hold the record for the most consecutive points finishes (20 races). Other flaws include their incompetence at overtaking in tight congestion, as they often remain at the back of the pack if their starts fail. Above-average defence skills, alongside good overtaking on straights and fast corners, round off their strengths, while subpar slalom navigation is one other notable weakness of theirs. Crazy Cat's Eye is, fitting to their name, an eye-catching racer thanks to their swift gate releases and consistent pace up front, though they may need some work on their back-row game.
Marble Rally results
|Season||1||2||3||4||5||6||7||8||9||10||11||12||13||Pos||Points|
|2016||R1 DNF||R2 28||R3 18||R4 21||R5 DNF||R6 16||R7||R8||R9||R10||R11||R12||30th||8|
|2017||R1 1||R2 DNF||R3 11||R4 9||R5 15||R6 4||R7 11||R8 6||R9 8||R10 3||R11 15||R12 12||F‡ 14||8th||94|
|2018||R1 11||R2 4||R3 9||R4 6||R5 14||R6 4||R7 10||R8 16||8th||55|
|2019||R1 1||R2 16||R3 DNF||R4 9||R5 3||R6 18||R7 1||R8 7||4th||70|
|S5||R1 1||R2 DNF||R3 5||R4 DNF||R5 11||R6 17||R7 4||R8 DNF||R9 4||R10 7||R11 11||R12 14||11th||76|
* - Season is still in progress
‡ - Double points
Trivia
- Crazy Cat's Eye also holds the record for the most wins in the first race of the season, at 3.
(Volcano Watch is a weekly article written by scientists at the U.S. Geological Survey’s Hawaiian Volcano Observatory.)
At the summit of Kīlauea Volcano, Halemaʻumaʻu has changed dramatically since early May 2018. As the crater walls and inner caldera slump inward, the depth of Halemaʻumaʻu has more than tripled and the diameter has more than doubled. Before May, about 10 earthquakes per day were typical at the summit. As of late June 2018, there are about 600 earthquakes located in the same region on a daily basis. Many of these earthquakes are strong enough to be felt, and some can be damaging. These earthquakes are understandably causing concern, especially in Volcano Village and surrounding subdivisions.
What is causing these earthquakes? The short answer is that the rigid rock of the caldera floor is responding to the steady withdrawal of magma from a shallow reservoir beneath the summit. As magma drains into the East Rift Zone (traveling about 40 km (26 mi) underground to erupt from fissures in the Leilani Estates subdivision), it slowly pulls away support of the rock above it. Small earthquakes occur as the crater floor sags. The collapse/explosion event is triggered when the caldera floor can no longer support its own weight and drops downward. Large collapses can produce an explosion and ash plume that rises above the crater.
An example of this is the most recent event that occurred on June 28, 2018, at 4:49 AM HST. An ash-poor plume rose about 300 m (1000 ft) above the ground and drifted to the southwest. The energy released by the event was equivalent to a M5.3 earthquake. Since May 16, we have observed intervals between collapse/explosion events as short as 8 hours and as long as 64 hours. The average is about 28 hours, which is why they seem to happen on an almost daily basis.
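To put the "equivalent to a M5.3 earthquake" figure in rough physical units, the standard Gutenberg-Richter energy-magnitude relation can be used (this calculation is our addition, not part of the article, and the function name is ours):

```python
def seismic_energy_joules(magnitude):
    """Approximate radiated seismic energy in joules for a given magnitude.

    Uses the Gutenberg-Richter energy-magnitude relation:
        log10(E) = 1.5 * M + 4.8   (E in joules)
    """
    return 10 ** (1.5 * magnitude + 4.8)

# A M5.3 event radiates on the order of 10^12.75 joules.
e53 = seismic_energy_joules(5.3)
print(f"{e53:.2e} J")  # ≈ 5.62e+12 J
```

Because the exponent scales by 1.5 per magnitude unit, each whole-magnitude step corresponds to roughly 31.6 times more radiated energy, which is why a M5.3 collapse event is felt so much more strongly than the everyday smaller earthquakes at the summit.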
Analyses of data from tiltmeters, GPS stations, seismometers, gas measurements, satellite and visual observations are ongoing, and several hypotheses could explain the processes occurring at the summit. A leading concept is that incremental collapses beneath the caldera act as a piston dropping on top of a depressurized magmatic system. This collapse process culminates in a large earthquake that may be strong enough to be felt by residents in the area. It also can produce an explosion in which gas drives ash into the atmosphere. After a large collapse/explosion event, the stress on the faults around Halemaʻumaʻu is temporarily reduced, resulting in fewer earthquakes. Several hours later, as magma continues to drain out of the summit, stress increases on the faults around Halemaʻumaʻu and earthquake rates increase and grow to a constant level that continues for several hours before the next collapse/explosion event.
The collapse/explosion events generate plumes that have become progressively more ash-poor and now rise only a few thousand feet above the crater. This is in contrast to the eruptive sequence from May 16-26, when the vent within Halemaʻumaʻu crater was open so ash plumes could rise into the air during each collapse/explosion event, like the event on May 17, 2018, that sent an ash plume to 30,000 feet. But by May 29, rock rubble from the crater rim and walls had filled the vent and a portion of the conduit that comprises the shallow magma reservoir may have partially collapsed, blocking the path for most of the ash to escape.
Since June 21, nearby residents have reported feeling stronger, more intense shaking from the collapse/explosion events. Why do they feel stronger when the location and magnitude are about the same? It is possible that another partial collapse of the shallow magma reservoir occurred, also changing subsurface geometry. This changed the character of the seismic waves, which now have more high frequencies (shorter wavelengths) that people may feel more intensely. An analogy is a home theater or car stereo. Imagine you have it set at a constant volume (like the consistent earthquake magnitude) but then change the dials to increase the treble while lowering the bass slightly. The total energy is the same, but it’s just being expressed in different frequencies. This is why, over time, people may be reporting that they are feeling these events more widely and intensely.
One of the most frequently asked questions is when will this end? The response is not so straightforward. The summit continues to subside as magma moves from the shallow reservoir beneath the Kīlauea summit into the lower East Rift Zone. As this process continues, Halemaʻumaʻu will continue to respond with collapse/explosion events. If you feel strong shaking, remember to drop, cover, and hold on until it stops. Be sure to quake-proof your home, school, and business. Look here for tips: www.shakeout.org/hawaii/resour…. Also, please help the USGS by reporting if you feel an earthquake at earthquake.usgs.gov/dyfi.
---
abstract: 'Coalition formation typically involves the coming together of multiple, heterogeneous, agents to achieve both their individual and collective goals. In this paper, we focus on a special case of coalition formation known as Graph-Constrained Coalition Formation (GCCF) whereby a network connecting the agents constrains the formation of coalitions. We focus on this type of problem given that in many real-world applications, agents may be connected by a communication network or only trust certain peers in their social network. We propose a novel representation of this problem based on the concept of edge contraction, which allows us to model the search space induced by the GCCF problem as a rooted tree. Then, we propose an anytime solution algorithm (CFSS), which is particularly efficient when applied to a general class of characteristic functions called $\mplusa$ functions. Moreover, we show how CFSS can be efficiently parallelised to solve GCCF using a non-redundant partition of the search space. We benchmark CFSS on both synthetic and realistic scenarios, using a real-world dataset consisting of the energy consumption of a large number of households in the UK. Our results show that, in the best case, the serial version of CFSS is 4 orders of magnitude faster than the state of the art, while the parallel version is 9.44 times faster than the serial version on a 12-core machine. Moreover, CFSS is the first approach to provide anytime approximate solutions with quality guarantees for very large systems of agents (i.e., with more than 2700 agents).'
author:
- 'FILIPPO BISTAFFA, ALESSANDRO FARINELLI, JESÚS CERQUIDES, JUAN RODRÍGUEZ-AGUILAR, and SARVAPALI D. RAMCHURN'
bibliography:
- 'tist.bib'
title: |
Algorithms for Graph-Constrained Coalition Formation\
in the Real World
---
Author’s addresses: F. Bistaffa and A. Farinelli, Department of Computer Science, University of Verona, Verona, Italy; J. Cerquides and J. Rodríguez-Aguilar, IIIA-CSIC, Barcelona, Spain; S. D. Ramchurn, Electronics and Computer Science, University of Southampton, Southampton, United Kingdom.
Introduction
============
Coalition Formation (CF) is one of the key approaches to establishing collaborations in multi-agent systems. It involves the coming together of multiple, possibly heterogeneous, agents in order to achieve either their individual or collective goals, whenever they cannot do so on their own. Building upon the seminal work of , identify the key computational tasks involved in the CF process: (i) coalitional value calculation: defining a *characteristic function* which, given a coalition as an argument, provides its coalitional value; (ii) coalition structure generation (CSG): finding a partition of the set of agents (into disjoint coalitions) that maximises the sum of the values of the chosen coalitions; and (iii) payment computation: finding the transfer or payment to each agent to ensure it is fairly rewarded for its contribution to its coalition.
On the one hand, typical CF approaches assume that the values of all the coalitions are stored in memory, allowing to read each value in constant time. However, this assumption makes the size of the input of the CSG and payment computation problems exponential, as the entire set of coalitions (whose size is $2^n$ for $n$ agents) must be mapped to a value. On the other hand, CSG and payment computation are combinatorial in nature and most existing solutions do not scale well with the number of agents. In this paper, we focus on the CSG problem to provide solutions that can be applied to real-world problems, which usually involve hundreds or thousands of agents.
The computational complexity of the CSG problem is due to the size of its search space,[^1] which contains every possible subset of agents as a potential coalition. However, in many real-world applications, there are constraints that may limit the formation of some coalitions [@rahwan2011constrained]. Specifically, we focus on a specific type of constraints that encodes synergies or relationships among the agents and that can be expressed by a graph [@Myerson1977], where nodes represent agents and edges encode the relationships between the agents. In this setting, edges enable connected agents to form a coalition and a coalition is considered feasible only if its members represent the vertices of a connected subgraph. Such constraints are present in several real-world scenarios, such as social or trust constraints (e.g., energy consumers who prefer to group with their friends and relatives in forming energy cooperatives [@switch]), physical constraints (e.g., emergency responders may join specific teams in disaster scenarios where only certain routes are available), or communication constraints (e.g., non-overlapping communication loci or energy limitations for sending messages across a network from one agent to another). Hereafter, we shall refer to the CF problem where coalitions are encoded by means of graphs as *Graph-Constrained Coalition Formation* (GCCF). It is important to note that the addition of these constraints does not lower the complexity of the problem. In particular, show that the GCCF problem remains NP-complete.
In this work, we are primarily interested in developing CSG solutions for GCCF that are deployable in real-world scenarios involving hundreds or thousands of agents, such as collective energy purchasing [@vinyals-ENERGYCON-12; @eps351521] and ridesharing [@aaai]. Notice that, since the computation of an optimal solution is often infeasible for large-scale systems, our CSG algorithm should be able to provide anytime approximate solutions with good quality guarantees. Moreover, the memory requirements should scale well with the number of agents.
In this context, the works by [-@Voice2012b; -@Voice2012a] represent the state of the art for GCCF. However, there are some drawbacks that hinder their applicability. make assumptions that do not hold in most real-world applications (see Section \[sec:stategccf\]), whereas the memory requirements of the approach in [@Voice2012a] grow exponentially in the number of agents, hence limiting the scalability. To overcome these drawbacks, in this paper we propose CFSS (Coalition Formation for Sparse Synergies), the first approach for GCCF that computes anytime solutions with theoretical quality guarantees for large systems (i.e., more than 2700 agents). As recently noticed in a survey on CSG by , previous approaches in the CF literature have been either applied to small-scale synthetic scenarios, or, in the case of heuristic approaches, cannot provide any theoretical guarantees on the quality of their solutions. Moreover, we provide P-CFSS, a parallelised version of CFSS that exploits multi-core CPUs. Finally, we identify a general class of closed-form functions, denoted as $\mplusa$, for which we provide upper bounds, allowing for coalitional values to be computed online (i.e., their storage can be avoided).
In more detail, this paper advances[^2] the state of the art in the following ways:
1. We provide a new representation for GCCF which, by using edge contractions on the graph, can efficiently build a search tree where each node is a feasible coalition structure, while avoiding redundancy (i.e., each solution appears only once).
2. We identify a general class of characteristic functions, i.e., $\mplusa$ functions, which are expressive enough to represent a wide range of real-world GCCF problems.
3. We propose CFSS, a branch and bound algorithm that, when applied to CF with $\mplusa$ functions, can solve the CSG problem for GCCF and can provide anytime approximate solutions with good quality guarantees.
4. We propose P-CFSS, a parallel version of CFSS that is up to 9.44 times faster than the serial version on a 12-core machine.
The rest of the paper is organised as follows. Section \[sec:relwork\] discusses the relationship between our work and the existing literature, and Section \[sec:problem\] formally defines GCCF. Section \[sec:search\] explains how we generate our search space, and Section \[sec:6\] details the domains used to benchmark CFSS, our branch and bound approach described in Section \[sec:cfss\], and Section \[sec:exp\] discusses our empirical evaluation. Finally, Section \[sec:conclusions\] concludes the paper.
Related work {#sec:relwork}
============
In this section we elaborate on related work in the areas of CF (Section \[sec:relwcf\]), team formation (Section \[sec:teamformation\]), graph theory (Section \[sec:graphtheory\]) and optimisation (Section \[sec:rloptimisation\]).
Coalition Formation {#sec:relwcf}
-------------------
### Classic CSG algorithms
A number of algorithms have been developed to solve CSG for the general CF problem where all coalitions can be formed (i.e., non-GCCF). These range from mixed-integer programming to branch and bound techniques [@Rahwan2009] through Dynamic Programming (DP) [@idp]. In particular, and focused on providing anytime solutions with quality guarantees. However, their solutions do not scale (growing in $O(n^n)$) and, as discussed by , they cannot be employed to solve CSG for GCCF, since assigning artificially low values (such as $-\infty$) to infeasible coalitions would not be suitable for assessing valid bounds. Finally, [-@rahwan:jennings:2008b; -@Rahwan2009; -@eps337164] developed IDP-IP$^*$, the state of the art algorithm for classic CSG. However, IDP-IP$^*$ is limited to tens of agents (30 at most) due to its memory requirements (i.e., $\Theta\left(2^n\right)$), as such approaches need to store all coalition values.
To overcome the intractability due to such memory requirements, a number of works [@ohta2009coalition; @ueda2011concise; @tran2013efficient] have examined alternative function representations, which allow to reduce the computational complexity of the associated CF problems. Unfortunately, their models may not be able to capture the realistic nature of functions such as the collective energy purchasing one we consider here. On the one hand, this function cannot be concisely expressed as a MC network, as its MC network would require an exponential amount of memory with respect to the number of agents. On the other hand, the concepts of agent types/skills imply that it is possible to fully characterise the contribution of each agent on the basis of a small set of features, in order to achieve the conciseness of the representation. However, in our scenario each agent is associated to its own energy consumption profile, resulting in a number of types/skills equal to the number of agents. Hence, we do not compare against these works, since we are interested in developing techniques that can handle complex functions such as the collective energy purchasing function.
### CSG algorithms based on heuristics {#sec:heu}
Very few heuristic solutions to the CSG problem have been developed over the last few years. For example, propose a solution based on genetic algorithms, propose an approach based on swarm intelligence (the bee clustering algorithm) for task allocation in the RoboCup Rescue domain, and propose an approach based on hierarchical clustering. Meta-heuristic approaches to CSG have also been investigated, for example proposes a CSG algorithm based on Simulated Annealing, while use a stochastic local search approach (GRASP) to iteratively build a coalition structure of high quality. Even if these approaches are not able to provide any guarantees on the solution quality, they can compute solutions for large numbers of agents. Hence, in Section \[sec:clink\] we compare CFSS against C-Link, since it is the most recent heuristic approach for CSG and it has been tested using the collective energy purchasing function, which we also consider.
### Constrained CF
The works discussed above focus on unconstrained CF and cannot be directly used in contexts where constraints of various types may limit the formation of some coalitions. In this respect, first introduced the idea, arising in many realistic scenarios, of restricting the maximum cardinality $k$ of the coalitions in CSG, highlighting that, even though this constraint lowers the number of coalitions from exponential, i.e., $2^n$, to polynomial, i.e., $O\left(n^k\right)$, the problem remains NP-hard. Therefore, the authors propose an approximate algorithm with quality guarantees, which, however, can be used if all $O\left(n^k\right)$ coalitions are valid. On the other hand, developed a model of Constrained Coalition Formation (CCF), differing from standard CF due to the presence of constraints that forbid the formation of certain coalitions. However, authors provide an algorithm for optimal CSG only for *Basic* CCF (BCCF) games, which cannot be used to represent every GCCF problem, as shown in Section A.1 of the Appendix.
Finally, in a recent work, proposed an approach to check the non-emptiness of the core when the grand coalition does not form, hence effectively addressing a CSG problem. Notice that, even though such an approach is tested on 1000 agents, the authors assume that the number of feasible coalitions is less than 10000. This assumption is not reasonable for large-scale scenarios we are interested to solve. For the sake of comparison, the number of feasible coalitions with 50 agents and $m=1$ (i.e., the simplest network topology we consider in our tests) is $\sim 150$ billions, thus severely limiting the scalability of such an approach on large-scale scenarios due to its memory requirements.
### State of the art algorithms for GCCF {#sec:stategccf}
[-@Voice2012b; -@Voice2012a] were the first to propose algorithms for the GCCF problem. However, there are some drawbacks that hinder their applicability. First, [@Voice2012b] can only be applied to characteristic functions fulfilling the independence of disconnected members (IDM) property. The IDM property requires that, given two disconnected agents $i$ and $j$, the presence of agent $i$ does not affect the marginal contribution of agent $j$ to a coalition. This assumption is rather strong for real-world applications. As noticed by considering task allocation, the addition of a new agent to a coalition could result in intra-coalition coordination and communication costs, which increase with the size of the coalition. Hence, realistic functions capturing such costs (such as the ones in Section \[sec:functions\]) do not satisfy the IDM property, hence this approach cannot be applied. Second, the DyCE algorithm [@Voice2012a] uses DP to find the optimal coalition structure by progressively splitting the current solution into its best partition. DyCE is not an anytime algorithm and requires an exponential amount of memory in the number of agents (i.e., $\Theta\left(2^n\right)$). Hence, the scalability of this approach is limited to systems consisting of tens of agents (around 30).
Team formation {#sec:teamformation}
--------------
The problem of forming groups of agents has also been widely studied in the context of team formation, for which several formal definitions have been proposed. As an example, one line of work devises a heuristic to modify the graph connecting the agents based on local autonomous reasoning, without considering any concept of a globally optimal solution. Another studied problem focuses on finding a single group of agents who possess a given set of skills, so as to minimise the communication cost within such a group. Other works focus on forming a single group of agents that has the maximum strength over a set of world states. Finally, some authors are interested in modelling the values of the characteristic function based on observations of the agents. In this paper, we address the specific group formation problem in which groups must form a partition (into disjoint coalitions) of a given set of agents, with the objective of maximising the sum of the coalitional values. This problem is equivalent to the *complete set partitioning* problem [@yun1986dynamic], i.e., the standard definition adopted in the CF literature.
Graph theory techniques {#sec:graphtheory}
-----------------------
Our approach enumerates all the feasible partitions of the set of agents by means of the edge contraction operation, a graph-theoretic technique known for its application in Karger's algorithm for the Min-Cut problem [@karger]. Edge contraction has never been employed in CF [@rahwan2015coalition], hence we investigate its use in this paper. In this context, the problem of enumerating all the connected subgraphs (corresponding to feasible coalitions in GCCF scenarios) of a given graph has been studied in a number of works [@Voice2012a; @Skibski:2014:ASM:2615731.2615766]. Nonetheless, such algorithms can only be used to enumerate feasible coalitions, and cannot be applied to enumerate feasible coalition *structures* (as CFSS does), which are *sets* of disjoint feasible coalitions that collectively cover the entire set of agents.
Submodular-supermodular decomposition {#sec:rloptimisation}
-------------------------------------
Submodular functions have been widely studied in the optimisation literature [@schrijver2003combinatorial] by virtue of their natural *diminishing returns* property, which makes them suitable for many applications [@nemhauser1978analysis; @narayanan1997submodular]. Moreover, [-@shekhovtsov2006supermodular; -@shekhovtsov2008lp] focused on general functions that can be decomposed as the sum of supermodular and submodular components, exploiting such a property to achieve better results in the solution of several optimisation problems.
While this approach is similar to the decomposition we propose in Section \[sec:functions\], our result holds for superadditive and subadditive functions (cf. Definition \[def:supersub\]), which are *weaker* (i.e., more general) properties with respect to supermodularity and submodularity. In fact, it is easy to show that supermodularity (resp. submodularity) implies superadditivity (resp. subadditivity), but the converse is not true [@schrijver2003combinatorial].
GCCF problem definition {#sec:problem}
=======================
\[subsec:problem\]
The Coalition Structure Generation (CSG) problem [@Sandholm99; @Shehory1998] takes as input a finite set of $n$ agents $\mathcal A$ and a characteristic function $v:2^\mathcal{A}\rightarrow \mathbb{R}$, that maps each coalition $C\in 2^\mathcal{A}$ to its value, describing how much collective payoff a set of players can gain by forming a coalition. A coalition structure $CS$ is a partition of the set of agents into disjoint coalitions. The set of all coalition structures is $\Pi(\mathcal{A})$. The value of a coalition structure $CS$ is assessed as the sum of the values of its composing coalitions, i.e., $$\label{eq:V}
V(CS)=\sum_{C\in CS}v(C).$$ CSG aims at identifying $CS^*$, the most valuable coalition structure, i.e., $CS^*=\operatorname*{arg\,max}_{CS\in\Pi(\mathcal{A})}{V(CS)}.$ Graphs have been used in different scenarios to encode synergies, coordination among players, possible collaborations or cooperation structures [@Myerson1977; @Voice2012a; @meir2012optimization]. The use of graphs to model cooperation structures was pioneered by Myerson [-@Myerson1977]. Given an undirected graph $G=(\mathcal{A},\mathcal{E})$, where $\mathcal{E} \subseteq \mathcal{A} \times \mathcal{A}$ is a set of edges between agents, representing the relationships between them, Myerson considers a coalition $C$ to be feasible if all of its members are connected in the subgraph of $G$ induced by $C$. That is, for each pair of players $a,b \in C$ there is a path in $G$ that connects them without going out of $C$. Thus, given a graph $G$ the set of feasible coalitions is $$\mathcal{FC}(G)=\{C\subseteq \mathcal{A} \mid \text{The subgraph induced by } C \text{ on } G \text{ is connected}\}.$$ A Graph-Constrained Coalition Formation (GCCF) problem is a CSG problem together with a graph $G$, in which a coalition $C$ is considered feasible if $C\in\mathcal{FC}(G)$. Moreover, a coalition structure $CS$ is considered feasible if each of its coalitions is feasible, i.e., $$\mathcal{CS}(G)=\{CS\in \Pi(\mathcal{A}) \mid CS\subseteq \mathcal{FC}(G)\}.$$ A GCCF problem aims at identifying the most valuable coalition structure, defined as $CS^*=\operatorname*{arg\,max}_{CS\in \mathcal{CS}(G)}{V(CS)}.$
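To make the feasibility condition concrete, the following Python sketch checks whether a coalition induces a connected subgraph of $G$ via a simple depth-first search (the function name `is_feasible` and the edge-list representation are illustrative choices of ours, not part of the formal model):

```python
def is_feasible(coalition, edges):
    """A coalition is feasible iff it induces a connected subgraph of G."""
    coalition = set(coalition)
    if len(coalition) <= 1:
        return True
    # Adjacency restricted to the subgraph induced by the coalition.
    adj = {a: set() for a in coalition}
    for u, v in edges:
        if u in coalition and v in coalition:
            adj[u].add(v)
            adj[v].add(u)
    # Depth-first search from an arbitrary member.
    start = next(iter(coalition))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == coalition

# Triangle {A, B, C} with a pendant agent D attached to C.
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
print(is_feasible({"A", "B"}, edges))  # True: A and B are adjacent
print(is_feasible({"A", "D"}, edges))  # False: A and D only connect through C
```

Here $\{A,D\}$ is infeasible because the only path between $A$ and $D$ leaves the coalition.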
In the next section, we propose a novel representation of the GCCF problem based on the concept of edge contraction.
A general algorithm for GCCF {#sec:search}
============================
We now present a general algorithm to solve GCCF by showing that all feasible coalition structures induced by $G$ can be modelled as the nodes of a search tree in which each feasible coalition structure is represented only once. Specifically, we first detail how we use edge contractions to represent the GCCF problem and then we provide a depth-first approach to build and traverse the search tree to find the optimal solution.
Generating feasible coalition structures via edge contractions
--------------------------------------------------------------
In this section we show that each $CS\in\mathcal{CS}(G)$ can be represented by a corresponding graph $G_{CS}= (\mathcal{V},\mathcal{F})$, where $\mathcal{V}\subseteq 2^\mathcal{A}$ and $\mathcal{F}\subseteq \mathcal{V}\times\mathcal{V}$, i.e., each node $u\in \mathcal{V}$ represents a particular coalition. Notice that in the initial graph $G=(\mathcal{A},\mathcal{E})$ each vertex $u\in\mathcal{A}$ represents a single agent, and hence, $G$ can be seen as the representation of the feasible coalition structure formed by all the singletons.
In what follows, we will show that, for each $CS\in\mathcal{CS}(G)$, the corresponding $G_{CS}$ can be obtained as the contraction of a set of edges of $G$, and that each contraction of a set of edges of $G$ represents a feasible coalition structure $CS\in\mathcal{CS}(G)$. In more detail, let us define an *edge contraction* as follows.
Given a graph $G = (\mathcal{V},\mathcal{F})$, where $\mathcal{V}\subseteq 2^\mathcal{A}$ and $\mathcal{F}\subseteq \mathcal{V}\times\mathcal{V}$, and an edge $e=(u,v)\in\mathcal{F}$, the result of the contraction of $e$ is a graph $G'$ obtained by removing $e$ and the corresponding vertices $u$ and $v$, and adding a new vertex $w=u\cup v$. Moreover, each edge incident to either $u$ or $v$ in $G$ will become incident to $w$ in $G'$, merging the parallel edges (i.e., the edges that are incident to the same two vertices) that may result.
![Example of an edge contraction (the dashed edge is contracted).[]{data-label="fig:trianglecontraction"}](img/f1)
Intuitively, one edge contraction represents the merging of the coalitions associated with the incident vertices. Figure \[fig:trianglecontraction\] shows the contraction of the edge $\left(\left\{A\right\},\left\{C\right\}\right)$, which results in a new vertex $\left\{A,C\right\}$ connected to vertex $\left\{B\right\}$. Notice that edge contraction is a commutative operation (i.e., first contracting $e$ and then $e'$ results in the same graph as first contracting $e'$ and then $e$). Hence, we can define the contraction of a set of edges as the result of contracting each of the edges of the set in any given order.
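The contraction operation can be sketched in a few lines of Python. The representation choices below — vertices as frozensets of agents, edges as frozensets of two vertices, so that parallel edges merge automatically — are ours, for illustration only:

```python
def contract(vertices, edges, u, v):
    """Contract the edge between vertices u and v (each a frozenset of
    agents): remove u and v, add w = u ∪ v, re-wire incident edges to w,
    and merge the parallel edges that may result."""
    w = u | v
    new_vertices = (set(vertices) - {u, v}) | {w}
    new_edges = set()
    for edge in edges:
        if edge == frozenset((u, v)):
            continue  # the contracted edge disappears
        a, b = tuple(edge)
        a = w if a in (u, v) else a
        b = w if b in (u, v) else b
        new_edges.add(frozenset((a, b)))  # set semantics merge parallels
    return new_vertices, new_edges

# Triangle graph of Figure 1: contracting ({A},{C}) merges the parallel
# edges ({A},{B}) and ({C},{B}) into the single edge ({A,C},{B}).
A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
V = {A, B, C}
E = {frozenset((A, B)), frozenset((B, C)), frozenset((A, C))}
V2, E2 = contract(V, E, A, C)
print(V2)  # {frozenset({'A', 'C'}), frozenset({'B'})}
```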
Given a graph $G$, the graph $G'$ resulting from the contraction of any set of edges of $G$ represents a feasible coalition structure, where coalitions correspond to the vertices of $G'$.
![Example of a 2-coloured edge contraction (the dashed edge is contracted).[]{data-label="Fig:ContractionExample"}](img/f2)
Given a graph $G$, any feasible coalition structure $CS$ can be generated by contracting a set of edges of $G$.
Thus, a possible way of listing all feasible coalition structures is to list the contraction of every subset of edges of the initial graph. However, notice that the number of subsets of edges is larger than the number of feasible coalition structures over the graph. For example, in the triangle graph in Figure \[fig:trianglecontraction\]a, the number of subsets of edges is $2^{|\mathcal{E}|}=2^3=8$, but the number of feasible coalition structures is $5$ (i.e., $\left\{A\right\}\left\{B\right\}\left\{C\right\}$, $\left\{A,B\right\}\left\{C\right\}$, $\left\{A,C\right\}\left\{B\right\}$, $\left\{A\right\}\left\{B,C\right\}$ and $\left\{A,B,C\right\}$). This redundancy is due to the fact that the contraction of any two or three edges leads to the same coalition structure, i.e., the grand coalition $\mathcal{A}=\left\{A,B,C\right\}$. Thus, we need a way to avoid listing feasible coalition structures more than once. To avoid such redundancies, we mark each edge of the graph to keep track of the edges that have been contracted so far. Notice that there are only two different alternative actions for each edge: either we contract it, or we do not. If we decide to contract an edge, it will be removed from the graph in all the subtree rooted in the current node, but if we decide not to contract it, we have to mark such edge to make sure that we do not contract it in the future steps of the algorithm. To represent such marking, we will use the notion of *2-coloured graph*.
A 2-coloured graph $G_c = (\mathcal{V},\mathcal{F},c)$ is composed of a set of vertices $\mathcal{V}\subseteq 2^\mathcal{A}$ and a set of edges $\mathcal{F}\subseteq\mathcal{V}\times\mathcal{V}$, as well as a function $c:\mathcal{F}\rightarrow \{red, green\}$ that assigns a colour ($red$ or $green$) to each edge of the graph.
In our case, a red edge means that a previous decision not to contract that edge was made, while green edges can still be contracted. Figure \[Fig:ContractionExample\]a shows an example of a 2-coloured graph in which edge $\left(\left\{A\right\},\left\{D\right\}\right)$ is coloured in red (dotted line). Hence, in any subsequent step of the algorithm it is impossible to contract it. On the other hand, all other edges in such a graph can still be contracted. In a 2-coloured graph, we define a *green edge contraction* (e.g., of the dashed edge in Figure \[Fig:ContractionExample\]a) as follows.
\[def:gec\]Given a 2-coloured graph $G = (\mathcal{V},\mathcal{F},c)$ and a green edge $e\in\mathcal{F}$, the result of the contraction of $e$ is a new graph $G'$ obtained by performing the contraction of $e$ on $G$. Whenever two parallel edges are merged into a single one, the resulting edge is coloured in $red$ if at least one of them is red-coloured, and it is green-coloured otherwise.
The rationale behind marking parallel edges in this way is that, whenever we mark an edge $e=(u,v)$ to be $red$, we want the agents in $u$ and $v$ to be in separate coalitions, hence whenever we merge some edges with $e$ we must mark the new edge as $red$ to be sure that future edge contractions will not generate a coalition that contains both the agents corresponding to nodes $u$ and $v$. For example, note that in Figure \[Fig:ContractionExample\] the red edge $\left(\left\{A\right\},\left\{D\right\}\right)$ (dotted in the figure) and the green edge $\left(\left\{D\right\},\left\{C\right\}\right)$ are merged as a consequence of the contraction of edge $\left(\left\{A\right\},\left\{C\right\}\right)$, resulting in an edge $\left(\left\{D\right\},\left\{A,C\right\}\right)$ marked in red. In this way, we enforce that any possible contraction in the new graph will keep agents $A$ and $D$ in separate coalitions.
Having defined how we can use the edge contraction operation to generate feasible coalition structures, we now provide a way to generate the whole search space of feasible coalition structures.
Generating the entire search space
----------------------------------
Given the green edge contraction operation defined above, we can generate each feasible coalition structure only once. In more detail, at each point of the generation process, each red edge indicates that it has been discarded for contraction from that point onwards, and hence its vertices cannot be joined. Observe that the way we defined green edge contraction guarantees that the information in red edges is always preserved. Thus, given a 2-coloured graph, its children can be readily assessed as follows: for each edge in the graph, we generate the graph that results from contracting that edge. Moreover, we colour the selected edge in red so that it cannot be contracted again in subsequent edge contractions. Algorithm \[alg:VisitAllCoalitionStructures\] implements the depth-first[^3] generation and traversal of our search tree, in which each feasible coalition structure is evaluated by means of the characteristic function and compared with the best (i.e., the one with the highest value) coalition structure so far, hence computing the optimal solution.
$best \leftarrow G_c,\, F \leftarrow \emptyset$
$F.\textsc{push}(G_c)$
**while** $F \neq \emptyset$ **do**
  $node \leftarrow F.\textsc{pop}()$
  **if** $V(node) > V(best)$ **then** $best \leftarrow node$
  $F.\textsc{push}(\textsc{Children}\left(node\right))$
**return** $best$
$G' \gets G_c,\, Ch \gets \emptyset$
**for each** green edge $e$ of $G'$ **do** \[line:asd\]
  $Ch\gets Ch \cup \left\{\textsc{GreenEdgeContraction}\left(G',e\right)\right\}$
  Mark edge $e$ with colour $red$ in $G'$
**return** $Ch$
As an example, Figure \[Fig:SquareTree\] shows the search tree generated starting from a square graph, highlighting each generation step with labels on the edges. We now prove that Algorithm \[alg:VisitAllCoalitionStructures\] visits all feasible coalition structures and each of them is visited only once.
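The traversal can be sketched in Python as follows. This is an illustrative re-implementation (the data structures and names are ours, not the paper's code); on the triangle graph of Figure \[fig:trianglecontraction\] it visits exactly the five feasible coalition structures, each once:

```python
def green_contract(vertices, edges, e):
    """Contract green edge e; when parallel edges merge, red dominates."""
    u, v = tuple(e)
    w = u | v
    new_vertices = (set(vertices) - {u, v}) | {w}
    new_edges = {}
    for edge, colour in edges.items():
        if edge == e:
            continue
        a, b = tuple(edge)
        a = w if a in (u, v) else a
        b = w if b in (u, v) else b
        key = frozenset((a, b))
        prev = new_edges.get(key, "green")
        new_edges[key] = "red" if "red" in (prev, colour) else "green"
    return new_vertices, new_edges

def children(vertices, edges):
    """One child per green edge: contract it, then mark it red for the
    remaining siblings, so no coalition structure is generated twice."""
    edges = dict(edges)
    result = []
    for e in [e for e, c in edges.items() if c == "green"]:
        result.append(green_contract(vertices, edges, e))
        edges[e] = "red"
    return result

def all_structures(vertices, edges):
    """Depth-first traversal of the search tree (Algorithm 1 sketch)."""
    stack, visited = [(vertices, edges)], []
    while stack:
        v, e = stack.pop()
        visited.append(frozenset(v))  # the coalition structure at this node
        stack.extend(children(v, e))
    return visited

A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
structures = all_structures(
    {A, B, C},
    {frozenset((A, B)): "green",
     frozenset((B, C)): "green",
     frozenset((A, C)): "green"})
print(len(structures))  # 5 distinct structures, each visited exactly once
```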
\[prop:AllCompatible\] Given $G_c$, the tree generated by Algorithm \[alg:VisitAllCoalitionStructures\] rooted at $G_c$ contains all the coalition structures compatible with $G_c$, each appearing only once.
By induction on the number of green edges. Full proof is provided in the Online Appendix.
\[Cor:OneToOne\] The complexity of Algorithm \[alg:VisitAllCoalitionStructures\] is $O(\left\vert\mathcal{CS}(G)\right\vert\cdot\left\vert\mathcal{E}\right\vert)$.
There is a bijection between $\mathcal{CS}(G)$ and the nodes visited by Algorithm \[alg:VisitAllCoalitionStructures\], by direct application of Proposition \[prop:AllCompatible\] to $G$ with all green edges. The creation of each new node yields a <span style="font-variant:small-caps;">GreenEdgeContraction</span>$(G,e)$ operation, whose complexity is $O(\left\vert\mathcal{E}\right\vert)$ (Definition \[def:gec\]). Hence, the complexity of creating the search tree is $O(\left\vert\mathcal{CS}(G)\right\vert\cdot\left\vert\mathcal{E}\right\vert)$.[^4]
Notice that, even for sparse graphs, the number of feasible coalition structures can be very large, as, in general, the GCCF problem is NP-complete [@Voice2012b]. Hence, in the next section we propose a branch and bound technique that helps prune significant parts of the search space, allowing us to compute the optimal solution for any GCCF problem based on an $\mplusa$ function by generating only a minimal portion of the solution space (i.e., less than $0.32\%$ in our experiments in Section \[exp:bound\]).
In addition, such a bounding technique is employed in the approximate version of our approach, which can compute solutions with quality guarantees for large-scale systems. It is important to note that, in contrast with the optimal version, our approximate approach is not characterised by the above discussed exponential complexity, as the search for the solution is executed only for a given time budget (see Section \[subsec:anytimeprop\]).
A general branch and bound algorithm for m+a functions {#sec:cfss}
======================================================
We now describe CFSS (Coalition Formation for Sparse Synergies), our branch and bound approach to GCCF when applied to the family of $\mplusa$ characteristic functions.
\[def:supersub\] Given a graph $G$, a function $v:\mathcal{FC}(G)\to\mathbb{R}$ is superadditive (resp. subadditive) if the value of the union of disjoint coalitions is no less (resp. no greater) than the sum of the coalitions’ separate values, i.e., $v ( S \cup T ) \geq (\text{resp.} \leq)\, v (S) + v (T)$ for all $S,T\subseteq\mathcal{A}$ such that $S\cap T=\emptyset$.
We also define such properties for the function $V:\mathcal{CS}(G)\to\mathbb{R}$ defined in Equation \[eq:V\].
\[def:Vsupersub\] Given a graph $G$, a function $V:\mathcal{CS}(G)\to\mathbb{R}$ defined according to Equation \[eq:V\] is superadditive (resp. subadditive) if the underlying function $v:\mathcal{FC}(G)\to\mathbb{R}$ is superadditive (resp. subadditive).
\[def:ma\] Given a graph $G$, $V$$\colon$$\mathcal{CS}(G)$$\to$$\mathbb{R}$ is an $\mplusa$ function if it is the sum of a superadditive (i.e., monotonic increasing) function $V^+$$\colon$$\mathcal{CS}(G)$$\to$$\mathbb{R}$ and a subadditive (i.e., antimonotonic) function $V^-$$\colon$$\mathcal{CS}(G)$$\to$$\mathbb{R}$.
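The definitions above are easy to test exhaustively on a small agent set. The sketch below uses toy component functions chosen by us for illustration (they are not the paper's benchmark functions) and verifies that $v^+(C)=|C|^2$ is superadditive while $v^-(C)=-|C|^{1.5}$ is subadditive:

```python
from itertools import combinations

def coalitions(agents):
    """All non-empty subsets of the agent set."""
    return [frozenset(c)
            for r in range(1, len(agents) + 1)
            for c in combinations(agents, r)]

def is_superadditive(v, agents):
    """Check v(S ∪ T) >= v(S) + v(T) for all disjoint non-empty S, T."""
    cs = coalitions(agents)
    return all(v(s | t) >= v(s) + v(t)
               for s in cs for t in cs if not (s & t))

def is_subadditive(v, agents):
    """v is subadditive iff -v is superadditive."""
    return is_superadditive(lambda c: -v(c), agents)

agents = ["a1", "a2", "a3", "a4"]
v_plus = lambda c: len(c) ** 2          # superadditive component
v_minus = lambda c: -(len(c) ** 1.5)    # subadditive component
print(is_superadditive(v_plus, agents))   # True
print(is_subadditive(v_minus, agents))    # True
print(is_superadditive(v_minus, agents))  # False
```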
This family is interesting because it allows us to provide an upper bound that underlies our branch and bound strategy, so as to prune significant portions of the search space and have a computationally affordable solution algorithm. We provide a technique to compute an upper bound for the value assumed by the characteristic function in every coalition structure of the subtree $ST(CS_i)$ rooted at a given coalition structure $CS_i$. In order to explain how to compute such an upper bound, we first define the element $\overline{CS_i}$.
\[def:cshat\] Given a feasible coalition structure $CS_i$ represented by a 2-coloured graph $G_c$, we define $\overline{CS_i}$ as the coalition structure obtained by removing all red edges from $G_c$ and then contracting all the remaining green edges. Intuitively, $\overline{CS_i}$ represents the connected components in the graph after the removal of all red edges.
\[prop:4\] Given an $\mplusa$ function $V:\mathcal{CS}(G)\to\mathbb{R}$, then $M\left(CS_i\right) = V^-\left(CS_i\right) + V^+\left(\overline{CS_i}\right)$ is an upper bound for the value assumed by such function in every coalition structure of the subtree $ST(CS_i)$ rooted at $CS_i$, i.e., $$\label{eq:bound1}M\left(CS_i\right)= V^-\left(CS_i\right) + V^+\left(\overline{CS_i}\right)\geq \max \{V(CS_j) \mid CS_j\in ST(CS_i)\}.$$
$V^-\left(CS_i\right)$ (resp. $V^+\left(\overline{CS_i}\right)$) is an upper bound for the subadditive (resp. superadditive) component, hence $M\left(CS_i\right)$ is an upper bound for the characteristic function. Full proof is provided in the Online Appendix.
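The bound can be sketched as follows: $\overline{CS_i}$ is the set of connected components of the coalition graph restricted to green edges, and $M$ sums the subadditive component over $CS_i$ and the superadditive one over $\overline{CS_i}$. This is an illustrative union-find sketch with toy component functions, not the paper's implementation:

```python
def components(vertices, green_edges):
    """overline(CS_i): connected components after dropping red edges and
    contracting the remaining green ones (union-find over coalitions)."""
    parent = {v: v for v in vertices}

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    for u, v in green_edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in vertices:
        comps.setdefault(find(v), set()).update(v)
    return [frozenset(c) for c in comps.values()]

def upper_bound(cs, green_edges, v_plus, v_minus):
    """M(CS_i) = V^-(CS_i) + V^+(overline(CS_i))."""
    cs_bar = components(cs, green_edges)
    return sum(v_minus(c) for c in cs) + sum(v_plus(c) for c in cs_bar)

# Root of the triangle example: three singletons, all edges still green,
# so overline(CS_i) is the grand coalition.
A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
m = upper_bound([A, B, C], [(A, B), (B, C), (A, C)],
                v_plus=lambda c: len(c) ** 2,
                v_minus=lambda c: -(len(c) ** 1.5))
print(m)  # 3 * (-1) + 3**2 = 6.0, which overestimates every reachable V(CS)
```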
\[cor:edge\] Given $CS_i$ represented by a 2-coloured graph $G_c = (\mathcal{V}, \mathcal{F}, c)$, it is possible to compute a more precise upper bound for the edge sum with coordination cost function (see Section \[sec:clustering\]) by replacing $V^+\left(\overline{CS_i}\right)$ with $\sum_{{e\in \mathcal{F}:c(e)=green}}w^+(e).$
Building upon Theorem \[prop:4\], we can efficiently assess an upper bound for the value of the characteristic function in any subtree and prune it, if such a value is smaller than the value of the best solution found so far. Algorithm \[alg:branch-and-bound\] implements CFSS, our branch and bound approach to solve the GCCF problem.
$best \leftarrow G_c,\, F \leftarrow \emptyset$
$F.\textsc{push}(G_c)$
**while** $F \neq \emptyset$ **do**
  $node \leftarrow F.\textsc{pop}()$
  **if** $M(node) > V(best)$ **then** \[line:bound\]
    **if** $V(node) > V(best)$ **then** $best \leftarrow node$
    $F.\textsc{push}(\textsc{Children}\left(node\right))$
**return** $best$
We remark that Algorithm \[alg:branch-and-bound\] is correct and complete, i.e., it computes the optimal solution regardless of the order in which the children of the current node are visited, namely the operation of the <span style="font-variant:small-caps;">Children</span> function. However, such an order has a strong influence on the performance of CFSS (as shown in Section \[exp:order\]), since it can be used to compute an upper bound that better resembles the characteristic function (hence improving the effectiveness of the branch and bound pruning).
Edge ordering heuristic {#sec:edgeordering}
-----------------------
In this section we propose a heuristic to define a total ordering among the edges of a graph $G$, in order to guide the traversal of the search tree. This results in a significant speed-up of the algorithm, by means of an improvement of the upper bound. In particular, we notice that the value of $M\left(CS_i\right) = V^-\left(CS_i\right) + V^+\left(\overline{CS_i}\right)$ is heavily influenced by the value of $V^+\left(\overline{CS_i}\right)$. In fact, it is possible that $\overline{CS_i}=\{\mathcal{A}\}$ (i.e., the grand coalition), when $CS_i$ contains enough green edges to connect all the nodes of the graph $G$. This results in a poor bound, since $V^+$ is a superadditive function and it reaches its maximum value for $\mathcal{A}$.
On the other hand, if the red edges form a cut-set for the 2-coloured graph, the procedure in Definition \[def:cshat\] results in a coalition structure $\overline{CS_i}=\{C_1,C_2\}$, as Figure \[fig:cut\] shows. In this case, our bounding technique produces a *lower* upper bound $M\left(CS_i\right) = V^-\left(CS_i\right) + v^+\left(C_1\right) + v^+\left(C_2\right)$, since $v^+\left(\cdot\right)$ is superadditive and, therefore, $v^+\left(C_1\right) + v^+\left(C_2\right)\leq v^+\left(\mathcal{A}\right).$ Notice that having an upper bound that overestimates the characteristic function less is crucial for the performance of CFSS, as the condition at line \[line:bound\] in Algorithm \[alg:branch-and-bound\] is verified less often, hence allowing us to prune bigger portions of the search space. Moreover, it is easy to see that when the value of the characteristic function increases in a non-linear way with respect to the size of the coalitions (as for the functions we consider in this paper), the closer $C_1$ and $C_2$ are to a *bisection* of $\mathcal{A}$ (i.e., the closer $|C_1|$ and $|C_2|$ are to $\nicefrac{|\mathcal{A}|}{2}$), the more pronounced this improvement is.
A 2-coloured graph whose red edges form a cut-set: removing the red edges and contracting the remaining green ones yields $\overline{CS_i}=\{C_1,C_2\}$.[]{data-label="fig:cut"}
Following this observation, it is preferable to visit the edges that produce a cut of the graph in the first steps of the algorithm, since, once such edges are marked in red, they yield the improvement explained above. Hence, we define a total ordering among the edges of $G$, producing an *ordered* graph $G_o$ by means of Algorithm \[alg:order\]. Intuitively, this algorithm computes small[^5] cut-sets by means of the routine <span style="font-variant:small-caps;">Cut</span>$\left(G\right)$, which outputs the subgraphs $G_1=(\mathcal{V}_1,\mathcal{F}_1)$ and $G_2=(\mathcal{V}_2,\mathcal{F}_2)$ resulting from the cut, together with the cut-set $\mathcal{F}'$. Once the cut-set has been found, we label its edges as the first ones in the ordered graph, recursively applying this procedure to all the subgraphs resulting from each partitioning, until every edge has been ordered.
$i \gets 1,\,G_o \leftarrow G,\,Q \leftarrow \emptyset$
$Q.\textsc{push}(G)$
**while** $Q \neq \emptyset$ **do**
  $\langle G_1,G_2,\mathcal{F}' \rangle \leftarrow \textsc{Cut}\left(Q.\textsc{pop}()\right)$
  Label in $G_o$ each edge $\in \mathcal{F}'$ from $i$ to $i+|\mathcal{F}'|-1$
  $i\gets i+|\mathcal{F}'|$
  **if** $\mathcal{F}_1 \neq \emptyset$ **then** $Q.\textsc{push}(G_1)$
  **if** $\mathcal{F}_2 \neq \emptyset$ **then** $Q.\textsc{push}(G_2)$
**return** $G_o$
In the worst-case, Algorithm \[alg:order\] makes $\left\vert\mathcal{E}\right\vert$ calls to <span style="font-variant:small-caps;">Cut</span>, whose complexity is $O(\left\vert\mathcal{E}\right\vert)$ [@Karypis:1998:FHQ:305219.305248]. Hence, its worst-case complexity is $O(\left\vert\mathcal{E}\right\vert^2)$.
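The ordering procedure can be sketched in Python as follows. For simplicity, a naive balanced vertex split stands in for the proper graph-partitioning routine <span style="font-variant:small-caps;">Cut</span> (which the paper implements via a dedicated partitioning algorithm); the sketch is enough to show how cut-set edges are labelled first:

```python
def naive_cut(vertices, edges):
    """Toy stand-in for Cut(G): split the vertices into two balanced halves
    and return both induced subgraphs plus the cut-set between them."""
    vs = sorted(vertices)
    left = set(vs[:len(vs) // 2])
    right = set(vs[len(vs) // 2:])
    e_left = [e for e in edges if e[0] in left and e[1] in left]
    e_right = [e for e in edges if e[0] in right and e[1] in right]
    cut_set = [e for e in edges if (e[0] in left) != (e[1] in left)]
    return (left, e_left), (right, e_right), cut_set

def order_edges(vertices, edges):
    """Label cut-set edges first, then recurse on both sides
    (Algorithm 3 sketch)."""
    order, queue = [], [(set(vertices), list(edges))]
    while queue:
        v, e = queue.pop()
        if not e:
            continue
        (v1, e1), (v2, e2), cut_set = naive_cut(v, e)
        order.extend(cut_set)  # these edges get the next labels i, i+1, ...
        queue.append((v1, e1))
        queue.append((v2, e2))
    return order

# Square (cycle) graph A-B-C-D-A: the two edges crossing the cut
# {A,B} | {C,D} are ordered first.
square = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")]
order = order_edges({"A", "B", "C", "D"}, square)
print(order)
```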
In addition to this edge ordering heuristic, our bounding technique can be employed to provide anytime approximate solutions, as shown in the next section.
Anytime approximate properties {#subsec:anytimeprop}
------------------------------
Theorem \[prop:4\] can be directly applied to compute an overall bound of an $\mplusa$ function, with anytime properties. More precisely, let us consider frontier $F$ in Algorithm \[alg:branch-and-bound\]. When we expand frontier $F$ (Line 9) we keep track of the highest value of $V(\cdot)$ in the visited nodes. Hence, given a frontier $F$, the bound $B(F)$ is defined as $$\label{eq:bound}
B(F)=\max\{V\left(best\right),\max_{CS \in F} M(CS)\}$$ Thus, $B(F)$ is the maximum between the values assumed by $V(\cdot)$ inside the frontier (i.e., $V\left(best\right)$) and an estimated upper bound outside of it (i.e., $\max_{CS \in F} M\left(CS\right)$). Notice that since each $M\left(CS\right)$ is an overestimation of the value of $V(\cdot)$ in the corresponding subtree, such a maximisation provides a valid upper bound for $V(\cdot)$ in the portion of search space not visited yet. Furthermore, the quality of $B(F)$ can only be improved by expanding frontier $F$. More formally, if $F'$ is such an expansion, then $$\label{eq:inequality}
B\left(F\right) \geq B\left(F'\right) \geq \max \{V(CS)\mid CS \in \mathcal{CS}\left(G\right)\}.$$ This can be easily verified using the definition of $M(\cdot)$. In fact, each bound resulting from the children of an expanded node $u\in F$ must be less than or equal to $M(u)$ and, hence, Inequality \[eq:inequality\] holds. Intuitively, the larger the portion of the search space explored, the better the bound provided. Finally, notice that the fastest way to compute a bound for $V(\cdot)$ is to consider a frontier formed exclusively by the root (i.e., the coalition structure formed by all singletons). Assessing this bound has the same time complexity as computing $M$, i.e., $O(|\mathcal{E}|)$, and its quality can be satisfactory, as shown in Section \[sec:anytime\].
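As a small sketch (the function name and inputs are illustrative), the anytime bound maximises over the best value found inside the frontier and the node bounds $M(\cdot)$ on the frontier:

```python
def anytime_bound(best_value, frontier_bounds):
    """B(F) = max(V(best), max_{CS in F} M(CS)); with an empty frontier
    the search is complete and the bound collapses to the best value."""
    return max([best_value] + list(frontier_bounds))

# Example: the best solution found so far has value 3.8, and two
# unexplored subtrees have bounds M = 6.0 and 2.0, so B(F) = 6.0.
print(anytime_bound(3.8, [6.0, 2.0]))  # 6.0
print(anytime_bound(3.8, []))          # 3.8
```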
After the discussion of our branch and bound algorithm for $\mplusa$ functions, in the next section we discuss some scenarios in which GCCF can be applied, and, in particular, we present three $\mplusa$ functions that will be used to evaluate our approach.
Applications for GCCF {#sec:6}
=====================
As previously discussed, GCCF is a well known model in cooperative game theory that can be applied to several realistic scenarios. In what follows, we focus on two real-world scenarios, namely social ridesharing and collective energy purchasing, that can be modelled as GCCF problems.
In the ridesharing domain, early work adopted a heuristic approach in order to increase the potential passenger coverage of a fleet of taxis, while decreasing the total travel mileage of the system. Later on, other authors tackled the optimisation problem of arranging one-time shared rides among a set of commuters connected through a social network, with the objective of minimising the overall travel cost. In contrast with these approaches, subsequent work explicitly considers coalitions, showing that such a scenario can be modelled as a GCCF problem where the set of feasible coalitions is restricted by the social network. Intuitively, each group of agents that travel in the same car is mapped to a feasible coalition, whose coalitional value is defined as the total travel cost of that particular car, i.e., the cost of driving through its passengers’ pick-up and destination points. The adoption of the GCCF model in this scenario leads to a cost reduction of up to $36.22\%$ when applied to realistic datasets for both spatial and social data.
In the *collective energy purchasing* scenario [@vinyals-ENERGYCON-12], each agent is characterised by an energy consumption profile that represents its energy consumption throughout a day. A profile records the energy consumption of a household at fixed intervals (every half hour in our case). The characteristic function of a coalition of agents is the total cost that the group would incur if they bought energy as a collective in two different markets: the spot market, a short-term market (e.g., half-hourly, hourly) intended for small amounts of energy; and the forward market, a long-term one in which larger amounts of energy (spanning weeks and months) can be bought at cheaper prices [@vinyals-ENERGYCON-12]. In the *edge sum with coordination cost* scenario, every edge is associated with a value that represents how well (or how badly) those agents perform together, or the cost of completing a coordination task in a robotic environment [@Dasgupta:2012:DRM:2343576.2343593]. In the *coalition size with distance cost* scenario, the formation of coalitions favours bigger groups while maximising the similarity of opinions among their members. Such an application could be employed to cluster public opinion, or to detect the presence of “virtual coalitions” among members of a parliament based on their recorded votes (e.g., the votes by the Democratic and the Republican parties).
In addition to such practical motivations, these three scenarios are particularly interesting as they are modelled by characteristic functions (Equation \[eq:energym+a\]) part of a large family of functions, i.e., $\mplusa$ functions. In what follows, we discuss the properties of such functions, showing how they can be exploited to significantly speed-up the solution of the associated GCCF problem (see Section \[sec:cfss\]).
Benchmark m+a functions {#sec:functions}
-----------------------
We now present three benchmark functions for GCCF, namely the *collective energy purchasing* function, the *edge sum with coordination cost* function and the *coalition size with distance cost* function. In particular, we are interested in their characterisation as $\mplusa$ functions, showing that each can be seen as the sum of a superadditive part and a subadditive part [@owen1995game]. Such characteristic functions are particularly interesting as they enable an efficient bounding technique to prune part of the search space during the execution of our branch and bound algorithm, presented in Section \[sec:cfss\].
### Collective energy purchasing {#sec:energy}
In the collective energy purchasing scenario, proposed the characteristic function $$v\left(C\right) = \underbrace{\sum\nolimits^T_{t=1} q^t_{S}\left(C\right) \cdot p_{S} + T\cdot q_{F}\left(C\right) \cdot p_{F}}_{energy\left(C\right)} + \kappa\left(C\right),$$ where $T=48$ is the number of energy measurements in each profile, $p_S\in\mathbb{R}^-$ and $p_F\in\mathbb{R}^-$ represent the unit prices of energy in the spot and forward market respectively, $q_{F}:\mathcal{FC}(G)\to\mathbb{R}^-$ stands for the time unit amount of electricity to buy in the forward market and $q^t_{S}:\mathcal{FC}(G)\to\mathbb{R}^-$ for the amount to buy in the spot market at time slot $t$.[^6] $energy:\mathcal{FC}(G)\to\mathbb{R}^-$ represents the total energy cost.
Finally, $\kappa:\mathcal{FC}(G)\to\mathbb{R}^-$ stands for a coalition management cost that depends on the size of the coalition and captures the intuition that larger coalitions are harder to manage. The definition of this cost depends on several low level issues (e.g., the capacity of the power networks connecting the customers in the groups, legal fees, and other costs associated to group contracts, etc.), hence a precise definition of this term goes beyond the scope of this paper. Following we use $\kappa(C)= -\vert C \vert^\gamma$ with $\gamma > 1$ to introduce a non-linear element that penalises the formation of larger coalitions. Hence, the *collective energy purchasing* function is defined as $$\label{eq:energym+a}
V(CS) = \underbrace{\sum\nolimits_{C\in CS}\left[\sum\nolimits^T_{t=1} q^t_{S}\left(C\right) \cdot p_{S} + T\cdot q_{F}\left(C\right) \cdot p_{F}\right]}_{V^+\left(CS\right)} + \underbrace{\sum\nolimits_{C\in CS}\kappa\left(C\right)}_{V^-\left(CS\right)}.$$
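As an illustration, the characteristic function above can be transcribed in Python. Note that the model does not prescribe how $q_F$ and $q_S^t$ are derived from the aggregated profiles, so the split rule below (the forward market covers the coalition's minimum aggregated demand, the spot market buys the per-slot residual) is an assumption for illustration only; prices and $\gamma$ follow the values used in the experimental section, and quantities are kept positive with negative unit prices encoding costs.

```python
# Illustrative sketch of v(C) for collective energy purchasing.
# Assumption: the forward market covers the coalition's minimum aggregated
# demand over the T slots and the spot market buys the per-slot residual;
# the paper does not fix this split. p_S, p_F and gamma are the values
# used in the experimental section.

P_S, P_F = -80.0, -70.0   # unit prices (negative: payments, i.e., costs)
GAMMA = 1.3               # exponent of the coalition management cost

def v(coalition, profiles, T=48):
    """Coalitional value: energy(C) plus management cost kappa(C)."""
    agg = [sum(profiles[i][t] for i in coalition) for t in range(T)]
    q_f = min(agg)                          # time-unit forward purchase
    spot = sum(q - q_f for q in agg)        # residual bought on the spot
    energy = spot * P_S + T * q_f * P_F     # both terms are negative costs
    kappa = -len(coalition) ** GAMMA        # subadditive management cost
    return energy + kappa
```

For constant profiles the forward market covers everything, so the value reduces to $T\cdot q_{F}\left(C\right)\cdot p_{F} + \kappa\left(C\right)$.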
\[prop:energyma\] The collective energy purchasing function is $\mplusa$.
The cost of the energy necessary to fulfil the aggregated consumption profiles of the coalitions, i.e., $V^+\left(CS\right)$, is clearly superadditive, while the sum of the coalition management costs, i.e., $V^-\left(CS\right)$, is subadditive, as they increase when coalition sizes increase. Full proof is provided in the Online Appendix.
### Edge sum with coordination cost {#sec:clustering}
In the *edge sum with coordination cost* function every edge of $G$ is mapped to a real value by a function $w:\mathcal{E}\rightarrow\mathbb{R}$ [@deng1994]. Each coalitional value is the sum of the weights of the edges among its members. In order to have a better description of the management and communication costs in larger coalitions, we also introduce a penalising factor $\kappa\left(C\right)$,[^7] with the same definition given in the previous section. Hence, we define this function as $$\label{eq:cluster1}
v\left(C\right)=\sum\nolimits_{e\in edges\left(C\right)}w(e) + \kappa\left(C\right),$$ where the function $edges:\mathcal{FC}(G)\to 2^\mathcal{E}$ provides the set of all the edges connecting any two members of a given coalition $C$, i.e., $edges\left(C\right)=\left\{(v_1,v_2)\in \mathcal{E}\mid v_1\in C \text{ and } v_2 \in C\right\}$. In order to characterise this scenario with an $\mplusa$ function, we rewrite Equation \[eq:cluster1\] as $$v\left(C\right)=\sum\nolimits_{e\in edges\left(C\right)}\left[w^+(e)+w^-(e)\right] + \kappa\left(C\right),$$ $$\text{where}\quad\quad\begin{array}{l l}{\!\!\!w^+(e) = \left\{
\begin{array}{l l}
\!\!w(e), &\text{if $w(e)\geq0$},\\
\!\!0, &\text{otherwise,}
\end{array}\right.}&{\ w^-(e) = \left\{
\begin{array}{l l}
\!\!w(e), &\text{if $w(e)<0$},\\
\!\!0, & \text{otherwise.}
\end{array}\right.}\end{array}$$ In other words, $\sum\nolimits_{e\in edges\left(C\right)} w^+(e)$ represents the sum of all the positive weights of the edges in $edges\left(C\right)$, while $\sum\nolimits_{e\in edges\left(C\right)} w^-(e)$ represents the sum of the negative ones. The *edge sum with coordination cost* function is then defined as $$V\left(CS\right) = \underbrace{\sum\nolimits_{C\in CS}\left[\sum\nolimits_{e\in edges\left(C\right)}w^+(e)\right]}_{V^+\left(CS\right)}+\underbrace{\sum\nolimits_{C\in CS}\left[\sum\nolimits_{e\in edges\left(C\right)}w^-(e) + \kappa\left(C\right)\right]}_{V^-\left(CS\right)}.$$
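To make the decomposition concrete, the following Python sketch (an illustration, not the paper's C implementation) evaluates $V^+$ and $V^-$ for a coalition structure, given edge weights keyed by unordered node pairs:

```python
# Sketch of the edge sum with coordination cost decomposition: positive
# edge weights feed the superadditive part V+, negative weights plus the
# management cost kappa feed the subadditive part V-. The weight map and
# gamma = 1.3 are illustrative inputs.

GAMMA = 1.3

def split_value(CS, weights):
    """Return (V_plus, V_minus) for a coalition structure CS.

    `weights` maps frozenset({u, v}) -> w(e) for every edge of G; an
    edge contributes only if both endpoints lie in the same coalition.
    """
    v_plus = v_minus = 0.0
    for C in CS:
        for edge, w in weights.items():
            if edge <= C:                   # edge internal to coalition C
                if w >= 0:
                    v_plus += w
                else:
                    v_minus += w
        v_minus -= len(C) ** GAMMA          # coordination cost kappa(C)
    return v_plus, v_minus
```

Splitting the structure into singletons zeroes both edge sums, leaving only the (small) singleton management costs in $V^-$.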
\[prop:edgema\] The edge sum with coordination cost function is $\mplusa$.
It is easy to verify that $V^+\left(CS\right)$, i.e., the sum of all positive edges, is superadditive, while the sum of the negative ones, i.e., $V^-\left(CS\right)$, is subadditive. Full proof is provided in the Online Appendix.
### Coalition size with distance cost {#sec:coalsize}
The *coalition size with distance cost* can be modelled evaluating each coalition $C$ with the function $$\label{eq:coalsize}
v\left(C\right)=|C|^\alpha - \sum\nolimits_{\left(i,j\right)\in C \times C}d\left(i,j\right),$$
where $\alpha\ge 1$, and $d: \mathcal{A}\times\mathcal{A} \rightarrow \mathbb{R}^+$ is a function that measures the distance between the opinions of agent $i$ and agent $j$. From Equation \[eq:coalsize\] it follows that the input of our problem has size $N^2$, where $N$ is the total number of agents, since we must know the distances between each pair of agents. The *coalition size with distance cost* function of a coalition structure $CS$ is then defined as $$V\left(CS\right) = \underbrace{\sum\nolimits_{C\in CS}|C|^\alpha}_{V^+\left(CS\right)}+\underbrace{\sum\nolimits_{C\in CS}\left[- \sum\nolimits_{\left(i,j\right)\in C \times C}d\left(i,j\right)\right]}_{V^-\left(CS\right)}.$$
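A direct Python transcription of this coalitional value (illustrative; $\alpha=2.2$ as in the later experiments, and the double sum over $C\times C$ counts each unordered pair twice, exactly as in Equation \[eq:coalsize\]):

```python
# Sketch of the coalition size with distance cost function. The distance
# matrix is an illustrative input with d[i][i] = 0; alpha = 2.2 matches
# the value used in the experimental evaluation.

ALPHA = 2.2

def v_size_dist(C, d):
    """|C|^alpha minus the summed pairwise opinion distances over C x C."""
    members = sorted(C)
    dist = sum(d[i][j] for i in members for j in members)
    return len(C) ** ALPHA - dist
```

Merging two coalitions adds the cross distances between their members to the subtracted term, which is exactly the subadditivity argument used in the proof above.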
\[prop:coalma\] The coalition size with distance cost function is $\mplusa$.
On the one hand, it is easy to verify that $v^+(C)=|C|^\alpha$ is a superadditive function, assuming that $\alpha\ge 1$. On the other hand, $v^-(C)=-\sum_{\left(i,j\right)\in C \times C}d\left(i,j\right)$ is subadditive, since $v^-(C_1\cup C_2)=v^-(C_1)+v^-(C_2)- \sum_{i\in C_1,j\in C_2} d\left(i,j\right)\leq v^-(C_1)+v^-(C_2)$.
These functions will be used in our experimental evaluation in the next section.
Empirical evaluation {#sec:exp}
====================
The main goals of our empirical evaluation of CFSS are:
1. To evaluate its runtime performance with respect to DyCE considering a variety of graphs, both realistic (i.e., subgraphs of the Twitter network) and synthetic (i.e., scale-free networks). Additional experiments on community networks and a detailed discussion on these network topologies are in the Online Appendix.
2. To evaluate the effectiveness of our bounding technique.
3. To evaluate the anytime performance and guarantees that our approach can provide when scaling to very large numbers of agents (i.e., more than $2700$).
4. To compare the quality of our approximate solutions with the ones computed by C-Link [@eps351521] on large-scale instances.
5. To evaluate the speed-up that can be obtained by using multi-core machines.
6. To evaluate the speed-up produced by our edge ordering heuristic.
Following , we consider scale-free networks generated with the Barabási-Albert model with $m\in\left\{1,2,3\right\}$. This parameter determines the sparsity of the graph, as every newly added node is connected, on average, to $m$ existing nodes. It is easy to verify that the average degree of a scale-free network is $\sim 2\cdot m$. We compare our approach with DyCE in our three reference domains, measuring the runtime in seconds. In our characteristic functions we use the following parameters:
- Following , in the *collective energy purchasing* function we set $p_S$$=$$-80$ and $p_F$$=$$-70$. The consumption data is provided by a realistic dataset, comprising the measurements collected over a month from $2732$ households in the UK.
- In the *edge sum with coordination cost* function we assigned a uniformly distributed random weight within $\left[-10,10\right]$ to each edge.
- Following , in both the above scenarios we considered $\gamma=1.3$.
- In the *coalition size with distance cost* function we assigned a uniformly distributed random value within $\left[0,100\right]$ to each distance between a pair of different agents (with $d(i,i)=0$), and we considered $\alpha=2.2$, motivated by the remarks in Section \[sec:anytime\].
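Returning to the network generation step: a minimal stdlib-only sketch of the Barabási-Albert model (an illustration; the experiments may rely on a standard library implementation) also makes the $\sim 2\cdot m$ average degree easy to check, since each of the $n-m$ added nodes contributes $m$ edges, giving average degree $2m(n-m)/n \approx 2m$.

```python
import random

def barabasi_albert(n, m, seed=0):
    """Edge list of an n-node Barabasi-Albert scale-free network.

    Each new node attaches to m distinct existing nodes chosen with
    probability proportional to their degree (preferential attachment,
    implemented by sampling from a list where each node appears once
    per incident edge). Minimal illustrative sketch.
    """
    rng = random.Random(seed)
    edges, repeated = [], []
    targets = list(range(m))            # the first new node links the m seeds
    for new in range(m, n):
        edges.extend((new, t) for t in targets)
        repeated.extend(targets)        # endpoints gain one degree each
        repeated.extend([new] * m)
        targets = set()
        while len(targets) < m:         # degree-biased, distinct targets
            targets.add(rng.choice(repeated))
        targets = list(targets)
    return edges
```

With $n=200$ and $m=2$ this yields $2\cdot 198=396$ edges, i.e., an average degree of $3.96 \approx 2\cdot m$.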
We conducted an additional set of experiments in which the graph $G$ is a subgraph of a large crawl of the Twitter social graph. Specifically, this dataset is a graph with 41.6 million nodes and 1.4 billion edges published as part of the work by . We obtain $G$ by means of a standard algorithm [@russell2013mining] to extract a subgraph from a larger graph, i.e., a breadth-first traversal starting from a random node of the whole graph, adding each node and the corresponding arcs to $G$, until the desired number of nodes is reached.
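The extraction procedure can be sketched as follows (illustrative; `adj` is an adjacency map, and the traversal stops once the desired number of nodes has been collected):

```python
from collections import deque

def bfs_subgraph(adj, start, size):
    """Breadth-first extraction of a `size`-node subgraph.

    Nodes are collected in BFS order from `start`; the returned edge set
    contains every edge of the original graph whose endpoints were both
    kept, mirroring the sampling procedure described above.
    """
    keep, queue = {start}, deque([start])
    while queue and len(keep) < size:
        u = queue.popleft()
        for v in adj[u]:
            if v not in keep:
                keep.add(v)
                queue.append(v)
                if len(keep) == size:
                    break
    edges = {frozenset((u, v)) for u in keep for v in adj[u] if v in keep}
    return keep, edges
```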
Moreover, we implemented a multi-threaded version of CFSS, namely P-CFSS (i.e., Parallel CFSS), and we analysed the speed-up of P-CFSS using Amdahl’s law [@amdahl], as it provides the maximum theoretical speed-up that can be achieved. All our results refer to the average value over 20 repetitions for each experiment. CFSS[^8] and C-Link are implemented in C, while we used the DyCE implementation provided by its authors. We run our tests on a machine with a 3.40GHz CPU and 32 GB of memory.
DyCE vs CFSS: runtime comparison {#sec:dyceexp}
--------------------------------
In our experiments using scale-free networks, CFSS outperforms DyCE when coalition values are shaped by the above-described benchmark functions (as shown in Figures \[fig:6\]a, \[fig:6\]b and \[fig:6\]c). Specifically, for the *edge sum with coordination cost* function, CFSS outperforms DyCE by 4 orders of magnitude on networks with average connectivity (i.e., for $m=2$), and by 3 orders of magnitude on networks with higher connectivity (i.e., for $m=3$). This is most probably because the upper bound we adopt in this case closely resembles the characteristic function, allowing us to prune significant portions of the search space (see Section \[exp:bound\] for a more detailed discussion). In the *collective energy purchasing* scenario with 30 agents and $m=2$, CFSS is 4.7 times faster than DyCE, and it is at least 2 orders of magnitude faster for $m=1$. However, DyCE is significantly faster (44 times) than CFSS for $m=3$. The adoption of the *coalition size with distance cost* function produces a similar behaviour, with a performance improvement for our method. In fact, CFSS is 17 times faster than DyCE for $m=2$, and only 3 times slower for $m=3$. On the other hand, the runtime of DyCE matches the previous case, since this approach is not sensitive to the values of the characteristic function. In our tests using subgraphs of the Twitter network, CFSS is at least four orders of magnitude faster than DyCE when solving instances with 30 agents (the biggest instances that DyCE can solve), and it can scale up to 45 agents. These results confirm the very good performance of CFSS when considering sparse networks. In fact, the average degree of these subgraphs is comparable with that of a scale-free network with $1<m<2$.
In all our tests, we increased the number of agents until the execution time reached $10^5$ seconds. Notice that, in general, DyCE cannot scale over 30 agents (due to its exponential memory requirements), while CFSS does not have such limitation, hence it is possible to reach instances with thousands of agents, as shown in Section \[sec:anytime\].
![Runtime to compute the optimal solution.[]{data-label="fig:6"}](img/f6)
Bounding technique effectiveness {#exp:bound}
--------------------------------
Here we compare the number of configurations explored by CFSS w.r.t. the entire search space, i.e., the one explored by Algorithm \[alg:VisitAllCoalitionStructures\], to measure the number of search nodes pruned by our bounding technique. We consider $n=30$, adopting scale-free networks with $m=2$. When the coalitional values are provided by the *collective energy purchasing* function, CFSS can compute the optimal solution exploring a number of configurations which is, on average, 0.32% of the entire search space. We measured a similar value in the *coalition size with distance cost* scenario (i.e., 0.28%). In the *edge sum with coordination cost* scenario (which allows a more precise upper bound, as explained in Remark \[cor:edge\]), only 0.0045% of the entire search space is explored.
Edge ordering heuristic {#exp:order}
-----------------------
The table below shows the speed-up obtained by using the ordering heuristic described in Section \[sec:edgeordering\], considering the *collective energy purchasing* and the *coalition size with distance cost* functions. Even though our heuristic is also applicable in the *edge sum with coordination cost* scenario, that function has not been included in this analysis since, as stated in Remark \[cor:edge\], it allows an ad-hoc bounding method that is more effective than the general one. Our experiments show a clear benefit from the adoption of this heuristic, with a maximum performance gain of 843% in the first scenario and 338% in the second one, and an average speed-up of 295% across both domains.
  [**Characteristic function**]{}     [**Minimum**]{}   [**Average**]{}   [**Maximum**]{}
  ----------------------------------- ----------------- ----------------- -----------------
  Collective energy purchasing        $176\%$           $367\%$           $843\%$
  Coalition size with distance cost   $136\%$           $222\%$           $338\%$
Anytime approximate performance {#sec:anytime}
-------------------------------
We evaluate the performance of the approximate version of CFSS on instances with thousands of agents considering the *Performance Ratio* (PR) [@ausiello2012complexity], a standard measure to evaluate approximate algorithms defined as the ratio between the approximate solution and the optimal one on a given instance $I$. As computing the optimal solution for such large instances is not possible, we define the *Maximum Performance Ratio* (MPR) as the ratio between the approximate solution and the upper bound on the optimal solution defined in Equation \[eq:bound\].
\[def:mpr\] Given an instance $I$, an approximate solution $Approx(I)$ and an upper bound on the optimal solution as $Bound(I)$, we define the Maximum Performance Ratio $MPR(I)=\max\left(\frac{Approx(I)}{Bound(I)},\frac{Bound(I)}{Approx(I)}\right)$.
$MPR(I)$ represents an upper bound of the PR on the instance $I$. The MPR provides an important quality guarantee on the approximate solution $Approx(I)$, since $Approx(I)$ cannot be worse than the optimal solution by more than a factor of $MPR(I)$.
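When $Approx(I)$ and $Bound(I)$ share the same sign (as with the negative costs of the energy domain considered next), exactly one of the two ratios is at least 1, and the outer $\max$ simply selects it. A one-line sketch:

```python
# Maximum Performance Ratio of the definition above. Assumes Approx(I)
# and Bound(I) are both nonzero and of the same sign (negative costs in
# the energy domain), so exactly one of the two ratios is >= 1.

def mpr(approx, bound):
    return max(approx / bound, bound / approx)
```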
### Collective energy purchasing {#collective-energy-purchasing}
Figure \[fig:7\]a shows the value of the MPR in the *collective energy purchasing* scenario, using $n\in\left\{100,500,1000,1500,2000,2732\right\}$, adopting scale-free networks with $m=4$ and Twitter subgraphs as network topologies, and considering a time budget of 100 seconds. Other values for $m$ show a similar behaviour (not reported here). We plot the average and the standard error of the mean over 20 repetitions. It is clear that the network topology does not impact the quality guarantees of our approach, hence we only adopt scale-free networks in the following experiments. In contrast, the MPR is heavily influenced by the nature of the characteristic function, as clarified later in this section. In addition, the results show that, for 100 agents, the provided bound is only 4.7% higher than the solution found within the time limit, reaching a maximum of +11.65% when the entire dataset is considered, i.e., with 2732 agents. This slight degradation is due to the fact that, for bigger instances, it is possible to explore a smaller part of the search space in the considered time budget, leaving a bigger portion to the estimation of the bound. Nonetheless, in this experiment CFSS provides an MPR of at most 1.12 and thus solutions that are at least 88% of the optimal. This confirms the effectiveness of this bounding technique when applied to the energy domain, which allows us to provide solutions and quality guarantees for problems involving a very large number of agents. In our tests, the bound is assessed at the root, without any frontier expansion, so it can be computed almost instantly, thus devoting all the available runtime to the search for a solution. This choice is further motivated by the fact that, in this scenario, the bound improves by a negligible amount in the first levels of the search tree, due to the particular definition of the characteristic function.
More precisely, if we consider a frontier formed by the children of the root, in each of them the bound of $V^-(\cdot)$ will improve by $2^\gamma-2\approx0.46$ (i.e., the difference between the coalition management cost of the new coalition and the ones of the two merged singletons). On the other hand, the bound of $V^+(\cdot)$ will remain constant: in fact, since we are taking the maximum (i.e., the worst) bound at the frontier (as shown in Equation \[eq:bound\]), the result of this maximisation will still be equal to $v^+(\mathcal{A})$, because in at least one of the children nodes the computation of $\overline{CS}$ will result in joining all the agents together. In this case, it is not worth expanding the frontier from the root, since the gain would be insignificant w.r.t. the additional computational cost.
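For intuition, the root bound used here follows directly from the $\mplusa$ decomposition: superadditivity makes the grand coalition maximise $V^+$, while subadditivity makes the all-singletons structure maximise $V^-$. A hedged sketch of this root case only (the frontier-based bound of Equation \[eq:bound\] generalises it):

```python
# Root bound for an m+a function: V+ is maximised by the grand coalition
# and V- by the singleton structure, so their sum bounds every feasible
# coalition structure. v_plus and v_minus are the per-coalition parts.

def root_bound(agents, v_plus, v_minus):
    grand = frozenset(agents)
    return v_plus(grand) + sum(v_minus(frozenset([i])) for i in agents)
```

For the coalition size with distance cost function this gives exactly $V^+(\mathcal{A})=N^\alpha$, since singleton distances are zero.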
![Maximum Performance Ratio (MPR) in the considered domains.[]{data-label="fig:7"}](img/f7)
### Edge sum with coordination cost {#edge-sum-with-coordination-cost}
We further evaluate the scalability of our approach by considering Twitter subgraphs as network topologies, and the *edge sum with coordination cost* function, which allows us to generate coalitional values for instances with any number of agents. Such a function can be either positive or negative (in contrast with the *collective energy purchasing* one, which is always negative, since it represents a cost). Hence, it is possible that $Approx(I)$ is negative and $Bound(I)$ is positive, resulting in a *negative* MPR. In order to avoid this unreasonable behaviour, here we consider $MPR(I)=\frac{Bound(I)-LB(I)}{Approx(I)-LB(I)}$, where $LB(I)$ is a lower bound on the characteristic function considering the instance $I$. Notice that it is always possible to compute $LB(I)$ for the *edge sum with coordination cost* function as $LB(I) = V^-(\mathcal{A})$.
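The adjusted ratio can be sketched as follows (illustrative; since $LB(I) \le Approx(I) \le Bound(I)$, both numerator and denominator are positive and the ratio is at least 1 even when $Approx(I)$ and $Bound(I)$ have opposite signs):

```python
# Lower-bound-adjusted MPR for functions whose values can change sign.
# LB(I) <= Approx(I) <= Bound(I) makes both differences positive, so
# the ratio stays >= 1 even when Approx(I) is negative and Bound(I)
# is positive.

def mpr_adjusted(approx, bound, lb):
    return (bound - lb) / (approx - lb)
```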
Figure \[fig:7\]b shows that, on our machine, CFSS can scale up to instances with 30000 agents, providing solutions with an MPR of 1.127 (at least 88% of the optimal).
### Coalition size with distance cost {#coalition-size-with-distance-cost}
The MPR exhibits a different behaviour when considering the *coalition size with distance cost* function, being heavily influenced by the value of the $\alpha$ exponent. Figure \[fig:7\]c shows how the MPR varies significantly with respect to $\alpha \in \left[2,3\right]$, growing to 41825.6 for $\alpha=2.4$ and then falling to 1.13 for $\alpha=2.7$, tending to 1 as the exponent increases. This behaviour can be explained by reasoning about the structure of the characteristic function. Up to $\alpha=2.4$, the subadditive component (i.e., $-\sum_{C\in CS} \sum_{\left(i,j\right)\in C \times C}d\left(i,j\right)$) dominates the superadditive one (i.e., $\sum_{C\in CS}|C|^\alpha$), hence the search for a solution is not able to find any coalition structure better than the initial one (i.e., the coalition structure with all singletons, which is probably the optimal one). Nonetheless, the MPR keeps growing when we increase $\alpha$, since it equals $\frac{N^\alpha}{N}=N^{\alpha-1}$, i.e., the bound computed at the root (i.e., $V^+(\mathcal{A})=N^\alpha$) divided by the value of the initial solution (i.e., $N$). On the other hand, when $\alpha$ is sufficiently large (i.e., for $\alpha=2.5$), this behaviour is inverted, because $V^+(\cdot)$ has a greater impact and the entire characteristic function tends to become superadditive. Thus, coalition structures closer to the grand coalition represent good solutions, which explains why the MPR tends to 1 when we increase $\alpha$. These remarks motivate us to study the impact of $\alpha$ also on the optimal algorithm. Figure \[fig:alpha\] displays the runtime needed to find the optimal solution on random instances with 25 agents on scale-free networks with $m=2$, showing that the performance of CFSS decreases when we increase $\alpha$ from 2 to 3.
The value of the bound provided by Equation \[eq:bound1\] is larger when $\alpha$ grows, hence its quality decreases, producing a less effective bounding technique and, thus, a higher runtime. To summarise, the adoption of a bigger $\alpha$ in the *coalition size with distance cost* function negatively impacts the performance of our approach when computing optimal solutions, while improving approximate solutions as $\alpha$ grows. This motivates our choice of defining $\alpha=2.2$ in the previous experiments, as it represents a good value to benchmark CFSS. In fact, it is big enough to avoid excessively low runtimes in the optimal version, but it does not exceed the 2.4 boundary, beyond which the quality guarantees it provides are extremely good (i.e., the MPR tends to 1).
CFSS vs C-Link: solution quality comparison {#sec:clink}
-------------------------------------------
We further evaluate the approximate performance of CFSS by comparing it against C-Link [@eps351521], a heuristic approach to solve CSG based on hierarchical clustering. We chose C-Link among the other approaches discussed in Section \[sec:heu\] because it is the most recent one and it has also been tested using the *collective energy purchasing* function by its authors. Here we adopt the same experimental setting discussed in the previous section, i.e., we consider scale-free networks with $n$$\in$$\left\{100,500,1000,1500,2000,2732\right\}$ and $m$$=$$4$ (generating 20 random repetitions of each experiment), and we adopt the *collective energy purchasing* characteristic function. We solve each instance with C-Link (adopting the best heuristic proposed by , i.e., Gain-Link) and then we run CFSS on the same instance with a time budget equal to C-Link’s runtime. Figure \[fig:clink\] shows the average and the standard error of the mean of the ratio between the value of the solution computed by C-Link and the one computed by CFSS. Since we consider solutions with negative values, when this ratio is $>1$ the solution computed by C-Link is better (i.e., has a lower cost) than the one computed by CFSS. Our results show that, even though C-Link can compute better solutions, the quality of our solutions is only $3\%$ worse for $100$ agents. When we consider the entire dataset (i.e., with $2732$ agents) the quality of our solutions is still within $9\%$ of C-Link’s. Notice that C-Link slightly outperforms CFSS. This comes as no surprise since the fundamental difference between C-Link and CFSS is that C-Link does a backtrack-free visit of the search graph adopting a greedy heuristic to determine the choice at each step. In other words, C-Link explores only one path of the search graph. On the other hand, CFSS does not employ any heuristic as it is designed to execute a systematic visit of the search graph with backtracking.
Notice that we can easily include the C-Link’s greedy heuristic into CFSS to guide the visit of the children nodes in the search. With C-Link’s heuristic, CFSS first explores the same path explored by C-Link, and then, if given more time, continues the visit of the rest of the search space by backtracking. Since we provide CFSS with a time budget equal to C-Link’s runtime, if we employ C-Link’s heuristic then CFSS effectively becomes the same algorithm as C-Link, and hence returns solutions of the same quality.
P-CFSS {#sec:pcfss}
------
Here we detail the parallelisation approach of the multi-threaded version of CFSS, analysing the speed-up with respect to its serial version. Following , parallelisation is achieved by having different threads searching different branches of the search tree. The only required synchronisation point is the computation of the current best solution that must be read and updated by every thread. In particular, the distribution of the computational burden among the $t_a$ available threads is done by considering the first $i$ subtrees rooted in every node of the first generation (starting from the left) and assigning each of them to $t_j$ threads ($1 \le j \le i$). The remaining rightmost subtrees are computed by a team of $t_a - \sum_{j=1}^i t_j$ threads using a dynamic schedule.[^9] Parameters $i$ and $t_j$ are arbitrarily set, since it is assumed (and verified by an empirical analysis) that the distribution of the nodes over the search tree does not significantly vary among different instances. More advanced techniques, such as estimating the number of nodes in the search tree as suggested by , will be considered in the future. We run P-CFSS on random instances with 27 agents on scale-free networks with $m=2$, using a machine with 2 Intel Xeon E5-2420 processors. The speed-up measured during these tests has been compared with the maximum theoretical one provided by Amdahl’s Law, considering an estimated non-parallelisable part of 6%, due to memory allocation and thread initialisation.
As can be seen in Figure \[fig:mt\], the actual speed-up follows the theoretical one up to 12 threads, the number of physical cores. After that, hyper-threading still provides some improvement, reaching a final speed-up of 9.44 with all 24 threads active.
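The theoretical ceiling referenced above comes from Amdahl's law with the estimated 6% serial fraction; a one-line sketch makes the numbers easy to reproduce (the asymptotic limit is $1/0.06 \approx 16.7$):

```python
# Amdahl's law: maximum speed-up with t threads when a fraction `serial`
# of the work cannot be parallelised. serial = 0.06 is the estimate used
# for P-CFSS (memory allocation and thread initialisation).

def amdahl(threads, serial=0.06):
    return 1.0 / (serial + (1.0 - serial) / threads)
```

At 12 threads (the number of physical cores) the ceiling is about 7.2, and the measured 9.44 at 24 threads stays below the 24-thread ceiling of about 10.1.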
Conclusions {#sec:conclusions}
===========
In this paper we considered the GCCF problem and proposed a branch and bound solution (the CFSS algorithm) that can be applied to a general class of functions (i.e., $\mplusa$ functions). Our empirical evaluation shows that CFSS outperforms DyCE, the state-of-the-art algorithm, when applied to three characteristic functions. Specifically, CFSS is at least 3 orders of magnitude faster than DyCE in the first scenario, while solving bigger instances for the remaining two. Moreover, the adoption of our edge ordering heuristic provides a further speed-up of 295%. P-CFSS, the parallel version of CFSS, achieves a speed-up of 944% on a 12-core machine, close to the maximum theoretical speed-up. Finally, our algorithm provides approximate solutions with good quality guarantees (i.e., with an MPR of 1.12 in the worst case) for systems of unprecedented scale (i.e., more than 2700 agents). Overall, our work is the first to show how coalition formation techniques can start coping with real-world scenarios, opening the possibility of employing coalition formation on practical applications, rather than purely synthetic, small-scale environments.
Future work will look at applying our approach to other realistic scenarios (e.g., the formation of team of experts connected by a social network [@lappas2009finding]) and focusing on different multi-threading models (e.g., GPUs).
\[sec:ack\] COR (TIN 2012-38876-C02-01), Collectiveware TIN 2015-66863-C2-1-R (MINECO/FEDER), and the Generalitat of Catalunya 2014-SGR-118 funded Cerquides and Rodríguez-Aguilar. This work was also supported by the EPSRC-Funded ORCHID Project EP/I011587/1.
[^1]: \[fn:complexity\]A set of $n$ agents can be partitioned in $\Omega((\frac{n}{\ln(n)})^n)$ ways, i.e. the $n$^th^ Bell number [@berend2010improved].
[^2]: This paper subsumes the work of and the non-archival work of .
[^3]: The DFS strategy allows us to traverse the entire tree with polynomial memory requirements, since at each stage of the search we only need to store the ancestors of the current node.
[^4]: Notice that, since Coalition Structure Generation (CSG) is a particular case of GCCF (i.e., CSG is a GCCF problem with a complete graph), $\left\vert\mathcal{CS}(G)\right\vert$ can be, in the worst case, equivalent to the $n$^th^ Bell number, i.e., $\Omega((\frac{n}{\ln(n)})^n)$ [@berend2010improved], where $n$ is the number of agents. Nonetheless, in the problems we consider $G$ is sparse and, hence, $\mathcal{CS}(G)$ contains a lower number of feasible coalition structures.
[^5]: To traverse the minimum number of edges necessary to partition the graph, we need the *smallest* cut-set. Unfortunately, such a problem (known as the Minimum Bisection problem) is a well known NP-complete problem [@Garey:1990:CIG:574848]. However, our heuristic does not need an optimal solution, since if a suboptimal cut-set (i.e., bigger than the optimal one) is used, our algorithm will still partition the graph in a higher number of steps, resulting in a slightly smaller improvement. Therefore, we adopt an approximate algorithm implemented with the METIS graph partitioning library [@Karypis:1998:FHQ:305219.305248].
[^6]: Unit prices (whose values are reported in Section \[sec:exp\]) are negative numbers, i.e., they belong to the set $\mathbb{R}^-=\{i\in\mathbb{R}\mid i\leq 0\}$, to reflect the direction of payments. Thus, the values of the characteristic function are negative as well, hence they represent costs that, maintaining the maximisation task, we aim to minimise.
[^7]: Such penalising factor makes the edge sum with coordination cost function to violate the IDM property (cf. Section \[sec:stategccf\]), therefore the approach proposed by cannot be used.
[^8]: Our implementation of CFSS is publicly available at <https://github.com/filippobistaffa/CFSS>.
[^9]: Once a thread has completed the computation of one subtree, it starts with one of the remaining ones.
UCL, in building on its founding commitment to “open education for all”, is exploring ways to embed open educational practices across the institution. Our presentation will provide an overview of the Open Education (OE) project: how it fits under the institution’s broader Open Science agenda and underlying ambition to make itself open. We will also discuss the development of a comprehensive support infrastructure which equips the community with information and digital literacy skills relevant for their academic pursuits, and why this is needed to underpin the move to openness.
In addition to our activities to launch a repository for OER, develop an OE policy, and establish an Open Education Working Group (under UCL’s Open Science Platform agenda), a key enterprise is our focus on shifting the culture of teaching and learning within the institution to incorporate more “open” elements and establish sustainable, flexible, and cohesive approaches to such open practices. This includes engaging with relevant departments, creating awareness about OE and the project across campus, and showing that open practices add value to the university and to academics.
We have also recognised the need to develop training skills and information/digital literacy workshops related to UCL’s Open Education needs, and ensure that they remain linked as part of the larger open agenda. Under the direction of the institution’s Connected Curriculum/research-based open pedagogy (UCL Teaching and Learning, 2018), UCL students and staff, from undergraduate level, become involved in the dimensions around sharing, publication, and creating networks – and understand the ways their work can act as exemplars of teaching output and have greater impact. By providing the relevant support models we can nurture an environment where education and research is open and accountable to a larger audience.
By delivering an account of our methods to embed open pedagogical practices we hope to enter into discussion, with audience participants, on what (other) OE models have been devised and implemented to create sustainable frameworks for open teaching and learning practice. In particular, we are keen to explore models which harmonise the different institutional agendas pushing us to openness (open access, open research, open education, open science, etc.). A Twitter hashtag, #OpenModels, will be used to capture the audience’s responses and will support the discourse.
All session materials will be made available under an open licence through the UCL OER repository from the date of the conference and onwards.
UCL Teaching and Learning. (2018). Connected Curriculum: a framework for research-based education. [online] Available at: https://www.ucl.ac.uk/teaching-learning/connected-curriculum-framework-research-based-education [Accessed 30 November 2018].
The HS underwater acoustic modems, with an omnidirectional transducer beam pattern, are high-speed devices for effective transmission in reverberant shallow waters, providing data transfer rates of up to 62.5 kbps at short ranges of up to 300 m.
The beam pattern is optimal for horizontal transfers.
High operating frequency ensures great performance even in noisy environments.
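To give a rough sense of what the quoted figures mean in practice, the sketch below estimates link timings under idealized assumptions. The payload size and the nominal sound speed in seawater are illustrative, not part of the spec, and real throughput will be lower due to protocol overhead.

```python
# Rough timing estimate at the quoted peak rate (ideal link assumed).
RATE_BPS = 62_500          # 62.5 kbps peak, per the spec above
RANGE_M = 300              # quoted short-range limit
SOUND_SPEED_MPS = 1500     # nominal speed of sound in seawater (assumption)

payload_bits = 10_000 * 8  # a hypothetical 10 kB payload

transmit_s = payload_bits / RATE_BPS       # serialization time
propagation_s = RANGE_M / SOUND_SPEED_MPS  # one-way acoustic delay

print(f"transmit:    {transmit_s:.2f} s")    # 1.28 s
print(f"propagation: {propagation_s:.2f} s") # 0.20 s
```

Even at the peak rate, acoustic propagation delay is a noticeable fraction of a short transfer, which is one reason high-speed short-range modems are attractive for AUV/ROV data links.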
The applications include:
- Short-range operations in shallow waters
- High-speed communication tasks
- Data link and positioning for AUVs and ROVs
- Diver tracking
HS models are available in the following product lines:
Background: Major Depressive Disorder (MDD) is among the most prevalent and disabling medical conditions worldwide. Identification of clinical and biological markers ("biomarkers") of treatment response could personalize clinical decisions and lead to better outcomes. This paper describes the aims, design, and methods of a discovery study of biomarkers in antidepressant treatment response, conducted by the Canadian Biomarker Integration Network in Depression (CAN-BIND). The CAN-BIND research program investigates and identifies biomarkers that help to predict outcomes in patients with MDD treated with antidepressant medication. The primary objective of this initial study (known as CAN-BIND-1) is to identify individual and integrated neuroimaging, electrophysiological, molecular, and clinical predictors of response to sequential antidepressant monotherapy and adjunctive therapy in MDD. Methods: CAN-BIND-1 is a multisite initiative involving 6 academic health centres working collaboratively with other universities and research centres. In the 16-week protocol, patients with MDD are treated with a first-line antidepressant (escitalopram 10-20 mg/d) that, if clinically warranted after eight weeks, is augmented with an evidence-based, add-on medication (aripiprazole 2-10 mg/d). Comprehensive datasets are obtained using clinical rating scales; behavioural, dimensional, and functioning/quality of life measures; neurocognitive testing; genomic, genetic, and proteomic profiling from blood samples; combined structural and functional magnetic resonance imaging; and electroencephalography. De-identified data from all sites are aggregated within a secure neuroinformatics platform for data integration, management, storage, and analyses. Statistical analyses will include multivariate and machine-learning techniques to identify predictors, moderators, and mediators of treatment response.
Discussion: From June 2013 to February 2015, a cohort of 134 participants (85 outpatients with MDD and 49 healthy participants) has been evaluated at baseline. The clinical characteristics of this cohort are similar to other studies of MDD. Recruitment at all sites is ongoing to a target sample of 290 participants. CAN-BIND will identify biomarkers of treatment response in MDD through extensive clinical, molecular, and imaging assessments, in order to improve treatment practice and clinical outcomes. It will also create an innovative, robust platform and database for future research. Trial registration: ClinicalTrials.gov identifier NCT01655706. Registered July 27, 2012.
© 2016 Lam et al. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Only the grandeur of the natural world can save our souls.
a disturbed inner world of the human.
what happens to the natural world happens to us.
For me, the moment of “awakening” and a commitment to an ecological worldview came in the early 1990’s when I first began reading authors who critiqued our current worldview and who called for a paradigm shift in human consciousness.
I realized then that I had lived much of my life struggling with the perception and values of what Joanna Macy (1998) calls the “Industrial Growth Society” (p. 116). All my life I had resisted in very personal and private ways Western society’s message of power over, win/lose, isolationism. I often paid the price. Now, however, I see that my convictions and actions were in fact grounded in a vision of the “Life-sustaining Society” which Macy and others now promote.
I also realized I did not need to make the paradigm shift that authors like Macy depicted, for I had never lived the old paradigm. The “new paradigm” was not “new” to me; it was as familiar as my own instincts.
I believe now that those instincts stemmed from my own early spiritual experiences – moments when I actually felt myself connected to the larger Presence touching me in the quiet stillness of my own being. Those experiences led to a deep sense of profound connectedness and mutuality counter to the messages of the dominant culture within which I lived.
The meditation teacher Stephen Levine says it this way: “Grace is a sense of interconnectedness…it is the experience of our underlying nature” (cited in Macy, p. 114).
Somehow, in the evolution of our industrialized society, we have lost the ability to find our salvation in a sense of connectedness to Earth and to our Universe. Instead, many of us seek it in an “afterlife” away from the nightmare of industrialized society, or we find it in distractions and addictions that help us cope with daily life as we now know it.
Ecopsychologists suggest that our destruction of Earth’s ability to sustain life is deeply rooted in our perception of ourselves as isolated from Earth’s natural processes. This perception of our reality has us cut off from our own bodies and the body of Earth. It is rooted in a long history of male-dominated ways of perceiving reality. Descartes’ famous line, “I think, therefore I am,” has defined the human for centuries, and brought us now to the brink of ecological disaster.
What we seek is wholeness and the creation of a new kind of knowing that cultivates rationality, self-confidence, intellect, and power alongside the nurturing, healing, compassionate, intuitive components of personality….Both the psychological and ecological implications of such a change are profound (pp. 298-299).
Gradually, especially as a city dweller for the past 30 years, I have come to agree with the many ecopsychologists who argue that it is the evolution of an urbanized, industrialized society that is at the heart of the social and environmental issues facing Earth’s entire Community of Life today. In the mid-1980’s, I moved to Chicago from Colorado, where I had been living with easy access near my home to the grandeur of the Rocky Mountains, and the soothing sight and sound of pine-forested canyons populated by babbling creeks.
More than twenty-five years later, of course, I have become accustomed, or perhaps deadened, to much of what first seemed so offensive to me about city living on that first day. However, I also have made conscious choices about how and where I will live that help me stay connected to my ecological self.
First of all, I live outside the city, in what I sometimes call “a magic place.” Our house sits on a street bounded by areas of forest preserve through which pass Salt Creek and the Des Plaines River. Apparently decades ago, the local deer population discovered the fruit trees that our landlord planted in the backyard behind the garage, and often, at any time of day, we are treated to views of deer sleeping or munching in the grove there. Mostly they are does, but occasionally a lone buck will appear with his magnificent rack of antlers. Every year, young mothers bring their spotted newborns to introduce another generation to the offerings available in our yard, and we get to watch the little ones run about and play on their long, delicate legs. Lately, a couple of red foxes have joined our backyard menagerie.
I work, though, in the city. Again, however, I have made choices around how I will do that with less stress and still maintaining my connection to nature. Each morning, I walk three blocks through our neighborhood to the commuter train, consciously paying attention to the sky, the air, the flowers in the yards I pass, the sound of bird song, the changing life of the trees. I also intentionally leave about 10 minutes early for the train, so that when I reach the crossing, there is time to sit and reorient my body to the morning sun, and in spring to watch robins eat from the Mulberry tree, or in winter to marvel at the intricacy of the bare branches of the trees.
Once I reach the city, I walk about seven blocks to my office building, stopping along the way to notice the small gardens and planters overflowing with seasonal plants as I pass by them. In my office, I have a “garden” of my own, with several living plants occupying one corner.
In summer, I make it a point at least once a week to walk through Grant Park to the shore of Lake Michigan, to sit and eat my lunch in the sun by the water, or, if there is less time, to eat in a secluded garden area on Michigan Avenue next to Chicago’s Art Institute.
What do I do in the bitter cold of Chicago winters, you might ask? Well, one of my favorite activities in winter, ever since I was young, is to go outside during a snow storm when it is dark, either early in the morning or late in the evening, and shovel the snow from the driveway. The exercise is great, of course, but so is the profound experience of standing amidst falling snow. First, there is the incredible sight of millions of white flakes floating down from out of an indistinguishable sky, or swirling about on the wind. And then, there is the profound hush that falling snow brings, as it whispers its way to the ground, and muffles the sound of cars on the street, or, if deep enough, keeps them completely at bay for a time.
I have come to realize that while the feminist conclusion, “The personal is political,” helped us to see that our personal issues are entangled in public policy and perception, considered in the reverse, it also shows us that changing public policy and perception begins with each one of us, and our consciousness and action. It begins with changing our own perception of reality, from the mechanistic, isolated worldview of the industrialized era, to a view of humans embedded in Earth’s Community of Life.
Such an ecological consciousness will help us to transform even our cities into dwelling places that are psychologically healthy for ourselves and promote the life-sustaining capacities of our marvelous Earth.
Gene Matsook says his Rochester football team will not treat Friday's matchup against OLSH any differently than it does any other game. An even mindset week to week, he believes, is best for success.
Dan Bradley and OLSH, though, are treating the game against Rochester differently. It's readily apparent how good Rochester is and how good the Rams have been over the past few years, so he doesn't believe in underselling the matchup. Instead, he emphasizes it to his team, stressing how important the game is.
But no matter if the teams acknowledge the importance of their upcoming game or not, it's rather clear that the game on Friday at Rochester will be a big one.
Both teams have been dominant thus far, outclassing their opponents on the way to 3-0 records. And both teams have been dominant the past two seasons. They’ve been the two teams firmly affixed at the top of the Class 1A Big Seven Conference, both making it to the WPIAL quarterfinals last season. Yes, it’s just one game over a 10-game regular season, but it could well decide who wins the section.
As such, OLSH’s energy this week has reflected that.
“Practices definitely get a little more intense. There’s a little more hitting and a little more yelling, a lot of running. It’s high intensity,” OLSH receiver and safety Andrew Schnarre said.
“I think we’ve established a bit of rivalry, especially coming off the game that we had last year, a fourth-quarter thriller. It’s definitely a game that we look forward to every year because they’re a good competitor.”
Last year’s game — a 16-13 contest that was decided on an OLSH touchdown in the last two minutes — exemplified just how competitive this game has become. Bradley remembers how his team was able to match Rochester's physicality and make enough plays when it mattered. Matsook laments how his team made mistakes on critical plays.
Outside of the result, Rochester, in many ways, likely would like this season’s contest to play out like last year's. In that game, the Rams were able to control the tempo for most of the time with a ground-and-pound run attack and a stout defense.
That same formula has been effective for Rochester this season, as they’ve outscored teams 110-33 thus far.
“I think we’re getting better each week. They’re continuing to improve on all facets of the game,” Matsook said. “We want to be nine units strong. Right now, when we started the season we were about four or five units strong, so hopefully we’re getting that up there to six or seven.”
That steady improvement was especially clear last week in a 42-6 win over Leechburg, when the Rams ran for 507 yards, including Noah Whiteleather’s 356 yards. By Whiteleather’s estimation, it was a breakthrough for Rochester’s offense, an offense that was still finding its way with the graduation of its top rushers, Caleb Collins and Mahlik Strozier.
“Coming into this year we didn’t know how we were gonna look,” Whiteleather said. “But I’ve seen everybody working hard in practice. We talk a lot this year. We’ve got a lot of leadership. Everybody’s stepping up this year.”
While Rochester had to undergo a bit of transition this year, OLSH’s team this season is quite similar to last year's. Yes, the Chargers had to revamp their offensive line, which lost four starters. That line has progressed a good bit, Schnarre and Bradley said. Outside of that, most of the starters on defense are back and so too are all the important skill players, as well as quarterback Tyler Bradley.
As such, the Chargers have rolled, outscoring opponents 143-12, including a 56-0 win over Northgate last week.
“Our defense is playing really well this year. We’re really getting off the ball and causing trouble in the backfield,” Bradley said. “And offensively we really can’t be upset with where we’re at. We’re putting points up.”
The Chargers will look to dictate the tempo with their offense, just as Rochester hopes to, but with an opposite approach to the Rams’ run-heavy attack.
“I think we’re a pretty fast team. We’re definitely going to throw the ball a good bit on them,” Schnarre said. “We’re just looking to speed it up and looking to wear them down, just as they do to us.”
How each defense is able to handle its opposition’s vastly different offensive attack could be key, or at least that’s what Whiteleather believes.
To stop the Rams, OLSH will need to avoid getting pushed around at the line of scrimmage, Bradley said, while Matsook said the key for his defense is to be disciplined and limit OLSH's skill players. Both teams are confident in their ability to do just that.
We recently published an update highlighting the challenges we’re...
Stories Behind the Stats: Ten Tun Tap House, Alton
** 3rd July - UPDATE! **
As pubs and bars are allowed to re-open from July 4th, we caught up with the Ten Tun Taphouse to check out their plans. Read the first blog first, or skip...
The Stories Behind The Stats: Deadlocked Escape Rooms
So far in this series we've looked in detail at some of the challenges facing businesses who operate closely with us in the hospitality sector, such as pubs, bars and bottle shops, largely showcasing clever...
Czech Republic's number of coronavirus cases rises to more than 10,000
The number of coronavirus cases rose to more than 10,000 in the Czech Republic, Health Ministry data showed on Monday. The country of 10.7 million has 10,024 confirmed cases as of the end of Sunday, with 329 deaths and 7,226 recovered.
The daily rise in case numbers has been in the range of 31-74 over the past two weeks. Czech authorities opened their borders to travel from most European Union countries earlier this month and have raised the limit for public gatherings to 500 people.
The Language Center currently consists of the English Language Center and the Chinese Language Center, which together offer a range of courses and services to meet the needs of the International Campus. The Language Center is located in the Arts and Sciences Building, with staff offices on the 4th floor and the 'Language Clinic' suite on the 1st floor.
English Language Center
As English is the language of instruction on the International Campus, and the major language of academic and professional communication globally, the English Language Center seeks to perform a crucial role in supporting communication and learning across the campus. The Center provides teaching and support to enable students to succeed in their degree study, and in broader English challenges in their life and future career. The ELC takes an English for specific purposes (ESP) approach, teaching both the linguistic and accompanying non-linguistic competencies required for advanced academic and professional English communication.
The ELC’s provision thus includes content on:
writing a range of academic genres (e.g. essays, lab reports, emails, resumes, applications)
speaking in seminars, presentations, and conversations around campus with staff and students
reading and listening to a range of academic genres (e.g. textbooks, lectures, research articles)
workshops for external tests such as TOEFL and IELTS
research communication for graduate theses and published articles
non-linguistic aspects of academic communication (e.g. data visualization, PowerPoint design)
metacognitive strategies around English language learning
intercultural communication and the contemporary culture of English-speaking countries
supporting academic staff on course design and teaching to reduce unnecessary language-related difficulty
supporting effective English-language communication around the campus generally
Accordingly, our work involves a high level of engagement with colleagues and students across the campus. Indeed, we currently provide courses for every academic department on campus, at every level of study, and frequently provide support to professional departments on English-related issues.
The ELC strives for evidence-based practice, incorporating research findings from education, applied linguistics, and other relevant fields to improve its provision. The Center is also enthusiastic about opportunities for exchange and collaboration with external researchers or practitioners.
Robert Holmes
Instructor
robertholmes@intl.zju.edu.cn
Robert Holmes gained his MA (with merit) in English language teaching with applied linguistics from King's College London and his BSc in forensic science with psychology from London South Bank University, and holds the Certificate in English Language Teaching to Adults (CELTA). His Master's dissertation focused on content- and project-based materials development for EAP. Rob has previously taught at Xi'an Jiaotong-Liverpool University, East China University of Political Science and Law, and other institutions in the UK and South Korea. He has a particular interest in English for specific academic purposes course design and teaching.
Chinese Language Center
Introduction
The International Campus of Zhejiang University aims to provide an international teaching and research environment that is integrated with the world. In this rapidly changing global society, no matter what your major is, language is always the tool to explore a culture, to show mutual understanding, and to promote communication. Fostering cross-cultural vision and resolving regional conflicts both require language as a medium of communication. Learning Chinese can not only meet the basic needs of life in China, but also expand your horizons, improve your leadership, and make you a positive participant in the new era of globalization.
The Chinese Language Center in the International Campus of Zhejiang University provides various levels of Chinese language courses in small groups, with various cultural experiences and communication activities in an immersive learning environment. After taking basic Chinese courses, students can live independently in China. After the advanced Chinese course, students can understand Chinese society directly and deeply in Mandarin and even engage in economic and cultural relevant business. In the near future, the center will be launching short-term Chinese language learning programs for individual groups.
Courses
To accommodate different language learners, the Chinese course curriculum is designed around a four-year period with the goal of developing speaking, listening, reading and writing skills. We also put an emphasis on accuracy of speaking and pronunciation, and thus improve fluency and integrated language ability. We provide four levels of Chinese language courses along with elective courses such as Chinese writing, conversation, and professional Chinese courses.
For the first- and second-level Chinese courses, students have classes every day, for six hours a week. The course is divided into two sessions: a big-group lecture class and a small-group drill class. Small-group drill classes have a maximum of 10 people and big-group lecture classes have a maximum of 20 people to guarantee learning quality. For the third- and fourth-level Chinese courses, students can take elective language courses in addition to 4 hours of comprehensive Chinese language coursework per week. Beyond classroom teaching, the Chinese Language Center also provides a series of lectures and cultural experience activities to meet students' needs and create opportunities to use the language in real life.
Dalong CHEN
Studied for his BA in Teaching Chinese as a Second Language at Zhejiang University, with a minor in economics, and his MA in Literature and Art Theory at East China Normal University, focusing on Kant's aesthetics.
Jiayi WU
Wu Jiayi majored in Chinese language and literature at Zhejiang University, studying China's culture, language and literature, and received a Bachelor of Arts degree in 2014. During her postgraduate period, she focused on research in the field of second language acquisition and pedagogy, and obtained a Master's degree in Teaching Chinese to Speakers of Other Languages from Zhejiang University in 2017. In 2015, as a volunteer Chinese teacher at the Confucius Institute of the University of Western Australia, she conducted a one-year Chinese language and culture teaching program at local primary and secondary schools in Perth. After returning home, she coached the A-level Chinese course.
Her teaching expertise lies in adopting multiple teaching methods to improve students' Chinese listening and speaking ability. She has offered courses such as Chinese Listening & Speaking (Ⅰ) and Chinese Listening & Speaking (Ⅱ).
Jinrong ZHOU
Zhou Jinrong attended Xiangtan University, where she attained a B.A. in Teaching Chinese as a Foreign Language, followed by an M.A. in Teaching Chinese to Speakers of Other Languages from East China Normal University. Her scholarly interests focus on teaching Chinese characters to foreign learners, whom she has been teaching since 2007. She holds the Certificate for Teachers of Chinese to Speakers of Other Languages (CTCSOL). Before coming to Zhejiang University, she served as a Mandarin teacher with the Council on International Educational Exchange (CIEE) in Shanghai, the Confucius Institute at Victoria University of Wellington in New Zealand, Gateshead, Tyne and Wear in the UK, and Donghua University. She has taught Chinese Characters Learning and Chinese Writing at the International Campus of Zhejiang University since September 2017.
To qualify for your district championship, you must place (plus ties) in at least one sub-district event as follows:
| Boys | Girls |
| --- | --- |
| 16-18: Score in the top 13 | 16-18: Score in the top 6 |
| 14-15: Score in the top 12 | 14-15: Score in the top 5 |
| 12-13: Score in the top 7 | 12-13: Score in the top 4 |
| 8-11: Score in the top 6 | 8-11: Score in the top 4 |
+++PLUS+++
The qualifying score must meet the WJGA scoring Guidelines:
| | 16-18 | 14-15 | 12-13 | 8-11 |
| --- | --- | --- | --- | --- |
| Boys | 85 | 90 | 100 | 60* |
| Girls | 100 | 105 | 115 | 75* |

*8-11 score based on 9 holes, played from forward tees on a regulation golf course
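The two conditions above (placing high enough in a sub-district event, plus meeting the scoring guideline for the division) can be combined into a simple lookup. This is a hypothetical illustration of the rule, not official WJGA software; in particular, it does not model the "plus ties" provision.

```python
# Top-N placing cutoffs and maximum qualifying scores by division,
# transcribed from the tables above (8-11 scores are over 9 holes).
PLACE_CUTOFF = {
    ("boys", "16-18"): 13, ("girls", "16-18"): 6,
    ("boys", "14-15"): 12, ("girls", "14-15"): 5,
    ("boys", "12-13"): 7,  ("girls", "12-13"): 4,
    ("boys", "8-11"): 6,   ("girls", "8-11"): 4,
}
SCORE_MAX = {
    ("boys", "16-18"): 85,  ("girls", "16-18"): 100,
    ("boys", "14-15"): 90,  ("girls", "14-15"): 105,
    ("boys", "12-13"): 100, ("girls", "12-13"): 115,
    ("boys", "8-11"): 60,   ("girls", "8-11"): 75,
}

def qualifies(gender, age_group, place, score):
    # Both conditions must hold: place AND scoring guideline.
    key = (gender, age_group)
    return place <= PLACE_CUTOFF[key] and score <= SCORE_MAX[key]

print(qualifies("girls", "14-15", place=5, score=103))  # True
print(qualifies("boys", "16-18", place=3, score=88))    # False: misses the 85 guideline
```

The second example shows why the "PLUS" wording matters: a top placing alone is not enough if the score exceeds the guideline.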
Qualifying for the State Championship
Each district is allocated a specific number of spots for each division. Boys' spots are allocated primarily based on the number of participants in sub-district events. Girls' spots are based on the average scores posted in sub-district play statewide. The highest finishers in each division at the District Championship win the spots allocated to the district. Any ties for the last spot in each division will play off. If a boy player from that district cannot play at State, the first alternate will take his place. Starting on the Sunday before State, the vacancy will be filled from the host district (District Three in 2019). In the case of girls, any qualifier who cannot play at State will be replaced from a statewide alternate pool, selected by the Executive Director based on average score.
The top five boys and girls on the preceding year’s points list will be given an exemption from qualifying at their District Championship to play in the 2019 State Championship. To use this exemption, the player must play in at least one WJGA sub-district event during the 2019 WJGA season and the player must notify the Executive Director by July 4, 2019.
Qualifying for the Tournament of Champions
- Current season winner of:
  - WJGA Western Open
  - WJGA Eastern Open
  - Players Open
  - WJGA Sub-District
  - WJGA District Championship
  - WJGA State Championship
- Made cut at WJGA State Championship
- Top-10 finish at Junior World Qualifier
- Top-10 finish at Western Open, Eastern Open or Players Open for Boys & Girls 16-18; Top-7 finish for Boys/Girls 14-15; Top-4 finish for Boys/Girls 12-13
- Top-4 finish at District Championship for Boys; Top-3 finish at District Championship for Girls
- Top-2 finish at any Sub-District event for Boys only; 1st place finish for Girls
- Qualified for the USGA Junior Amateur
- Played on any WJGA team during the current season
- Current year Washington or Idaho High School State Champion
- Top-10 from previous year’s points list
Protein release from poly(epsilon-caprolactone) microspheres prepared by melt encapsulation and solvent evaporation techniques: a comparative study.
Poly(epsilon-caprolactone) (PCL) microspheres containing c. 3% bovine serum albumin (BSA) were prepared by melt encapsulation and solvent evaporation techniques. PCL, because of its low Tm, enabled the melt encapsulation of BSA at 75 degrees C thereby avoiding potentially toxic organic solvents such as dichloromethane (DCM). Unlike the solvent evaporation method, melt encapsulation led to 100% incorporation efficiency which is a key factor in the microencapsulation of water-soluble drugs. Examination of the stability of the encapsulated protein by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) demonstrated that protein integrity was unaffected by both methods of encapsulation. In vitro release of the protein into phosphate buffer examined at 37 degrees C from microspheres prepared by both techniques showed that the release rate from melt-encapsulated microspheres was somewhat slower compared to the release from solvent-evaporated spheres. Both released around 20% of the incorporated protein in 2 weeks amounting to approximately 6.5 micrograms mg-1 of microspheres. Although the diffusivity of macromolecules in PCL is rather low, it is shown that PCL microspheres are capable of delivering sufficient quantity of proteins by diffusion for prolonged periods to function as a carrier for many vaccines. Unlike poly(lactic acid) (PLA) and poly(glycolic acid) (PGA) polymers which generate extreme acid environments during their degradation, the delayed degradation characteristics of PCL do not generate an acid environment during protein release and, therefore, may be advantageous for sustained delivery of proteins and polypeptides.
Ever since Silat Malaysia was acknowledged worldwide, it has been regarded as a Malaysian martial art form. Many of the university's courses are designed with direct input from employers, and each student has the chance of a work placement, with international study and work opportunities also available, all of which will help your prospects after graduating.
Artists have always employed their work to reflect upon their lives and times, and so art objects act as a doorway into the worlds of philosophy, politics, sociology, theology, literature, music, history, classics, science, anthropology and culture at its widest level.
Some recent examples include placements at Harewood House, the Cultural Institute, Urban Outfitters, Tigerprint, Leeds Museums and Galleries, Pyramid of Arts and Hang-Up Gallery. Klee has been associated with various types of art such as Abstract art, Cubism, Expressionism, Surrealism and Futurism, but in most instances his paintings are not straightforward to categorise.
There is also a full programme of visiting speakers from across the constituent subject areas within the School of Arts, which includes Film and Drama. Employers of our most recent graduates include Christie's London, Sotheby's Zurich, the Weiss Gallery, London, Aberdeen Art Gallery and Museums, the Pier Arts Centre, Orkney, and the McManus Gallery, Dundee.
History Of Art & Architecture
Our distinctive and respected programme explores the history of art and design around the globe, from traditional manufacture to emerging modern practices. Art patronage has, until comparatively recently, been neglected in the study of art history. Students can also apply to study abroad for one year or one semester during their degree, taking History of Art-related subjects taught in English.
You’ll choose from a wide range of optional modules, which cover art historical topics from African art to the New York School, as well as museum studies, critical theory and the contemporary art market. The university is part of the Russell Group and has a considerable cohort of staff and students from outside the UK.
Department Of History Of Art And Architecture
Leeds Arts University makes its first appearance in the UK University League Tables in 2019 at 81st place, having gained university status in 2017.
History Of Art, BFA, University Of Illinois
Oil paintings are the products of a certain time and a certain place, and art history of course tries to place these works in their proper setting. In modern times, art history has emerged as a discipline that specializes in teaching people how to evaluate and interpret works of art from their own perspective. Art consisted of carvings and painted pottery until 1500 BC, when what is frequently known as the “Palace Period” emerged and wall painting first appeared in Europe, though only fragments survive today.
Some teaching may also be delivered by postgraduate students who are studying at doctorate level. Wrexham Glyndŵr offers modular programmes and all students can choose to study a second language. Research interests include Renaissance art; Renaissance art theory; Renaissance and baroque prints; the history of collecting and museums; and the historiography of art, particularly the work of Edgar Wind and the Cold War.
Brief History Of Mixed Martial Arts
This article will be my humble attempt at giving a generalized version of the history of the Korean martial art known as Tang Soo Do as I understand it. I will not try to trace the roots of Tang Soo Do all the way back to its beginning. Many of Kingston’s programmes are available through part-time study or sandwich courses, and most of the undergraduate programmes support studying or working in many areas around the world. The following example illustrates how the History of Art degree-specific major may be combined with a second major in the Bachelor of Arts course.
The compulsory module, Frameworks: Histories and Theories of Art, Architecture, Photography, is designed as an introduction to current theoretical and historiographical issues in the study of the discipline. Other examples of Mesolithic portable art include bracelets, painted pebbles and decorative drawings on functional objects, as well as ancient pottery of the Japanese Jomon culture.
Series exploring overlooked visual artists from the 20th century. While martial arts helped the monks lead a more fulfilling life through exercise and meditation, the training also had a more practical concern, since they needed to deal with bandits and warlords and could not depend on local governments for help. Courses may be studied through full-time, part-time and online programmes, with September and January starts. | https://www.massimocapodieci.com/a-brief-history-of-pet-portraits-and-pet-work.html |
Lake Manyara National Park
Located 125 km west of Arusha town, nestling by the wall of the Great Rift Valley, Lake Manyara National Park is one of the oldest and most popular sanctuaries in East Africa. The park has a large variety of habitats, making it possible to support a wealth of wildlife in its small area. The main habitats include the shallow soda lake itself which occupies 77% of the National Park total area of 330 sq. km, the groundwater forest, open grassland, acacia woodland and the rift wall.
The most famous spectacle in the park is the tree-climbing lions, which are occasionally seen along branches of acacia trees. Other animals found in the park include buffalo, elephants, leopards, baboons, impala, giraffes, zebra, wildebeest, ostrich and hippos. Popularly referred to as an ornithologist’s paradise, Lake Manyara National Park contains over 400 bird species found in most savanna and river habitats in East Africa. Common water birds to be seen here are pelicans, spoonbills, Egyptian geese, hammerkops and the migratory flamingos, which arrive in hundreds of thousands creating one of Africa’s great natural sights over the soda lake. | https://ndewedotours.com/lakemanyaranationalpark/ |
Invalidating site reference com
There are a few alternatives to cache invalidation that still deliver updated content to the client.One alternative is to expire the cached content quickly by reducing the time-to-live (TTL) to a very low value.
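As a minimal sketch of that short-TTL approach (the class, field names, and 0.05-second TTL below are illustrative, not taken from any particular caching proxy):

```python
import time

class TTLCache:
    """Tiny cache whose entries expire after a short time-to-live."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, fetch):
        """Return the cached value, or re-fetch from the origin once it has expired."""
        entry = self._store.get(key)
        if entry is not None and time.monotonic() < entry[1]:
            return entry[0]           # still fresh
        value = fetch(key)            # stale or missing: go back to the application
        self.set(key, value)
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("page", "v1")
assert cache.get("page", lambda k: "v2") == "v1"  # fresh: served from cache
time.sleep(0.06)
assert cache.get("page", lambda k: "v2") == "v2"  # expired: re-fetched
```

With a very low TTL, stale content disappears on its own within seconds, at the cost of more origin fetches; explicit invalidation avoids that trade-off when the proxy supports it.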
Cache invalidation is a process in a computer system whereby entries in a cache are replaced or removed. It can be done explicitly, as part of a cache coherence protocol. After the cache is invalidated, if the client requests the data, they are delivered a new version: when the client requests the data again, it is fetched from the application and stored in the caching proxy. There are three specific methods to invalidate a cache, but not all caching proxies support these methods.
A related notion of invalidation applies to C++ containers. For instance, when a sequence container requires an underlying reallocation, outstanding iterators, pointers, and references will be invalidated [Kalev 99]. The C++ Standard allows references and pointers to be invalidated independently for the same operation, which may result in an invalidated reference but not an invalidated pointer. However, relying on this distinction is insecure, because the object pointed to by the pointer may be different than expected even if the pointer is valid. For instance, it is possible to retrieve a pointer to an element from a container, erase that element (invalidating references when destroying the underlying object), then insert a new element at the same location within the container, causing the extant pointer to point to a valid but distinct object. Thus, any operation that invalidates a pointer or a reference should be treated as though it invalidates both pointers and references.
The rules are operation-specific. An erase operation that erases the first element of a deque but not the last element invalidates only the erased elements, while an erase operation that erases neither the first element nor the last element of a deque invalidates the past-the-end iterator and all iterators and references to all the elements of the deque. Other operations invalidate all references, pointers, and iterators referring to the elements of the container and may invalidate the past-the-end iterator.
(Similarly, for timers: you must send the invalidate message from the thread on which the timer was installed.) | https://promo-oxygen.ru/7985invalidating-site-reference-com.html |
Doctor's Notes on Food Poisoning vs. Stomach Bug: Comparison of Differences
Food poisoning is an illness caused by eating or drinking contaminated food or water; the contaminants can be viruses, bacteria, toxins, parasites or chemicals. The stomach flu (gastroenteritis) is a general term used for various inflammatory problems in the digestive tract, some of which share causes with food poisoning. The most common food poisoning symptoms and signs are abdominal cramps, nausea, vomiting, and diarrhea. More severe food poisoning symptoms may include blood in the stool or vomit, dehydration, high fevers, diarrhea that lasts for three days or more, headaches, weakness, blurry vision, bloating, renal problems, numbness, tingling or burning sensations in the extremities, seizures and death. Signs and symptoms of gastroenteritis include low-grade fevers, nausea without vomiting, mild to moderate diarrhea, and crampy abdominal bloating, most of which are less severe than those of food poisoning. However, more serious symptoms can occur, such as blood in vomit or stools, vomiting for more than 48 hours, fever higher than 101 F, a swollen abdomen with increasingly severe abdominal pain, and dehydration. Most cases of food poisoning last from a few hours to a few days, while gastroenteritis lasts from 1 to 2 days up to months.
Viruses are the most frequent cause of food poisoning in the US; the next most frequent cause is bacteria, while other causes include chemicals, parasites and toxins. Viruses cause about 70% of gastroenteritis cases and bacterial infections cause most of the rest.
| https://www.emedicinehealth.com/food_poisoning_vs_stomach_bug/symptom.htm |
We emphasize proficiency in the language, so that students can achieve their desired band scores under the tutelage of experts in a conducive environment. The International English Language Testing System (IELTS) measures the language proficiency of people who want to study or work where English is used as a language of communication. We emphasize improving all spheres of an individual's life. We also provide special batches for Spoken English and Grammar classes. | https://www.coursesuggest.com/professional-courses/eduspa-ielts-institute/ |
Morphological changes in the temporomandibular joint after orthodontic treatment for Angle Class II malocclusion.
The aim of this study was to examine the morphological temporomandibular joint (TMJ) changes that occur after orthodontic treatment in patients with Angle Class II malocclusion. The post-treatment changes in TMJ morphology were analyzed, based on TMJ cephalometric laminographs in 19 patients with Angle Class II malocclusion and labial inclination of the upper incisors after premolar extraction. The condylar pass angle, articular eminence to the Frankfort horizontal plane angle, and total, upper, and lower heights of the articular fossa increased significantly on both sides after treatment and retention. The anteroposterior width of the articular fossa decreased significantly on both sides after treatment and retention. These results suggest that adaptive bone remodeling of the TMJ occurs during the correction of occlusion with labial inclination of the upper incisors by orthodontic treatment after premolar extraction in patients with Angle Class II malocclusion.
| |
That idea that the comic had to elicit the same response from everyone in the room, and that that response didn’t have to be laughter, is what gave him such a unique approach to his comedy. He could be chatting with The View hosts and do everything in his power to irritate all of them, and chalk that up as a win. He could play up his own irreverence in an impersonation of Burt Reynolds, and we’d all accept that it was Burt Reynolds. He could obnoxiously take over a late-night interview he wasn’t involved in, and steal the spotlight.
If the audience found him irritating, he won. If the audience found him controversial, he won. And if the audience found him hilarious, he won — and so did everyone. His willingness to go for it in any direction at all times was the key to his fearlessness.
Macdonald was a master of toying with expectations, of snapping tension or upending the, well, norms of polite society. It didn’t matter where he was, whether it be a late-night couch, Saturday Night Live‘s Studio 8H, or the stage of a comedy club, his diehard dedication to deadpan never wavered. Until the end, his humor was crafted just the way he wanted it, and that alone would be worthy of legend. The fact that he also happened to be hilarious was just the polish on the masterpiece.
As we celebrate the life of the comedian, who left us at the age of 61 following a nine-year battle with cancer, revisit five of his most stunning comedic moments below. | https://www.mnnofa.com/music/norm-macdonalds-5-most-memorable-moments/ |
The 14th death anniversary of the vibrant, young and charming Pop Singer Nazia Hassan is being commemorated today all over South Asia, especially in Pakistan and India.
Brought up in Karachi and London, Nazia Hassan was born on 3rd April 1965 into a Muslim family in Karachi.
Nazia’s song “Aap Jaisa Koi Nai”from the Indian film “Qurbani” made her a renowned singer in Pakistan and South Asia in the 80’s.
Her debut album, “Disco Deewane” became the best selling Asian Pop record until that time.
Nazia’s youthful good looks, melodic voice and pacey beats caused a sensation, and she became the youngest winner of a Filmfare Award.
The youthful teenager was the first Pakistani to win a Filmfare Award, at 15, and still retains the honor of being the youngest winner of a Filmfare Award in the category of Best Female Playback Singer to date.
The loving sister of Zohaib Hassan and Zahra Hassan, she was also awarded the Pride of Performance, a Double Platinum Award and a Gold Discs Award.
Unfortunately, the sub continental princess was not destined to live long and passed away at the age of only 35 on August 13 in the year 2000 after fighting a long battle with the fatal disease of lung cancer. She is buried at Hendon Cemetery London.
Undoubtedly, Nazia will remain ever young and beautiful in our minds. She is really missed because she wasn’t just a star. In fact, she was a part of people’s lives. She was an inspiration, a role model, a person to look up to. She was a magical singer and her songs still retain that magic.
May Her Soul Rest In Peace. | http://www.aaj.tv/2014/08/in-the-loving-memory-of-our-princess-singer-nazia-hassan/ |
We are driven by our craft and motivated to create special spaces for our clients.
That’s why we offer custom cabinets and millwork, and one of the reasons our clients choose us to remodel their kitchens and baths.
From custom cabinetry to crown moldings, Oak Design delivers high quality, old world craftsmanship that stands the test of time. | https://www.oak-design.com/new-page-3 |
Anti-La antibodies.
Published
Journal Article (Review)
Anti-La antibodies usually occur in sera with anti-Ro antibodies and represent important serologic markers of Sjögren's syndrome and neonatal lupus erythematosus. In addition to their diagnostic and prognostic significance, anti-La antibodies have proved valuable reagents for molecularly characterizing its antigenic target, which is a 47 kD ribonucleoprotein located in the nucleus and cytoplasm of mammalian cells. The isotype distribution and fine specificity of the anti-La response as well as its associations with HLA-DR and DQ loci suggest that these autoantibodies arise by a T cell-dependent, antigen-driven mechanism. Further insights into the mechanisms of anti-La production in humans may be gained by studying experimental animal models that develop these antibodies spontaneously or through induction by various immunization protocols. | https://scholars.duke.edu/display/pub727306 |
Leveraging his experience in IT and Operations, David Murtagh leads the teams at MultiPlan that manage the network provider data and lifecycle. His responsibilities span from administering contract data to credentialing providers and maintaining their current demographic information while ensuring its accuracy.
Mr. Murtagh began his MultiPlan career as Director, IT Service Management and Continuous Improvement. In that role, he was responsible for developing a three-year IT roadmap. He then focused primarily on building the organization’s service management function.
Prior to joining MultiPlan, Mr. Murtagh was Operations Manager at Torus Insurance. He was also employed by The Hartford, where he managed process improvement efforts within the company’s Product Development and IT areas.
He earned a Bachelor of Science degree in Business Administration from Boston University, and a Six Sigma Black Belt from Villanova University.
David Murtagh's Session: | https://blockchainhealthcaresummit.com/speakers/speaker?speakerid=1753 |
Jackson, Katrina L.
Advisor(s)
Geller, Kathy Dee
Keywords
Education
;
Universities and colleges--Administration--Women
;
Educational leadership
Date
2015-06
Publisher
Drexel University
Thesis
Ed.D., Educational Leadership and Management -- Drexel University, 2015
Abstract
Women in the position of president at U.S. colleges and universities have defied the odds. Since the 1980s, the number of women presidents of colleges and universities has grown, increasing from 9.5% in 1986 to only 26.0% in 2011. Although the percentage of women presidents of these institutions has increased, women are still underrepresented in the role of president (American Council on Education, 2012). The purpose of this phenomenological study was to explore the reported and lived experiences of women who became college and university presidents, with the goal of creating a better understanding of the leadership skills, career paths and characteristics necessary for the next generation of women aspiring to these roles. What emerged from this study expands the understanding of the journeys women endured to reach the position of president and of their experiences along the way. Characteristics such as integrity, trustworthiness, fairness, a collaborative nature, transparency, and preparedness were shared by the participants. From their childhood experiences and influences to their career paths and the challenges they faced, their journeys help shed light on what the path has been and what the path may be. Women in the position of president want to inspire other women and provided sound advice for future leaders. This includes being a courageous leader, learning and building one's network, and investing in oneself. For those aspiring women who may one day become college and university presidents, the descriptions may provide hope and be used as a beacon to guide their path. The key findings illustrate that women in the position of president share some similar personality characteristics and leadership qualities that support their success.
Recommendations include creating a pipeline to support the movement of women faculty and administrators to the presidency, creating structured mentor opportunities, obtaining access to a broad network of individuals, seeking special project opportunities, and looking the part of a leader.
URI
http://hdl.handle.net/1860/idea:6692
In Collections
Theses, Dissertations, and Projects
| https://idea.library.drexel.edu/islandora/object/idea%3A6692 |
Webinar Opportunity:
OPM will be hosting a couple of webinars to provide additional information about our organization and the kinds of projects our psychologists complete. If you would like to attend a webinar, please send an email to [email protected].
About the Job
Are you a graduate student interested in working with senior industrial/organizational psychologists to deliver employment and organizational assessment services to Federal agencies? You could be part of a team working to develop, implement, and evaluate assessment tools for selection or promotion; develop and administer surveys; analyze data using SPSS, Excel, or R; write technical reports; conduct job analysis/competency modeling or gap analysis and/or program evaluation.
The U.S. Office of Personnel Management (OPM) has multiple Personnel Research Psychologist Student Intern positions that will be filled through the Pathways Student Internship Program. The Pathways Student Internship Program offers paid opportunities to work in agencies and explore Federal careers while still in school. These internships are in OPM’s HR Solutions, Assessment & Evaluation Branch (AEB). AEB provides assessment services on a reimbursable basis to other Federal agencies and offers a fast-paced environment where your duties and responsibilities will involve interacting with other psychologists, HR Specialists, and program managers across offices within and external to AEB.
The duties of the intern position will be performed under the direction of senior Industrial/Organizational psychologists and project managers, and typical duties include:
More information about the Pathways Student Internship Program can be found here: https://www.opm.gov/policy-data-oversight/hiring-information/students-recent-graduates/#url=intern
Job Requirements
To meet the Intern Eligibility requirements, you must be a student accepted for enrollment or enrolled and seeking a degree (diploma, certificate, degree) in a qualifying educational institution, on a full or half-time basis (as defined by the institution in which the student is enrolled).
Key Requirements
Qualifications
Basic Education Requirement: All applicants must have successfully completed a full 4-year course of study in an accredited or pre-accredited academic institution leading to a bachelor's or higher degree with a major or equivalent in psychology to receive further consideration.
Additionally, each grade level has specific experience and/or education qualification requirements for selection. The minimum requirement for the GS-9 grade level, which is the least senior position, is at least two years of progressively higher graduate level education in Industrial/Organizational Psychology or a related field (for example, applied psychology, social psychology, applied social psychology, organizational development) OR one year of specialized experience performing Industrial/Organizational Psychology work (for example, conducting literature searches; performing statistical analyses; identifying job requirements or analyzing job analysis data; administering routine assessments or organizational surveys; maintaining the integrity of data and performing quality assurance controls on data). An equivalent combination of education and experience may also be used to qualify for the GS-9 grade level. Please see the job opportunity announcement for the specific qualification requirements for the GS-9 and GS-11 grade levels.
How to Apply: | http://ptcmw.org/jobs/8129116 |
10 Signs You’re Out of Shape:
Physical activity is any bodily movement produced by skeletal muscles that requires energy expenditure.
According to the World Health Organization, 60 to 85 percent of the population worldwide does not engage in enough physical activity. Physical inactivity is the 4th leading risk factor for global mortality.
Sedentary activities include:
- riding in a car;
- playing cards;
- writing letters;
- talking on the telephone;
- listening to the radio or music;
- using a computer;
- watching television;
- thinking;
- reading;
- sitting.
Recent research has established that a high level of sedentary behavior significantly harms your overall health, particularly your diet, body weight, and physical activity.
A relationship between sedentary behavior and deleterious health consequences was noted as early as the 17th century by Bernardino Ramazzini, an occupational physician.
Moreover, a 12-year study of 17,000 Canadian adults concluded that people who spent most of their time sitting were 50 percent more likely to die during the follow-up than people that sit the least, even after controlling for smoking, age, and sex.
Here Are 10 Signs And Symptoms Of Being Out Of Shape:
#1 Your Are Obese
Obesity is generally characterized as a body mass index (BMI) between 30 and 39.9, while extreme obesity is a body mass index of 40 and above.
It is well known that sedentary behavior and obesity coexist and that both are associated with cardiovascular disease, especially in women.
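The BMI bands mentioned here are simple arithmetic on weight and height (BMI = kg / m²). In this sketch, the obesity cut-offs follow the article; the overweight, normal, and underweight bands are the standard WHO ones, added for completeness:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    # 30-39.9 is obesity and 40+ is extreme obesity, as in the article;
    # the remaining bands are the standard WHO cut-offs.
    if value >= 40:
        return "extreme obesity"
    if value >= 30:
        return "obesity"
    if value >= 25:
        return "overweight"
    if value >= 18.5:
        return "normal"
    return "underweight"

# A 95 kg, 1.70 m person has a BMI of about 32.9, in the obesity band.
print(bmi_category(bmi(95, 1.70)))
```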
#2 You Have A High Resting Heart Rate
Your resting heart rate is the number of times your heart beats a minute. The normal range for resting heart rate is anywhere between 60 and 90 beats per minute. Above 90 is considered high.
The average resting heart rate of an elite 30-year-old male athlete ranges from 49 to 54 beats per minute, while the resting heart rate for women of the same fitness level and age ranges from 54 to 59.
#3 You Have Shortness of Breath After 1 Flight of Stairs
Dyspnea is the medical term for shortness of breath, and it feels like you have an intense tightening in the chest, need more air, or even as though you are suffocating.
If you are out of breath with pretty low-intensity movements and can’t remember the last time you worked out, exercising more frequently will most likely help.
Important note – shortness of breath on exertion is a symptom that your lungs are not getting enough carbon dioxide out or not getting sufficient oxygen in. It can be a warning sign of something serious.
#4 You Have Sleep Problems
Presently, an estimated 40 percent of American adults get less than the recommended amount of sleep.
Prolonged sedentary behavior tends to be linked with an increased risk of sleep disturbance and insomnia in the existing literature.
#5 You Can’t Do 5 Pushups At A Time
A pushup uses your own body weight as resistance, working your upper body and core at the same time.
For most individuals who are out of shape, 5 pushups seem like an impossible goal.
#6 You Have A Cardiovascular Disease
Recent evidence suggests that the negative effects of sedentary behavior on markers of vascular health, and, to a lesser degree, traditional cardiovascular disease risk factors, are likely responsible for the increased cardiovascular disease incidence and mortality linked with sedentary behavior.
According to a 2014 study, the most sedentary people had greater waist circumference, higher body mass index (BMI), and higher systolic blood pressure, with a substantial upward trend in each tertile.
#7 You Have Type 2 Diabetes
Many underlying factors come together to raise the risk for type 2 diabetes mellitus, including environmental factors like physical inactivity, calories intake, nutrition, and unhealthy sleeping patterns.
Actually, it is a well-established fact that a lack of physical exercise puts you at a higher risk of diabetes mellitus and obesity.
According to statistics, 90 percent of type 2 diabetes diagnoses could be prevented if just a few risk factors were eliminated.
#8 You Are Depressed
Too much lying around watching TV or sitting at the computer notably increased the risk of depression, according to a 2014 analysis of studies that were published in the British Journal of Sports Medicine.
Also, a 2015 study done at the Deakin University’s Center for Physical Activity and Nutrition Research concluded that watching TV, sitting at a desk, playing video games, and looking at your phone not only contribute to a lack of physical activity, but they can also substantially raise anxiety as well.
A Harvard Health Publications report established that a 60-minute walk three times a week or a brisk 35-minute walk 5 days a week had a substantial influence on mild to moderate depression symptoms.
READ MORE: Spiritual Meaning Of Appendicitis
#9 You Have Poor Memory
According to a preliminary study by UCLA, sitting too much is strongly associated with changes in a section of the brain that is important for memory.
Moreover, middle school students who are in the best physical shape brought home better report cards and outscored their classmates on standardized tests, according to a study published December 6, 2012, in the Journal of Sports Medicine and Physical Fitness.
READ MORE: Spiritual Causes And Meaning Of Hemorrhoids
#10 You Have Back Problems
Spine surgeons say that sitting for a prolonged time distorts the natural curve of the spine, which means your back muscles have to hold your back in shape.
The good news is that by being proactive, you can make bad episodes less painful and less frequent, as per the American Academy of Orthopaedic Surgeons.
Tip – placing your hands on your lower back and stretching backward will take the pressure off your back and keep the muscles from getting stiff.
READ MORE: Simply Orange vs Tropicana
How Long Does It Take To Get Out Of Shape?
According to the data, 1 month of inactivity will result in about a 20% decrease in your VO2max (the maximum rate of oxygen consumption measured during incremental exercise).
What To Do?
You don’t have to be an Olympic runner or spend all day on the treadmill either.
Moderate-intensity physical activities are sufficient for most people, like – cycling or brisk walking. Moderate-intensity activities are those that get you moving fast enough to burn off 3 to 6 times as much energy per minute as you do when you are sitting.
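That "3 to 6 times resting energy" multiple is the MET scale. As a rough sketch (the kcal/min = MET × 3.5 × weight(kg) / 200 formula is a common rule of thumb, an assumption here rather than something the article states):

```python
def kcal_per_minute(met, weight_kg):
    """Approximate energy cost: kcal/min = MET * 3.5 * weight(kg) / 200."""
    return met * 3.5 * weight_kg / 200

def session_kcal(met, weight_kg, minutes):
    """Total energy for one session of activity."""
    return kcal_per_minute(met, weight_kg) * minutes

# A brisk walk is roughly 4 METs, i.e. about 4x the energy burned while sitting,
# so a 70 kg person on a 35-minute brisk walk burns very roughly:
print(round(session_kcal(4, 70, 35)))  # ~172 kcal
```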
It is recommended to start off at a slow pace, walking for about half an hour at least 3 days a week or every second day. As your endurance level improves, increase to 60 minutes. | https://www.yourhealthremedy.com/health-tips/symptoms-of-being-out-of-shape/ |
Ancestry:
The ancestors of the Equus family first appear in the fossil record roughly 60 million years ago. The Hyracotherium was a forest dweller which resembled a smaller version of today's horse. As grasses spread during the Oligocene and Miocene (5 to 38 million years ago), the animal's descendants moved onto the plains. There, they evolved the physical adaptations seen in today's Equus - from longer limbs to flatter teeth suitable for grazing.
Until about 2 million years ago, almost 20 genera roamed the earth. Today, there are 6 surviving species of Equus including the horse (E. przewalskii) and the Asian wild ass (E. hemionus). The most rare species is the Somali Ass (E. africanus), only a handful of which are believed to survive in Somalia and Ethiopia.
Range:
There are three species of zebra alive today in Africa. The Mountain Zebra (E. zebra) and Grevy's Zebra are rare and restricted in their distribution to arid zones on the continent. The Plains or Burchell's Zebra (E. burchelli) is the most abundant member of the family, and can be found in eastern, central, and southern Africa.
Bachelor herds are usually led by young stallions whose rank depends on age. Bachelors will spend much of their time preparing for their later roles as herd stallions by play fighting and engaging in challenge rituals. By age 5, bachelors are ready to start harems, leaving the bachelor herd in search of a filly.
Activity:
Plains zebras spend most of their day grazing, although they will also engage in dust bathing, drinking, and brief periods of resting. Zebras move little at night, and sleep lying down. At least one member of the herd will stand guard to protect against lions or hyenas. Zebras have a powerful back kick that can easily break the jaw of an attacking predator. If a harem with young foals is approached by a lion or hyena, the females will surround the young while the stallion attacks the predator. | https://www.greenleap.com/area/critters/critter.jsp?title=Zebras |
We’ve been working in the oil and gas sector since we started out and we’ve noticed a common problem facing companies in frontier regions: how to move people safely and securely with limited resources.
The number of vehicles available in relation to the number of people being moved is normally an unfavourable ratio – there simply aren’t enough vehicles to keep up with demand in terms of journeys.
We’re also conscious of the risks and hazards that present themselves in frontier regions. These range from roads that are unsuitable for heavy loads, through to gatherings of people at certain times of day which not only delay journey time but can also have security implications. We want to figure out the best way to quantify and minimise any associated risks.
We’re interested in how we can make better use of GPS, telematics and transport data to increase safety for personnel and vehicles. The availability and affordability of ‘On Board Devices’ (OBDs), communications networks, transport network data and transport apps will have an increasing role to play in improving road safety in frontier regions.
Behind the scenes, we’re exploring how technology can help to optimise resources, for example by making sure managers have visibility of journey requests, minimising low-priority moves, or by doubling up payloads to make better use of available space. This helps improve the overall efficiency of existing fleets.
‘Routing engines’, familiar to all satnav users, are traditionally used to calculate the most efficient route along a set of specified waypoints. They can also help to estimate the total distance travelled on different types of roads for example urban, highway or off-road, which is a useful metric in the context of vehicle maintenance. Part of the challenge here is that the GPS data from vehicles we’ve worked with tends to be relatively sparse (sampled once every five minutes for example).
Other practical information on routing can be found in OpenStreetMap (OSM). OSM has an increasing amount of data relating to road safety including highway classifications, local speed limits and even highway lighting conditions.
We’ve been looking at open-source routing engines (such as pgRouting and OSRM) to help make recommendations based on other information, like journey time, fuel consumption, or the ability to avoid a hazard such as a narrow bridge or a known hotspot of some kind.
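Under the hood, routing engines of this kind reduce to shortest-path search over a weighted road graph. The sketch below is a self-contained illustration of that idea using Dijkstra's algorithm, with hazard-prone segments penalised so routes avoid them when a reasonable detour exists. The toy network, the `hazard_penalty` factor, and the function name are our own illustrative assumptions, not part of pgRouting or OSRM.

```python
import heapq

def safest_route(graph, start, goal, hazard_penalty=5.0):
    """Dijkstra over a road graph whose edges carry (distance_km, is_hazard).

    Hazardous segments (narrow bridges, known hotspots) have their
    effective cost multiplied by `hazard_penalty`, steering routes away
    from them whenever a reasonable detour exists.
    """
    # Heap entries: (cost so far, current node, path taken)
    queue = [(0.0, start, [start])]
    best = {start: 0.0}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        for nxt, dist, is_hazard in graph.get(node, []):
            step = dist * (hazard_penalty if is_hazard else 1.0)
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt, path + [nxt]))
    return float("inf"), []

# Toy network: camp -> site via a short narrow bridge or a longer detour.
roads = {
    "camp":     [("bridge", 2.0, True), ("junction", 4.0, False)],
    "bridge":   [("site", 1.0, False)],
    "junction": [("site", 2.0, False)],
}
```

In practice a real deployment would query pgRouting or OSRM against OSM data rather than a hand-built dictionary, but the penalty-weighting idea carries over directly by adjusting edge costs.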
We’re inspired by the common practice in the space industry of carrying ‘secondary payloads’ exemplified by recent work by companies like SpaceX, whose secondary payload manifest allows others to ‘hitch a ride’ on an existing mission if there’s room. We can do this by databasing routes and cargo to make sure movements are not duplicated.
Another company that has done interesting work on this front is UPS, whose famous ‘no-left turn’ solution for optimizing circular route planning has significantly sped up delivery times, cut fuel consumption and reduced costs.
We’re in the research and development phase at the moment and are interested in hearing from people with experience of routing algorithms, or experience in logistics management.
Drop us an email if you have any thoughts or questions. | https://www.inquiron.com/using-sensors-and-data-to-improve-journey-management/ |
A periodic function is a function that repeats its values on regular intervals or “periods.” Think of it like a heartbeat or the underlying rhythm in a song: It repeats the same activity on a steady beat. The graph of a periodic function looks like a single pattern is being repeated over and over again.
TL;DR (Too Long; Didn't Read)
A periodic function repeats its values on regular intervals or “periods.”
Types of Periodic Functions
The most famous periodic functions are trigonometric functions: sine, cosine, tangent, cotangent, secant, cosecant, etc. Other examples of periodic functions in nature include light waves, sound waves and phases of the moon. Each of these, when graphed on the coordinate plane, makes a repeating pattern on the same interval, making it easy to predict.
The period of a periodic function is the interval between two “matching” points on the graph. In other words, it’s the distance along the x-axis that the function has to travel before it starts to repeat its pattern. The basic sine and cosine functions have a period of 2π, while tangent has a period of π.
Another way to understand period and repetition for trig functions is to think about them in terms of the unit circle. On the unit circle, values go around and around the circle when they increase in size. That repetitive motion is the same idea that’s reflected in the steady pattern of a periodic function. And for sine and cosine, you have to make a full path around the circle (2π) before the values start to repeat.
Equation for a Periodic Function
A periodic function can also be defined as an equation with this form:
f(x + nP) = f(x)
Where P is the period (a nonzero constant) and n is a positive integer.
For example, you can write the sine function in this way:
sin(x + 2π) = sin(x)
n=1 in this case, and the period, P, for a sine function is 2π.
Test it by trying out a couple of values for x, or look at the graph: Pick any x-value, then move 2π in either direction along the x-axis; the y-value should stay the same.
Now try it when n=2:
sin(x + 2(2π)) = sin(x)
sin(x + 4π) = sin(x).
Calculate for different values of x: x = 0, x = π, x = π/2, or check it on the graph.
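These checks can also be run numerically. The short script below (an illustrative sketch, not from the article) evaluates the identity at a few sample points within floating-point tolerance:

```python
import math

def is_periodic(f, period, n, xs, tol=1e-9):
    """Numerically spot-check f(x + n*period) == f(x) at sample points xs."""
    return all(abs(f(x + n * period) - f(x)) < tol for x in xs)

xs = [0.0, 0.3, 1.0, math.pi / 2, 2.5]

print(is_periodic(math.sin, 2 * math.pi, 1, xs))  # True:  sin(x + 2*pi) == sin(x)
print(is_periodic(math.sin, 2 * math.pi, 2, xs))  # True:  sin(x + 4*pi) == sin(x)
print(is_periodic(math.sin, math.pi, 1, xs))      # False: pi is not a period of sine
```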
The cotangent function follows the same rules, but its period is π radians instead of 2π radians, so its graph and its equation look like this:
cot(x + nπ) = cot(x)
Notice that tangent and cotangent functions are periodic, but they are not continuous: There are "breaks" in their graphs. | https://sciencing.com/what-is-a-periodic-function-13712268.html |
Exceptional Educator with an effective ability to teach students of all ages. Outstanding competency in organizational management, with the ability to train staff, manage and develop high-performance teams, set and achieve strategic training objectives, and manage a budget. Talent for creating a stimulating and challenging learning environment. Unwavering commitment to quality programs and data-driven program evaluation. Extensive background in instructional and program development. Excellent interpersonal skills and a role model to others. Dedicated Business Instructor with a strong work ethic and a commitment to excellence in teaching. Skillfully manages lectures and promotes open classroom discussions.
EDUCATION
Master of Science Business and Marketing Education University of North Carolina
Greensboro, NC
Bachelor of Science
Business Information Systems
Winston-Salem State University
Winston Salem, NC
AREAS OF EXPERTISE
• Charismatic
• Best practices in on-line instruction
• E-learning programs
• Clear communicator of complex ideas
• Microsoft Suites
• Detailed and concise course planning
• Creative instruction style
• Organized and detailed
• Personable and approachable
• Adult learning specialist
• PowerPoint
Professional Experience
01/2007 to 01/2013 Liberty Healthcare Inc. Long Beach, CA IT Procurement Manager
• Maintained positive and professional relationship with vendors, clients, and internal team members.
• Spoke effectively to large and small groups of people, presenting training and facilitation, traveling as needed.
• Used Microsoft Word, Excel and PowerPoint to prepare and maintain records, correspondence and reports.
• Communicate effectively, both orally and in writing.
• Established and implemented competency-based workforce through educational modules and resources.
• Prepared and gathered current instructional materials and equipment.
01/2013 to 01/2015 Basin Youth Build Academy Carson, CA IT Instructor
• Participated in campus and community activities.
• Provided instruction and facilitation of IT courses for adults of all ages.
• Collaborated with staff, faculty, and students to provide a warm, friendly, and nurturing atmosphere.
• Developed, reviewed and updated student education plan, working with 60 students.
• Provided and executed individual and group instruction.
• Initiated, developed and implemented student assessments.
• Conferred with staff members to plan and schedule lessons promoting learning, following approved curricula.
• Served as a mentor to adult students and faculty members.
2004 to 01/2010 Compton Unified School District Compton, CA Program Coordinator
• Met with other professionals to discuss individual student's needs and progress.
• Planned and conducted activities for a balanced program of instruction, demonstration, and work time that provided students with opportunities to observe, question, and investigate.
• Collaborated with the staff to provide meaningful, student-centered learning experience.
01/2000 to 01/2004 Forsyth Technical Community College Winston-Salem, NC Instructor
• Employed appropriate teaching and learning strategies to communicate subject matter to students.
• Researched, identified, evaluated and implemented current industry standards and demands.
• Contribute to selection and development of instructional materials in accordance with course objectives.
• Incorporate core competencies into curriculum; develop, update, and post course syllabi in a timely manner.
• Planned and organized instruction in ways that maximized documented student learning.
• Provided Academic and Career advising Guided and counseled students with adjustment or academic problems.
• Demonstrated a continued commitment to undergraduate teaching through full participation in the college community.
• Fostered students' commitment to lifelong learning by connecting course materials to broader themes and current events.
• Taught introductory and upper level courses in Business.
Other Teaching Experience: | https://www.postjobfree.com/resume/adcu3z/instructional-teacher-los-angeles-ca |
For Graduate Students
Welcome! UC Libraries staff are here to help you with your coursework, research and teaching needs. If you don’t find what you need please ask us either through Chat, using the online reference form, or emailing one of the subject librarians.
Research and Learning Support
Library Resources for Graduate Students
Points of Contact
Searching Assistance (video tutorials)
Online Teaching Assistant Support
All you need from UC Libraries to integrate librarians and library resources into your Canvas/Blackboard instance, support online learning and navigate copyright – plus tips and tools to share for citation and plagiarism.
Additional Assistance for Graduate Students during COVID-19
- COVID-19 Graduate Student Government Guide – Resources to Cope
- Bearcat Emergency Fund
- UC UHS Student Support Fund
- Options for home internet access by local providers or https://www.everyoneon.org/
- NEXT Apprenticeship Program - provides any student who does not have appropriate technology to complete their course work the ability to check out laptop computers. Please contact Aaron Burdette with any questions. | https://libraries.uc.edu/online/for-graduate-students.html
During the War of 1812, the British attacked the heart of our nation, Washington, D.C. They were frustrated by the burning of their government buildings in York, Canada, which is present-day Toronto, so they thought burning down the capital was the best revenge. During the three days they were in the city, they set fire to most of it, causing the majority of the small population to flee. When the citizens started to return, they found their homes and lives destroyed. The destructive actions of the British were brutal and set us up for a long road to recovering the nation's capital.
On November 27, 1095, in Clermont, France, Pope Urban II called for a Crusade to help the Byzantines and free the city of Jerusalem. The official start date was set as August 15, 1096. Little did he know, this order would spark a conflict that grew into nine wars lasting nearly 200 years. This event in history clearly has an outcome that is far more negative than positive. Have you ever imagined being in the middle of a 200-year war, people dropping like flies, just because of an argument over one city?
You will read about Lucius Junius Brutus (Tarquin the Proud), his Etruscan culture, the government of the Roman Republic, and Rome's everyday life and inventions. Lucius was a powerful emperor.
Fifteen years ago, on 9/11, our world faced a tragic event we would not soon forget. On 9/11, the World Trade Center towers located in the great state of New York were brought down by Osama bin Laden and his organization, al-Qaeda. This moment in history played a big part in shaping our country today. The attack on 9/11 shaped our country not only through the destruction of the buildings but also through the people we lost in the event. The attack
He set Rome ablaze and used the Christians as scapegoats. He accused them of arson and persecuted many by burning them alive or allowing dogs to tear them to pieces (Lunn-Rockliffe). Emperor Diocletian (284-305) was also notorious for the persecution of Christians. A fire broke out in his palace, which caused him great anger. As the Romans had done to the Christians when Nero ruled, they blamed them for the fire.
The uproar was brought about by the acquittal of policemen who wrongfully beat an African American man after he was pulled over for speeding. The New York Draft Riots were among America's most devastating riots. They began as a mild protest against the national draft but took a turn for the worse as they became more of a racial battle. In the book The Gangs of New York, Asbury gives an exceptionally in-depth depiction of the New York Draft Riots. According to Asbury, the fighting raged through the streets of New York City from Monday to Saturday; it had begun as a protest against the Conscription
In his lifetime, Antoninus fully justified the honorific surname of Pius, bestowed on him by the Senate; his death, unlike that of most other emperors, was appropriately calm and dignified. The Temple of Faustina was built by Emperor Antoninus Pius in AD 141 in honour of his late wife Faustina. Emperor Antoninus deified his wife and had this temple built in order to honor her. When he died, the Roman Senate dedicated the temple to both of them, and it became known as the Temple of Antoninus and Faustina. Historians consider the Temple of Divus Antoninus Pius and Diva Faustina to be one of Antoninus Pius's greatest creations. It is the best-preserved building in the Roman Forum.
Washington had a lot on his plate saving our butts anyways. He was a veteran, a president, and, not to mention, a tall and wealthy businessman who just wanted to help his country. He was great and, in my opinion, the greatest of the greats. He was the best leader and no doubt deserves to be on our currency, and a lot more, for what he has done. He never told a lie, and I will never tell a lie in saying that he is number one in my
The life of George Washington. His life was very interesting; he did many great things. He was the first president of the United States. One reason why I chose him is because he was a really smart and intelligent man. He helped a lot of people. George had smallpox before.
This created rage, and the Visigoths began to fight back against the Empire. The Visigoth king Alaric led an attack on "the eternal city", Rome, and ended up ransacking the city of Rome in 410 CE (Andrews and Damen). On the other side of the Roman Empire, Vandals disguised as pirates continued to disrupt trade within and outside of Roman borders. "Vandals' attacks involved prolonged, physical ruin," says Damen, "a destruction so complete and indiscriminate, so emblematic of wanton atrocity." That aided the movement of the Visigoths and other Germanic tribes, which later ended the imperial reign over Rome. The Germanic leader Odoacer led a revolt and deposed the last Roman emperor, Romulus Augustulus, in 476 CE, the last emperor ever to rule the Roman Empire.
The text consistently emphasizes certain traits of the emperor, in particular fairness, mercy, and a deep respect for traditional values. The text makes a concerted effort to show Augustus as a restorer and upholder of the traditional republic, rather than a reformer or destroyer. His deep respect for the Senate is especially emphasized. The names of the Consuls, who were his "colleagues", are included in almost all of the main events of Augustus's life, and the emperor explicitly states that although he had more "influence" than any other Roman, he had no "greater power" than each of his colleagues in each magistracy. By portraying himself as a loyal servant of the Senate and people, Augustus hides the true extent of his power. | https://www.ipl.org/essay/Nero-And-The-Pax-Roman-Empire-P3N7YGBEN8VT
What is the Cost of Living at Cleveland State University?
Did you know there are added costs in addition to tuition to go to college? Investigate just how much extra you'll pay for living expenses at Cleveland State University.
Room and Board Costs & Expenses
Housing and meal plans are priced out separately at Cleveland State University. The typical student spent $8,790 for housing and $5,184 for dining in 2019 - 2020.
The table below will show you the anticipated costs of on-campus and off-campus housing and meal plans for Cleveland State University.
| Expense | On Campus | Off Campus |
| --- | --- | --- |
| Room and Board | $12,882 | $12,882 |
| -- Housing | $8,790 | |
| -- Meals | $5,184 | |
| Other Living Expenses | $3,470 | $3,470 |
| Books and Supplies | $800 | $800 |
| Total | $17,152 | $17,152 |
Planning on Spending 4 Years On Campus? Projected Cost is $70,846
Housing and meal plans at Cleveland State University have increased by an average of 3.1% per year over the past five years. Based on this trend, current incoming freshmen should expect to spend roughly $17,469 for room, board and other costs. Those students would wind up paying $17,630 in their 2nd year and $17,792 in their 4th year.
Students obtaining their bachelor's degree will pay a total of $70,846 in living costs by the time they complete their degree, while students obtaining their associate degree will pay a total of $35,099.
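In spirit, the projection works by compounding the annual cost at the average growth rate. The sketch below illustrates the arithmetic; the exact figures in the article come from its own year-by-year trend data, so a flat 3.1% rate (and the choice of not growing the first year's cost) is a simplifying assumption here, and the function name is our own.

```python
def project_costs(first_year_cost, annual_growth, years):
    """Per-year living costs, compounding a constant growth rate.

    Whether growth applies already in year one is a modeling choice;
    here year one is charged at the current cost.
    """
    costs = []
    cost = float(first_year_cost)
    for _ in range(years):
        costs.append(round(cost, 2))
        cost *= 1 + annual_growth
    return costs

yearly = project_costs(17_152, 0.031, 4)   # four years at ~3.1% per year
four_year_total = sum(yearly)
```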
More on Your Campus Living Situation and Meal Plan
| Are Freshmen Required to Live On Campus? | No |
| On-Campus Housing Capacity | 1,011 |
| Max Number of Meal-Plan Meals Per Week | N/A |
Above Average Campus Costs
Students at Cleveland State University spend $17,152 to stay on campus, while the average student pays just $14,745. | https://www.collegefactual.com/colleges/cleveland-state-university/paying-for-college/room-and-board/ |
Pressure Inside Protons Is 10 Times That In Center Of Neutron Stars
Neutron stars are the densest objects known in the universe, and the almost-impossible-to-imagine pressure at their centers would certainly be enough to crush or tear apart anything unfortunate enough to find itself there. But it turns out that the pressure inside the relatively humble proton is a whole order of magnitude greater than that at the center of a neutron star.
Nuclear physicists from the Department of Energy’s Thomas Jefferson National Accelerator Facility (Jefferson Lab) accomplished, for the first time, the feat of measuring the pressure distribution inside a proton, one of the three subatomic particles (along with the electron and the neutron) that make up all atoms. They found that quarks, the elementary particles, three of which come together to make up a proton, experience pressure near the center of the proton of about 100 decillion (10 followed by 34 zeros, or one zero short of a trillion trillion trillion) pascals. That is about 10 times greater than the pressure at the heart of neutron stars.
“We found an extremely high outward-directed pressure from the center of the proton, and a much lower and more extended inward-directed pressure near the proton’s periphery,” Volker Burkert, Jefferson Lab Hall B leader and a coauthor on a paper describing the measurements, explained in a statement Wednesday.
The quarks that make up a proton are held together by the strong force — one of the four fundamental forces in physics — which is carried on another elementary particle called the gluon. The strong force inside protons defines the pressure distribution.
“Our results also shed light on the distribution of the strong force inside the proton. We are providing a way of visualizing the magnitude and distribution of the strong force inside the proton. This opens up an entirely new direction in nuclear and particle physics that can be explored in the future,” Burkert said in the statement.
Making this measurement required putting together two separate theoretical frameworks. The process involved scattering electrons off quarks inside protons, which then emitted high-energy photons. These photons were detected along with the scattered electrons and the recoiling protons, and observing them allowed researchers to measure the pressure distribution inside the protons, a measurement the Jefferson Lab statement said was “once thought impossible to obtain.”
“This is the beauty of it. You have this map that you think you will never get. But here we are, filling it in with this electromagnetic probe,” Latifa Elouadrhiri, a Jefferson Lab staff scientist and coauthor on the paper, said.
Next, the researchers hope to apply the technique to reduce the uncertainties in their analysis, and also to discover other mechanical properties of the proton, such as its internal shear force and its mechanical radius.
“This work opens up a new area of research on the fundamental gravitational properties of protons, neutrons and nuclei, which can provide access to their physical radii, the internal shear forces acting on the quarks and their pressure distributions,” the paper’s abstract says. | https://www.ibtimes.com/pressure-inside-protons-10-times-center-neutron-stars-2681700 |
Background {#Sec1}
==========
The genealogical relationship between sequences in a population is an important issue in recent analyses of the dynamics of sequence evolution at the population level. The genealogical relationship among a number of sampled sequences drawn from a particular generation of a large haploid population can be modeled using Kingman's coalescent process \[[@CR1], [@CR2]\]. This method has been successfully applied to haploid-type data, for example in bacterial simulation, the estimation of population-genetics parameters, and the inference of demographic events. However, the coalescent process involves no recombination, and recombination cannot be ignored when studying diploid populations. For example, the histories of different loci in a genomic region may differ due to recombination events.
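As a concrete illustration of Kingman's coalescent (our own sketch, not code from the paper), the waiting time while k ancestral lineages remain is exponential with rate k(k-1)/2 in units of 2N generations, so a sample's genealogy depth can be simulated by drawing these times until a single common ancestor remains:

```python
import random

def coalescent_times(n, rng=random):
    """Waiting times between coalescent events for a sample of n lineages.

    With k lineages present, the time to the next coalescence is
    exponential with rate k*(k-1)/2 (time in units of 2N generations);
    two lineages then merge, leaving k-1.
    """
    times = []
    for k in range(n, 1, -1):
        times.append(rng.expovariate(k * (k - 1) / 2))
    return times  # sum(times) is the sample's TMRCA

# Monte Carlo check of E[TMRCA] = 2 * (1 - 1/n); for n = 5 this is 1.6.
rng = random.Random(42)
reps = 20000
mean_tmrca = sum(sum(coalescent_times(5, rng)) for _ in range(reps)) / reps
```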
The first model of coalescence with recombination was described by Hudson \[[@CR3]\]. This was shortly after Kingman's coalescent process was formulated. Due to the increased complexity added by recombination, a graph rather than a single tree is needed to describe the genealogical relationship. This graph, called an ancestral recombination graph (ARG), is made up of many local coalescent trees \[[@CR4]\]. An ARG can be considered a random graph. Each branch in the ARG represents a lineage that carries some ancestral material to the sample. Here, the term \"ancestral material\" refers to chromosomal regions that are eventually inherited by any of the samples of interest drawn from the present-day population. The node in ARG at which two branches converge denotes a coalescent event, and the node at which one branch splits into two denotes a recombination event.
An algorithm that can rapidly generate independent ARGs from populations evolving with both coalescence and recombination can be of great use. First, it can facilitate data analysis. Samples produced using various models can be combined with data to test hypotheses. Second, it can be used to estimate the recombination rate. The question of whether recombination events are clustered in hotspots is of enormous interest at present. It also has considerable relevance to the efficient design of association studies \[[@CR5]\].
There are two main representative algorithms that can simulate ARG according to a given recombination rate. One is Hudson's *ms* \[[@CR6]\]. It is the simplest and is used in many applications. The other is Wiuf and Hein's spatial algorithm \[[@CR7]\]. These two algorithms stress different aspects of the process. The algorithm of *ms* has a Markovian structure and is computationally straightforward. Ancestral lineages related to the sampled chromosomes remain unchanged until coalescence or recombination. In contrast, Wiuf and Hein's spatial approach of simulating genealogies along a sequence has a complex, non-Markovian structure. The distribution of the next local tree depends on all previous local trees rather than on the current genealogy alone. It begins with a coalescent tree at the left end of the sequence and adds more different local trees gradually along the sequence, which form part of the ARG. The algorithm terminates at the right end of the sequence when the full ARG is determined.
To compare existing algorithms, the recombination events in history were classified into five types \[[@CR8]\]:

- Type 1: recombination with the breakpoint located in ancestral material.
- Type 2: recombination with the breakpoint located in non-ancestral material, with ancestral material on both sides.
- Type 3: recombination with the breakpoint located in non-ancestral material, with ancestral material on only the left side.
- Type 4: recombination with the breakpoint located in non-ancestral material, with ancestral material on only the right side.
- Type 5: recombination in an individual carrying no ancestral material.

These five types of recombination are shown in Figure [1](#Fig1){ref-type="fig"}.

Figure 1 **Five types of recombination in the history of a population.**
Because only type 1 and type 2 contribute to the gene structure of the sample, an ARG should in principle contain only these two types of recombination, and branches containing other types of recombination are regarded as redundant in a simulated ARG. It seems that *ms* is the briefest way to simulate the ARG according to its distribution because it considers only type 1 and type 2 recombination. Wiuf and Hein's method also simulates the other three types of recombination, which may produce some redundant branches and increase the computational burden of generating the ARG. When simulating hundreds of thousands of ARGs with a large recombination rate is required (e.g., to estimate the recombination rate of a long DNA sequence based on full likelihood), even *ms* is not efficient enough, nor is it easy to approximate. Although the original spatial algorithm of Wiuf and Hein's method produces a lot of redundant branches, several approximate spatial algorithms have been developed to reduce the redundant branches in the ARG and simulate large samples of long sequences \[[@CR8]--[@CR10]\].
Likelihood-based inference is one statistical method that is commonly used to apply the corresponding algorithm to estimations of the recombination rate. The likelihood can be estimated by simulating ARGs from the coalescent distribution given the recombination rate *r* and mutation rate *θ*. The simulated data can be examined to see if they match the observed data. By repeating this process many times with different values of *θ* and *r*, maximum likelihood estimations of the statistics can be obtained \[[@CR9]\]. However, because the vast majority of ARGs is not consistent with the sample and contributes nothing to the likelihood, this naïve method is infeasible. With a complete history, it is easy to calculate both the probability of the data and of the history given the coalescent model and associated parameters. The central difficulty is that, from an essentially infinite set of histories that could give rise to the data, it is hard to find histories that are highly probable under the assumed model. There are two approaches that have been developed to handle this difficulty \[[@CR11]\].
The first approach involves sophisticated Monte Carlo methods such as importance sampling \[[@CR12]\] and Markov Chain Monte Carlo \[[@CR5], [@CR13], [@CR14]\]. MCMC starts from an initial guess and then tends to make subsequent modifications that are more likely to be accepted, with a probability that is proportional to how likely they occur under the assumed model. Importance sampling approximates the optimal proposal density to calculate the likelihood. Both methods create bias towards the simulation of ARGs, which makes significant contributions to the likelihood.
As a supplementary method, the second approach of estimating recombination rate is to simplify or approximate the coalescent model itself. Based on Wiuf and Hein's spatial point of view, McVean and Cardin developed sequentially Markov coalescent (SMC), an approximation of the standard coalescent process \[[@CR9]\]. This algorithm reduces the topology simulated to a tree rather than a graph. If two ancestral lineages have no interval in common where they share ancestral material, they are not allowed to coalesce. By restricting coalescent events in this way, the resulting process has a Markovian structure in the sequential generation of genealogies along a chromosome. The SMC starts with a coalescent tree at the left-hand end of the sequence and progressively modifies the tree with recombination events as it moves to the right. Marjoram and Wall modified the approximation \[[@CR8]\]. In their system, the old lineage is not removed until after the point of coalescence of the new one has been determined. This allows for the possibility that the new lineage coalesces with the one that was to be erased and that no change occurs in the genealogy. Chen, Marjoram, and Wall described an intermediate approach (MaCS), which is a compromise between the accuracy of the standard coalescence and the speed of the SMC \[[@CR10]\]. In the SMC, coalescent events are restricted to edges within the last local tree only. While in MaCS, coalescent events are restricted to edges among any of the last *k* (denoted as the tree retention parameter) local trees. It models the relationships between recombination events that are physically close to each other and treats those that are far apart as independent.
The essence of these approximate methods is to simplify the recombination of type 2 events and the coalescence of lineages that contain distant ancestral material. These methods offer significant improvements with respect to computational efficiency and sequence length. However, the effects of these simplifications have not yet been clearly characterized.
This paper reports the establishment of a new method of modeling coalescence with recombination. It offers several improvements over Wiuf and Hein's method. A new algorithm based on the new model is proposed to generate ARGs equivalent to those of *ms*. Similar to the algorithm designed by Wiuf and Hein, the present algorithm constructs the ARG spatially along the sequence \[[@CR7]\]. However, it will not produce any redundant branches, which are inevitable in Wiuf and Hein's algorithm. It is here suggested that the above approximate methods (SMC \[[@CR9]\], SMC′ \[[@CR8]\], MaCS \[[@CR10]\]) be viewed as special cases of our new algorithm. Using simulated analysis, the present algorithm was compared to MaCS. The time to the most recent common ancestor (TMRCA) in the local trees of ARGs generated by the present algorithm was even closer to that produced by *ms* than the TMRCA produced by MaCS was. The present method can generate sample-consistent ARGs, which might significantly reduce the computational burden.
Results {#Sec2}
=======
Model assumptions {#Sec3}
-----------------
This present work was performed with the same assumptions as those made by Griffiths and Marjoram \[[@CR15]\]:

1. A gene, here treated as a length of DNA, is represented by the unit interval \[0,1).
2. The population is assumed to evolve through discrete generations in a Wright-Fisher manner, which means that each generation is of 2 *N* genes in size. As usual, time is measured in units of 2 *N* generations. *N* → ∞ and 4*Nr* → *ρ* remains fixed, where *r* is the regional recombination rate per generation per gene, and *ρ* is the global population recombination rate.
3. The present algorithm was designed under the infinite-sites model, in which mutation is independent of the coalescent with recombination, so that mutations occur according to a Poisson distribution on the ARG.
4. In the present algorithm, a gene copies the genetic information of its parent if recombination does not occur. If recombination does occur, only one breakpoint is assumed and the genetic information of the gene comes from both of its parents. Specifically, each gene chooses its parents from the previous generation according to the following rules: a) with probability 1 − *r*, a single gene from the previous generation is uniformly chosen; b) with probability *r*, a recombination event occurs, and two genes are uniformly chosen from the previous generation; c) each gene chooses its parents independently.
Meanwhile, the position of the breakpoint *S* is selected (independent of the generating events of the other genes) according to a given distribution with density p(*s*). The intervals \[0,*S*) and \[*S,*1), which are from the first and second parents, respectively, form the offspring gene. Here the random variable *S* possesses a continuous distribution.
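These parent-choice rules can be sketched as a one-generation simulation. This is an illustrative sketch, not the authors' implementation: the breakpoint density p(s) is taken to be uniform on \[0,1), and the function name is ours.

```python
def next_generation(pop_size, r, rng):
    """One Wright-Fisher generation under parent-choice rules a)-c).

    Each offspring entry is either a single parent index (no
    recombination) or a tuple (parent1, parent2, s): the offspring
    copies [0, s) from parent1 and [s, 1) from parent2.  The
    breakpoint density p(s) is taken to be uniform on [0, 1).
    """
    offspring = []
    for _ in range(pop_size):              # rule c): independent choices
        if rng.random() < r:               # rule b): recombination, prob. r
            p1 = rng.randrange(pop_size)   # two parents chosen uniformly
            p2 = rng.randrange(pop_size)
            offspring.append((p1, p2, rng.random()))
        else:                              # rule a): one uniform parent
            offspring.append(rng.randrange(pop_size))
    return offspring
```

Tracing such parent assignments backward in time is exactly what yields the ARG of a sample.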
Generation of ARG along sequences with *SC* {#Sec4}
-------------------------------------------
Generating the ARG is key to simulating coalescent processes with recombination. Most studies of theoretical population genomics do not generate the ARG explicitly, even though many algorithms have been developed to do so. These include Hudson's *ms* \[[@CR6]\], Wiuf and Hein's algorithm \[[@CR7]\], Chen et al.'s MaCS \[[@CR10]\], SMC \[[@CR9]\], and SMC′ \[[@CR8]\]. These algorithms can be roughly classified into two categories according to their methodologies. One category generates the ARG back in time; Hudson's *ms* is a representative example \[[@CR6]\], and it is the most accurate algorithm because it covers the complete ARG space. The other category generates the ARG by gradually constructing a series of local trees along the sequence, as in Wiuf and Hein's algorithm \[[@CR7]\], MaCS \[[@CR10]\], SMC \[[@CR9]\], and SMC′ \[[@CR8]\]. These algorithms are collectively called spatial algorithms. In particular, MaCS, SMC, and SMC′ are approximate spatial algorithms; they can generate ARGs for longer sequences than *ms* can because they discard some information during the generation of the ARG space.
In this work, a new spatial algorithm called the Spatial Coalescence simulator (*SC*) is proposed. It generates the ARG along the sequence more accurately than other spatial algorithms do. If *X*^*s*^ denotes the \[0, *s*\] segment of the ARG, then *X*^0^ is a standard coalescent tree without recombination (the local tree at the left end of the sequence), and *X*^1^ is the desired total ARG. The basic idea underlying spatial algorithms is to construct the ARG from *X*^0^ to *X*^1^ step by step. This process can be generalized into the following brief steps; the full version can be found in Methods.
Step 1. Let *j* = 0 and *S*~0~ = 0. Build a standard coalescent tree *X*^0^.

Step 2. Generate a recombination breakpoint *S*~*j*\ +\ 1~ in the interval \[*S*~*j*~, 1) and choose a location on the current ARG *X*^S*j*^.

Step 3. Build a new ARG *X*^S*j+*1^ by adding a new coalescent branch to the current ARG *X*^S*j*^; the coalescent event begins at the selected location and can end at any position of *X*^S*j*^.

Step 4. Repeat steps 2 and 3 until *S*~*j*+1~ \> 1, and then take the current ARG *X*^S*j*^ as the total ARG *X*^1^.
With a joint consideration of all five classification types of recombination, the following conclusions can be reached: 1) the location of a type 1 recombination must be on the current local tree rather than anywhere on the whole current ARG *X*^*Sj*^; 2) the location of a type 2 recombination must be on the other branches of *X*^*Sj*^, which are called 'old branches' (see Figure [2](#Fig2){ref-type="fig"} for an example of *X*^*Sj*^ and old branches); 3) not all recombination events located on old branches are type 2 recombination. Further, because each branch of the current local tree contains ancestral material at site *S*~*j*~, and the next breakpoint is *S*~*j*+1~, each such branch contains ancestral material on \[*S*~*j*~, *S*~*j*+1~), so a recombination on it must be type 1. With respect to recombination on old branches, the only information available to an algorithm that generates the ARG without redundant branches when determining *X*^S*j+*1^ is that there could be ancestral material on \[0, *S*~*j*~). It is also certain that there is no ancestral material on \[*S*~*j*~, *S*~*j*+1~), but it is not clear whether there is ancestral material on \[*S*~*j*+1~, 1).Figure 2**An example of a current ARG.** The graph displays the \[0, 0.5\] part of an ARG. The black lines represent branches constituting the current local tree, the gray lines represent old branches, the numbers in brackets give the intervals of ancestral material carried by the nearby branches, and the numbers without brackets denote the recombination breakpoints at the underlying nodes.
Based on the conclusions described above, the following can be obtained:

- Wiuf and Hein's algorithm simulates type 1, type 2, and the other three types of recombination events, because it fixes every recombination breakpoint in step 2 without determining its type. Wiuf and Hein's algorithm therefore generates more redundant information than *ms* does (*ms* simulates types 1 and 2 but no unnecessary recombination \[[@CR8]\]).

- MaCS simulates only type 1 recombination events and ignores the other types. This is because, during step 2, MaCS fixes type 1 recombination breakpoints by choosing a location on the current tree rather than on the whole current ARG. In this way, MaCS discards type 2 recombination information that *ms* retains.
The key problem for spatial algorithms is therefore distinguishing type 2 from type 3 recombination and adding the useful recombination events to the final total ARG.
To solve this problem, a new algorithm is here proposed that distinguishes type 2 from type 3 events, so that type 2 rather than type 3 recombination events are entered into previous local trees after the later type 1 breakpoints have been fixed. Specifically, the second step differs from that of Wiuf and Hein's algorithm: recombination breakpoints are placed on the current local tree instead of the current ARG, so that only type 1 recombination breakpoints are fixed in step 2. Step 3 is then refined: a coalescent event beginning at the selected location is added to the current tree, and the end of the coalescent event is also located. If that end lies on an old branch, type 2 recombination breakpoints can be traced back along this old branch. In this way, a full ARG can be built without missing any type 2 recombination events.
In this way, the new algorithm can be formulated as a sequence of random variables {(*S*~*i*~, *Z*^i^), i ≥ 0}, where *S*~*i*~ are the type 1 recombination breakpoints and *Z*^i^ describes the branches added at the break location of the ARG *X*^S*i-*1^ (Figure [3](#Fig3){ref-type="fig"}). They include type 2 recombination breakpoints and corresponding coalescent events.Figure 3**Generation of ARG along sequence.** *X* ^*s*^ denotes \[0, s\] segments of the ARG, *S* ~*i*~ denotes the *i* ^th^ type 1 recombination breakpoints and *Z* ^*i*^ denotes the branches added to *X* ^Si*-*1^. *X* ^Si^ is the collection of *Z* ^*i*^ and *X* ^Si*-*1^.
In fact, the probability distribution on the ARG space generated by this algorithm, which is based on a spatial model along the sequence, is identical to that produced by an algorithm based on the back-in-time model. To see this intuitively, consider constructing the whole ARG by constructing its restriction (projection) on \[0, *s*) from *s* = 0 to *s* = 1. The difference between the restrictions to \[0, *S*~*j*~) and \[0, *S*~*j*\ +\ 1~) (*X*^S*j*^ and *X*^S*j+*1^, respectively) can be more than one branch, because any recombination with a breakpoint in \[*S*~*j*~, *S*~*j*\ +\ 1~) must be recovered no later than when *X*^S*j+*1^ is obtained. So, to get *X*^S*j+*1^, one or more branches are added to *X*^S*j*^.
Experimental support for this equivalence is given in the section on the performance of *SC* and *SC*-sample. The details of the present algorithm's procedures can be found in the Methods section, and the mathematical framework of the algorithm is provided in another paper \[[@CR16]\].
Generate sample-consistent ARG {#Sec5}
------------------------------
The idea of sample-consistent ARGs first appeared in the work of Song \[[@CR17]\], where ARGs were used to estimate the minimal number of recombination events. ARGs with the minimal number of recombination events certainly help the study of recombination, but we do not think such ARGs represent the true ARG. Therefore, we attempt to design an algorithm that models a group of ARGs consistent with the sample in a reasonable way, rather than simply producing all the ARGs in the whole ARG space or generating only ARGs with the minimal number of recombination events. We believe that, in this way, our method generates sample-consistent ARGs that reflect the true genealogical information of the sample, which will help estimate parameters of population demographic history.
In the present study, we further modified the *SC* algorithm to generate sample-consistent ARGs. An ARG is called sample-consistent if the given sample of sequences can be generated from the ARG under the infinite sites model, which means that each site on the sequences can be explained by the corresponding local tree of the ARG. An example of a sample-consistent ARG is given in Additional file [1](#MOESM1){ref-type="media"}: Figure S1. Notably, sample-consistent ARGs form a very small part of the full ARG space: in our simulation study with *ms*, fewer than 10 sample-consistent ARGs were found in millions of simulated ARGs. The algorithm described above, *SC*, was therefore modified slightly into a new algorithm called *SC*-sample, with which sample-consistent ARGs can be generated. Suppose the sample sequences are coded with 0 and 1 for the different alleles; the procedure of the *SC*-sample algorithm can then be described as follows:

Step 1. Generate a standard coalescent tree *X*^*0*^ that is consistent with the left-most site of the sequence.

Step 2. One by one, confirm whether the following sites are consistent with the current tree *X*^*0*^, and find the first inconsistent site *P*~1~. Then generate a breakpoint of type 1 recombination *S*~1~ in the interval \[0, *P*~1~\], and choose a location on the current ARG *X*^*0*^.

Step 3. Build a new ARG X^S1^ by adding branches to *X*^*0*^ at the chosen location so that the local tree after *S*~*1*~ is consistent with the first site after *S*~*1*~.
Repeat steps 2 and 3 until *S*~*j*\ +\ 1~ \> 1 to obtain an ARG consistent with the sample over the full sequence.
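The per-site consistency check used in step 2 can be sketched as follows: under the infinite sites model, a site fits a tree exactly when the carriers of one allele form a clade (the ancestral state at the root is unknown, so either allele may be the derived one). The child-map representation and function names below are ours:

```python
def clades(children, root):
    """Return the set of leaf-sets (frozensets) below every node."""
    out = {}

    def walk(v):
        kids = children.get(v)
        if not kids:                       # leaf
            out[v] = frozenset([v])
        else:
            s = frozenset()
            for c in kids:
                walk(c)
                s |= out[c]
            out[v] = s

    walk(root)
    return set(out.values())

def site_consistent(children, root, column):
    """True if the 0/1 column (a map leaf -> allele) fits the tree
    under the infinite sites model: the carriers of one of the two
    alleles form a clade (either allele may be the derived one, and
    a monomorphic site needs no mutation at all)."""
    ones = frozenset(v for v, a in column.items() if a == 1)
    zeros = frozenset(column) - ones
    if not ones or not zeros:              # monomorphic: always fits
        return True
    cl = clades(children, root)
    return ones in cl or zeros in cl
```

Scanning the sites left to right with this check yields the first inconsistent site *P*~1~ of step 2.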
Performance of *SC*and *SC*-sample {#Sec6}
----------------------------------
It has been proven that the distribution, i.e. the statistical properties, of the ARGs generated by *SC* coincides with that generated by *ms* \[[@CR6]\]. MaCS (*h* = *L*) simulates all type 1 but no type 2 recombination. To verify the equivalence of *SC* and *ms* and to assess the influence of type 2 recombination, the mean and variance of the heights of the first 100 local trees generated by *SC*, *ms*, and MaCS were compared (Figure [4](#Fig4){ref-type="fig"}, Additional file [2](#MOESM2){ref-type="media"}: Figure S2). The difference between *SC* and *ms* and that between MaCS and *ms* were measured separately by the relative differences in the mean and variance of the local tree heights, taking the mean and variance of *ms*'s local tree heights as the reference (details in Methods). Twenty haplotypes were simulated for a total of 100,000 rounds with *ρ*(=4*N*~*e*~*Lr*~*p*~) of 100 at *L* = 167 kb.Figure 4**Comparison of differences in the mean and variance of the first 100 local trees' heights between SC and MaCS using** ***ms*** **as a control.** Boxplots with the 75% quantile and 25% quantile as the top and bottom borders, respectively.
The present results show that *SC* is more similar to *ms* than MaCS is with respect to both the mean and variance of the TMRCA. This confirms the theoretical result that ARGs generated by *SC* and *ms* share the same statistical properties, and indicates that *SC* performs better than MaCS in modeling the ARG.
In practice, it is very important to take into account the time cost and RAM usage of the computer programs that implement the algorithms. Both values were averaged over 10 replicates for *SC* and *ms*. Results are shown in Table [1](#Tab1){ref-type="table"} for *n* = 20, *n* = 100 and *n* = 1000 sequences using different recombination rate parameters *ρ* = 4*N*~*e*~*Lr*~*p*~. The results show that *ms* runs somewhat faster but needs more RAM, although *SC* and *ms* share similar statistical properties. In addition, even the latest version of *ms* cannot simulate sequences as long as those *SC* can.

Table 1 **Comparison of average time cost and memory usage between *SC* and *ms***

| Sample size | Region (*L*) | *ρ* (4*N*~*e*~*Lr*~*p*~) | *SC* | *ms* |
|---|---|---|---|---|
| 20 | 10 Mb | 1000 | 45 s (14 MB) | 13 s (63 MB) |
| 20 | 10 Mb | 10,000 | 3 h 43 min 22 s (210 MB) | 2 h 3 min 53 s (344 MB) |
| 20 | 100 Mb | 1000 | 57 s (14 MB) | N/A |
| 20 | 100 Mb | 10,000 | 4 h 30 min 18 s (238 MB) | N/A |
| 100 | 10 Mb | 1000 | 1 min 57 s (19 MB) | 23 s (148 MB) |
| 100 | 10 Mb | 10,000 | 7 h 11 min 34 s (281 MB) | N/A |
| 100 | 100 Mb | 1000 | 2 min 11 s (20 MB) | N/A |
| 100 | 100 Mb | 10,000 | 7 h 13 min 3 s (289 MB) | N/A |
| 1000 | 1 Mb | 1000 | 3 min 13 s (26 MB) | 12 s (215 MB) |
| 1000 | 10 Mb | 1000 | 3 min 20 s (26 MB) | N/A |

Memory usage (MB) is in parentheses. Mb: sequence length in million base pairs. N/A entries denote test cases that were terminated when run on a server with 4 CPUs of 2.40 GHz and 24 GB of total memory. *ρ* denotes the population recombination rate.
Next, the performance of *SC*-sample was evaluated. Two hundred samples were generated with different recombination and mutation rates. *SC*-sample was then used to generate 1000 ARGs consistent with each sample, and consistency was confirmed by adding mutations to the ARGs. The experiment was then repeated with the recombination and mutation parameters of humans: 100 independent genealogies of 20 chromosomes were generated and complete sequences were simulated, each with an interval of 30 kb. A constant *c* = 1.13 *cM*/*Mb* was assumed, which is the sex-averaged recombination rate \[[@CR11]\]. The per-site mutation rate was assumed to be 1 × 10^-8^, and the effective population size was assumed to be 12,500. The ratios of strictly sample-consistent ARGs generated by *SC*-sample and *SC* were calculated; the results are shown in Figure [5](#Fig5){ref-type="fig"}.Figure 5**Ratio of strictly sample-consistent ARGs to all ARGs.** ARGs are not considered strictly sample-consistent unless they are sample-consistent and the number of type 1 recombination events of the ARG is within 10% of the estimate, using 100,000 simulations in 4 different cases. Case 1: ρ = 10, μ = 10 with *SC*-sample. Case 2: ρ = 50, μ = 50 with *SC*-sample. Case 3: ρ = 16.9, μ = 7.5 with *SC*-sample, which employs the mutation and recombination rates in humans. Case 4: ρ = 10, μ = 10 with *SC*.
Because no other sample-consistent algorithm is available for comparison, the performance of the *SC*-sample algorithm could not be fully evaluated. However, the results show that randomly chosen sample-consistent ARGs directly reveal some recombination information about the sample: the numbers of recombination events of the sample-consistent ARGs are close to the expected number for the samples (Figure [6](#Fig6){ref-type="fig"}). This means that *SC*-sample generates a sufficient number of ARGs close to the true recombination history without generating many ARGs inconsistent with the samples. *SC*-sample may be very helpful for estimating the recombination rate in the future, considering that full sequence data are now becoming available.Figure 6**Distribution of the number of type 1 recombination events in ARGs generated with SC-sample.** The red vertical line indicates the expected number of type 1 recombination events in the particular scenario.
Discussion {#Sec7}
==========
In the present study, a new method for modeling coalescent processes with recombination was developed. This method offers some improvements over Wiuf and Hein's method. It covers all the commonly used spatial simulation algorithms with approximations, i.e. SMC, SMC′, and MaCS. Based on this method, a new algorithm, *SC*, was developed for the simulation of ARG and the generation of data. This algorithm has been shown to be able to simulate ARG with the same distribution as that produced by a back-in-time simulation algorithm. Another relevant algorithm, *SC*-sample, was also developed for generating sample-consistent ARG. The present method and algorithms have considerable potential to facilitate modeling and statistical inference of recombination.
Another study showed that the distribution of ARGs generated by the new algorithm is identical to that generated by a typical back-in-time model \[[@CR16]\]. In the present study, computer simulation experiments confirmed that the back-in-time method and the along-sequence method share this feature in the simulation of ARGs. In practice, *SC* takes slightly more time than *ms* for the same simulation but uses less RAM. However, *ms* does not work as well as *SC* when the sequence is very long (e.g., 100 Mb). These comparisons indicate that neither method generates ARGs for long sample sequences in a fully satisfactory manner. Considering this situation and the rapid accumulation of huge genomic data sets, one possible solution, or tradeoff, is to use approximate ARGs in place of the full ARG.
Several approximate methods have been developed for the simulation of ARGs, such as the sequentially Markov coalescent (SMC), a related method called SMC′, and the Markovian coalescent simulator (MaCS). These existing methods can be considered special cases of the present method. Marjoram and Wall classified recombination into 5 types \[[@CR8]\]; the approximate methods all ignore type 2 recombination and some of the coalescent events associated with it, which may affect ARG reconstruction and the statistical inference of recombination.
The influence of type 2 recombination on the ARG was assessed by comparing the statistical properties of *ms*, *SC*, and MaCS (h = L). Ignoring type 2 recombination reduces both the mean and variance of the times to the MRCA, although the reduction is not very large. These results indicate that type 1 recombination might play a more important role in history than type 2 recombination. However, type 2 recombination occurs extensively and is much more common than type 1 when long sequences are considered, suggesting that it should not be ignored in simulation, especially for long sequences. Therefore, an algorithm that takes type 2 recombination into account should be used for the simulation of long sequences. This is one of the reasons why the *SC* algorithm was developed.
Regardless of which algorithm is used, the simulation of recombination is complex and time consuming, and many non-sample-consistent ARGs are generated. For this reason, a new concept, the sample-consistent ARG, was developed, and an algorithm, *SC*-sample, was used to simulate sample-consistent ARGs. Simulations based on *SC*-sample save a considerable amount of time and significantly increase efficiency over any non-sample-consistent algorithm.
Taken together, the two algorithms developed in this study improve the modeling of coalescence with recombination. In a future study, new approximate methods should be developed to handle large-scale simulation of big data. Coalescence with recombination can be modeled using a random sequence {(*S*~*i*~, *Z*^*i*^) : *i* ≥ 0}, and the different methods of approximation actually use different *S*~*i*~ and *Z*^*i*^. One possible direction is to approximate the random sequence {(*S*~*i*~, *Z*^*i*^) : *i* ≥ 0} from the mathematical side. Such methods, when well established, could greatly facilitate studies of recombination modeling and recombination rate estimation.
Conclusions {#Sec8}
===========
In this study, we developed a new method for modeling coalescent processes with recombination, and we demonstrated that our method has performance comparable to that of *ms*, a computer program commonly used to simulate coalescence, in generating ARGs. An outstanding feature of our method is that it does not produce any of the redundant branches that are inevitable in Wiuf and Hein's algorithm. In addition, our method can generate sample-consistent ARGs. Interestingly, we showed that the existing approximate methods (SMC, SMC′, MaCS) are all special cases of our method. We believe our new method and algorithms will facilitate the modeling of recombination and advance our understanding of the evolution of recombination events within and between populations.
Methods {#Sec9}
=======
SC {#Sec10}
--
As a modified version of MaCS, *SC* can be outlined as follows. The algorithm recursively constructs the partial graph X^*Si*^, and each branch is assigned a label *k* ≤ *i*; all the branches with label *i* form the current local tree. This procedure is illustrated in Figure [7](#Fig7){ref-type="fig"}. Throughout the paper, *s* denotes a site on the DNA sequence and *t* denotes a time, corresponding to a latitude on the ARG. In the following, step 1 is the initialization, and steps 2-6 form an outer loop that constructs the full ARG. Step 2 finds the type 1 recombination breakpoints and defines the termination condition of the outer loop; step 3 chooses the location of the type 1 recombination on the current tree; step 4 handles the coalescence of the new branch caused by the type 1 recombination; step 5 uses an inner loop to find all the type 2 recombination events; and step 6 updates the labels to distinguish the current tree from the old branches before the next round of the outer loop begins. Additional file [3](#MOESM3){ref-type="media"}: Figure S3 shows an example of the SC method.Figure 7**Updated steps of** ***SC*** **to generate a new ARG from the current ARG.** The same step numbers and case numbers as in the Methods section are used here. In step 3, a new type 1 recombination is created on the current ARG. In step 4, the new branch coalesces into an old branch or a branch on the current tree. If the new branch coalesces into the current tree, a new ARG has been constructed. If the new branch coalesces into an old branch, there are two cases, case 5.1 and case 5.2. In case 5.2, a new ARG is generated. In case 5.1, a new branch is generated, which is dealt with in step 4. When a new ARG is generated, it becomes the current ARG and a new round begins. For more details, see the Methods section.
Step 1. Construct a standard coalescent tree (c.f. \[[@CR2]\]) at the position *S*~0~ = 0 (the left endpoint of the sequence) and assign each branch of the tree the label 0.
Step 2. Assume that the partial graph *X*^S*i*^ has already been constructed along with its local tree. Take the next recombination point *S*~*i*\ +\ 1~ along the sequence according to the distribution P(*S*~*i*\ +\ 1~ \> *s* \| *S*~*i*~) = exp(−(*ρ*/2) *L*~*i*~ ∫ p(*u*)d*u*), with the integral taken from *S*~*i*~ to *s*.
Here, *L*~*i*~ is the total branch length of the current local tree, *ρ* is the global population recombination rate, and p(*u*) is the density of the distribution of the breakpoint (see Model assumptions for the explanation of p(*u*)). If *S*~*i*\ +\ 1~ ≥ 1, stop; otherwise, go to step 3.
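If the breakpoint density p(*u*) is uniform, the distance past *S*~*i*~ to the next type 1 breakpoint is exponential with rate (*ρ*/2)·*L*~*i*~ (the standard spatial waiting distance), so the point can be sampled in one line. A sketch under this assumption; the function name is ours:

```python
def next_breakpoint(s_i, tree_length, rho, rng):
    """Sample the next type 1 recombination point S_{i+1}.

    Assumes a uniform breakpoint density p(u) = 1, so the distance
    past s_i is exponential with rate (rho / 2) * L_i, where L_i is
    the total branch length of the current local tree.  A result >= 1
    means the end of the sequence was reached (the loop stops).
    """
    return s_i + rng.expovariate(0.5 * rho * tree_length)
```

Note that longer current trees (larger *L*~*i*~) make the next breakpoint arrive sooner, as expected.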
Step 3. Uniformly choose a recombination location on the current local tree. Let *j* = 0, and record the latitude (i.e. the height from the bottom of the graph) of the chosen location.
Step 4. At the site of recombination, a new branch with label *i* + 1 is created by forking off the recombination node and moving backward in time (i.e. in the direction of increasing latitude). The new branch coalesces, at exponential rate 1 per branch, to each branch of the current graph that lies at a higher latitude than the current location. Thus, if there are *l* branches at the current latitude, the waiting time before coalescence is exponentially distributed with parameter *l*; note that at different latitudes there may be a different number *l* of branches. Call the branch to which the new branch coalesces the EDGE, record the latitude of the coalescent point, and let *j* = *j* + 1.
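Because the number of candidate branches *l* is piecewise constant in latitude, the coalescent latitude can be sampled segment by segment. This is an illustrative sketch with our own segment layout (each segment is (lower, upper, l), the top segment open-ended), not the *SC* source:

```python
def coalescence_latitude(start, segments, rng):
    """Sample the latitude at which the new branch coalesces.

    `segments` is a latitude-sorted list of (lower, upper, l): between
    `lower` and `upper` there are l candidate branches, so the waiting
    distance runs at total exponential rate l (rate 1 per branch).
    The last segment should be open-ended (upper == float('inf')) so
    that a coalescence always happens.  `start` is the latitude of
    the recombination location.
    """
    for lower, upper, l in segments:
        lo = max(start, lower)             # memoryless: restart per segment
        if lo >= upper or l == 0:
            continue
        w = rng.expovariate(l)             # waiting distance at rate l
        if lo + w < upper:
            return lo + w                  # coalesces inside this segment
    raise ValueError("last segment must be open-ended")
```

Restarting the exponential clock at each segment boundary is valid because the exponential distribution is memoryless.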
Step 5. If the EDGE is labeled *i*, the new branch has coalesced to the current tree; go to step 6. If the EDGE is labeled *k* with *k* less than *i*, the new branch has coalesced to an old lineage, and a potential recombination event on the EDGE must be considered; the waiting time *t* of this possible recombination event is exponentially distributed.
Case 5.1. If the resulting point lies below the latitude of the upper node of the EDGE, denoted by *H*, then it is the next recombination location; the part of the branch above it is no longer called EDGE. Let *j* = *j* + 1 and go to step 4.
Case 5.2. Otherwise, choose the upper edge of the current EDGE with the larger label to be the next EDGE. Let *j* = *j* + 1 and go to step 5.
Step 6. Collect all the branches of the current local tree together with all the new branches labeled *i* + 1. Starting from each node 1 ≤ *m* ≤ *n* at the bottom of the graph, specify a path moving along the edges in increasing latitude until it reaches the top of the graph; whenever a recombination node is encountered, choose the edge with the larger label. The collection of all the paths then forms the new local tree. Update all the branches of this tree with the label *i* + 1.
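The path-following rule of step 6 (at each recombination node, take the parent edge with the larger label) can be sketched with a simple parent map. The data layout and function name are ours, not the *SC* source:

```python
def local_tree_paths(parents, leaves):
    """Edges of the local tree obtained by step 6's path rule.

    `parents` maps each node to a list of (parent, label) pairs:
    ordinary nodes have one pair, recombination nodes two, and the
    top node an empty list.  From every leaf, walk upward choosing
    the parent edge with the larger label at each recombination node.
    """
    edges = set()
    for leaf in leaves:
        v = leaf
        while parents.get(v):
            parent, _ = max(parents[v], key=lambda e: e[1])
            edges.add((v, parent))
            v = parent                     # continue toward the top
    return edges
```

The union of the leaf-to-top paths is exactly the new local tree, whose branches are then relabeled *i* + 1.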
It is here noted that step 5 contains the key difference between the present method and other spatial algorithms: this step finds the type 2 recombination events that other algorithms miss. Essentially, the present algorithm gives the new branch a chance to leave when it coalesces into an old branch.
Actually, the existing approximate algorithms SMC, SMC′, and MaCS can all be considered special cases of the present random sequence framework. The only difference is that these algorithms use a simpler *Z*^*i*^ than the present method does; their approximations are denoted *Z*^*i*^-*SMC*, *Z*^*i*^-*SMC*′, and *Z*^*i*^-*MaCS*, respectively. One of the main differences is that *Z*^*i*^ may construct many branches, while *Z*^*i*^-*SMC*, *Z*^*i*^-*SMC*′, and *Z*^*i*^-*MaCS* each constructs only one branch (Figure [8](#Fig8){ref-type="fig"}). When a new branch coalesces to a branch whose label is less than *i*, *SC* allows the branch to leave because of the otherwise missing type 2 recombination, while the other three algorithms ignore these recombination events and do not allow the branch to leave. The other difference is that, in step 4 (see above), *SC* allows the new branch to coalesce to every branch of the current graph, while the other three algorithms only allow the new branch to coalesce to certain branches: SMC allows coalescence to the branches with label *i* except the branch where the recombination occurs, SMC′ allows coalescence to all the branches with label *i*, and MaCS restricts the coalesced branch to those with labels larger than *i* − *k*, where *k* is a fixed integer. In summary, all these existing approximation algorithms use a simpler version of *Z*^*i*^ that allows the new branch to coalesce only once and only to particular branches.Figure 8**A schematic diagram of the update steps of SMC, SMC′, and MaCS under our framework.** Regardless of which branch the new branch coalesces into in step 4, a new ARG is constructed.
*SC*-sample {#Sec11}
-----------
In order to relate a coalescent tree to the sample, a value from the sample is assigned to each leaf node. Suppose the sample sequences are coded by 0/1; the other nodes of the tree can then be valued by 0 or 1, and a mutation exists on an edge if the values of its top and bottom nodes differ. Different valuation schemes give different mutation edges, so some schemes give the coalescent tree the minimum mutation number (MMN). The following algorithm computes the MMN.
Step 1. Value the coalescent tree from the bottom to the top according to the following rules: (a) value a node with 1 if its two child nodes both have value 1; (b) value a node with 0 if its two child nodes both have value 0; (c) value a node with 2 if one of its two child nodes has value 0 and the other has value 1; (d) value a node with 1 if one of its two child nodes has value 1 and the other has value 2; (e) value a node with 0 if one of its two child nodes has value 0 and the other has value 2; (f) value a node with 2 if its two child nodes both have value 2.

Step 2. For the top node of the coalescent tree, (a) change its value to 0 if it has value 2; (b) keep its value if it is 0 or 1.

Step 3. Revalue each 2-valued node with its parent node's value.

Step 4. The number of mutation edges is the MMN.
See Additional file [4](#MOESM4){ref-type="media"}: Figure S4 for an example of the MMN algorithm.
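The valuation rules above amount to Fitch's small-parsimony algorithm for a binary character, with the value 2 standing for "either 0 or 1". A minimal sketch (function and variable names are ours):

```python
def mmn(children, root, leaf_value):
    """Minimum mutation number for a 0/1 character on a binary tree.

    `children` maps each internal node to its two children;
    `leaf_value` maps each leaf to 0 or 1.  Returns (count, value),
    where `value` maps every node to its final 0/1 state and `count`
    is the MMN (number of edges whose endpoints differ).
    """
    value = dict(leaf_value)

    def up(v):                             # step 1: bottom-up valuation
        if v in value:
            return value[v]
        a, b = (up(c) for c in children[v])
        if a == b:
            value[v] = a                   # rules (a), (b), (f)
        elif 2 in (a, b):
            value[v] = a if b == 2 else b  # rules (d), (e)
        else:
            value[v] = 2                   # rule (c): disagreement
        return value[v]

    up(root)
    if value[root] == 2:                   # step 2: resolve the root
        value[root] = 0

    def down(v):                           # step 3: push values down
        for c in children.get(v, ()):
            if value[c] == 2:
                value[c] = value[v]
            down(c)

    down(root)
    # step 4: count edges whose endpoints differ
    count = sum(value[c] != value[v]
                for v in children for c in children[v])
    return count, value
```

For a site that is consistent with the tree the MMN is at most 1; in step 3 of *SC*-sample, a site whose MMN exceeds 1 is the first inconsistent site.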
Based on *SC*, a method capable of directly generating ARGs consistent with the sample sequences was implemented. The basic idea is to impose constraints during the gradual construction of the ARG so that every local tree of the ARG is sample-consistent. The algorithm, *SC*-sample, is a modified version of *SC*; the differences between *SC*-sample and *SC* are as follows:

Step 1. At position *S*~0~ = 0, a coalescent tree *T*~0~ is constructed as a modification of the standard coalescent: (a) give the value of the left-most (first) site of the sample to each leaf node; (b) randomly choose two nodes with the same value to coalesce, and give the same value to the parent node; (c) when only one node with value 0 or 1 remains, the simulation proceeds as the standard coalescent.

Step 2. Assign each leaf node of the current tree the values of the sites after the current position of the sample, and then use the MMN algorithm above to value every node of the current tree, ensuring that the number of edges whose top and bottom values differ is minimal. In this way, each node on the current tree is valued by a 0-1 vector. Edges that have different values at the top and bottom nodes at a given site are called mutation edges of that site.

Step 3. Denote the first site that has more than one mutation edge as *P*~*i*\ +\ 1~. The next recombination point *S*~*i*\ +\ 1~ is uniformly chosen between *S*~*i*~ and *P*~*i*\ +\ 1~ (or *S*~*i*\ +\ 1~ is regenerated as before until *S*~*i*\ +\ 1~ \< *P*~*i*\ +\ 1~).

Step 4. Randomly choose a recombination location on the mutation edges of the site at position *P*~*i*\ +\ 1~.

Step 5. The new lineage can only coalesce into 3 types of edges: type A, edges in the current tree whose values from *S*~*i*~ to *P*~*i*\ +\ 1~ are all the same; type B, old branches that lead to type A edges in the current tree; type C, branches beyond the local MRCA.
All other parts of the *SC* and *SC*-sample methods are the same. Figure [9](#Fig9){ref-type="fig"} shows *SC*-sample dynamically.Figure 9**Generation of a sample-consistent ARG with SC-sample. A)** Generation of a binary tree consistent with the left site. **B)** Determine whether the current local tree is consistent with the second site; the answer is yes, since there is only one mutant edge. **C)** Determine whether the current local tree is consistent with the third site. This tree is not consistent because there are two mutant edges, so P~1~ = 0.5. **D)** Generate the next recombination point uniformly on \[0, P~1~\] and obtain P~2~ = 0.4. The dotted lines are the branches onto which the new branches are supposed to coalesce. **E)** The new branch coalesces and the \[0, 0.4\] part of the ARG is simulated.
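Step 1 of *SC*-sample (coalesce only same-valued lineages until a single lineage of each value remains) can be sketched as follows. The representation is ours, and coalescence waiting times are omitted for brevity:

```python
import itertools

def first_site_tree(leaf_value, rng):
    """Merge order of a coalescent tree consistent with the first site
    (step 1 of SC-sample).

    `leaf_value` maps each leaf to its 0/1 allele at the left-most
    site.  While some value still has two or more lineages, only
    same-valued pairs may coalesce (rule (b)); after that, coalescence
    is standard (rule (c)).  Returns a list of (child1, child2,
    parent) merge events, oldest last.
    """
    lineages = dict(leaf_value)
    events, nxt = [], 0
    while len(lineages) > 1:
        zeros = [k for k, v in lineages.items() if v == 0]
        ones = [k for k, v in lineages.items() if v == 1]
        same = [p for pool in (zeros, ones)
                for p in itertools.combinations(pool, 2)]
        if same:
            a, b = rng.choice(same)                 # rule (b)
        else:
            a, b = rng.sample(list(lineages), 2)    # rule (c)
        parent = "anc%d" % nxt
        nxt += 1
        lineages[parent] = lineages.pop(a)          # parent keeps the value
        del lineages[b]
        events.append((a, b, parent))
    return events
```

Because each value coalesces among itself first, the carriers of each allele form a clade, so the resulting tree is consistent with the first site by construction.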
A uniform distribution is used when generating the next recombination point because the algorithm was designed to be independent of any prior recombination rate. Any ARG generated using this method can be viewed as a randomly selected ARG consistent with the sample. In principle, the other algorithms that are special cases of our model (such as SMC and MaCS) can be modified in the same way to generate ARGs consistent with the sample.
Calculation of the differences among ARGs by the mean and variance of the heights of local trees {#Sec12}
------------------------------------------------------------------------------------------------
In order to study the differences among the ARGs generated using *SC*, MaCS, and *ms*, the mean and variance of the heights of the first 100 local trees were compared. Considering that the mean and variance of the tree height vary from the first to the last local tree, a new method is proposed to measure the difference of ARGs. With the mean values of the *i*^th^ local tree's height generated by *SC* and *ms*, the difference between *SC* and *ms* was calculated as follows:
For the variance of the *i*^th^ local tree's height, the difference is as follows:
Here, the quantity in the first expression denotes the mean height of the *i*^th^ local tree generated by *SC*, and the quantity in the second denotes the variance of the height of the *i*^th^ local tree. Based on the mean and variance of the local trees' heights, the difference among ARGs generated by different algorithms can thus be estimated.
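As a hypothetical sketch of this comparison (the paper's exact expressions are not reproduced in this text, so the use of per-tree absolute differences below is our assumption), the per-tree differences could be computed as:

```python
from statistics import mean, variance

# Hypothetical sketch: compare two ARG simulators by the mean and variance of
# local-tree heights. Per-tree absolute differences are our assumption; the
# paper's exact formulas are not reproduced in this text.
def height_differences(heights_a, heights_b):
    """Each argument is a list indexed by local tree i; entry i holds the
    replicate heights of the i-th local tree under one simulator."""
    mean_diffs = [abs(mean(a) - mean(b)) for a, b in zip(heights_a, heights_b)]
    var_diffs = [abs(variance(a) - variance(b)) for a, b in zip(heights_a, heights_b)]
    return mean_diffs, var_diffs

# Toy data: two local trees, three replicates each.
sc = [[1.0, 2.0, 3.0], [2.0, 2.0, 2.0]]
ms = [[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]]
print(height_differences(sc, ms))  # ([0.0, 0.0], [0.0, 1.0])
```

In the paper, such per-tree differences are summarized over the first 100 local trees and over many simulation rounds.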
Availability and requirements {#Sec13}
-----------------------------
The algorithms for *SC* and *SC*-sample are implemented in C++ by modifying the code of MaCS. *SC* implements the same set of demographic models used in *ms*, but *SC*-sample can only handle the classical homogeneous effective population size model (constant population size). Like MaCS, *SC* accommodates variation in the recombination rate using a piecewise constant model, as well as intragenic gene conversion \[[@CR18]\]. The source code of SC and SC-sample can be downloaded from the website: <http://www.picb.ac.cn/PGG/resource.php>
Project name: SC and SC-Sample Project home page: <http://www.picb.ac.cn/PGG/resource.php> Operating system: GNU/Linux Programming language: C++ Other requirements: g++ version 4.4.6 or higher; boost version 1.41 or higher License: GNU GPL Any restrictions to use by non-academics: license needed
Electronic supplementary material
=================================
{#Sec14}
######
Additional file 1: Figure S1: An example of a sample-consistent local tree. Each leaf denotes one site of a gene which is coded by 0/1. A sample-consistent local tree denotes a binary tree that follows the infinite-site model, in which all the nodes labeled 1 coalesce first or all the nodes labeled by 0 coalesce first. (PDF 17 KB)
######
Additional file 2: Figure S2: Comparison of differences in the mean and variance of the first 100 local trees' heights between SC and MaCS, using *ms* as a control. Boxplots with the 75% and 25% quantiles as the top and bottom borders, respectively. Twenty haplotypes were simulated for a total of 10,000 rounds with *ρ*(=4*N* ~*e*~ *Lr* ~*p*~) of 1000 at *L* = 167 kb. (PDF 5 KB)
######
Additional file 3: Figure S3: An example of the SC method. An example of an ARG. B, C and D describe the way the ARG is generated. The black thick branches make up the current tree. The gray branches are all old branches. The dashed lines are the path of the new branch. The thin black lines are un-simulated branches. The numbers in brackets display intervals which denote the ancestral materials carried by nearby branches. The numbers without brackets denote the recombination rates at the underlying nodes. In B, C and D, the numbers near the edges are the labels used by the SC method. (PDF 190 KB)
######
Additional file 4: Figure S4: An example of the MMN algorithm. Step 1, value the leaf nodes with the sample. Step 2, value each node from bottom to top. Step 3, revalue each 2-valued node. Step 4, the thick lines denote mutation branches, giving MMN = 3. (PDF 54 KB)
**Competing interests**
The authors have declared that no competing interests exist.
**Authors' contributions**
Designed, coordinated and supervised the study: ZM and SX. Developed computer program: LL and YL. Performed the data analysis: YW, XC and YZ. Contributed reagents/materials/analysis tools: ZM and SX. Wrote the paper: YW, YZ, XC,YL, ZM and SX. All authors read and approved the final manuscript.
We would like to thank our colleagues for their helpful discussions. In particular, we are greatly indebted to Drs. De-Xin Zhang and Wei-Wei Zhai for offering their valuable insights and constant encouragement throughout this work. These studies were supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS) (XDB13040100), by the 973 project (2011CB808000), NSFC Creative Research Groups (11021161), National Natural Science Foundation of China (NSFC) grants (91331204, 31171218, 11001010), and by the Fundamental Research Funds for the Central Universities (2011JBZ019, 2012RC048). S.X. is a Max-Planck Independent Research Group Leader and member of the CAS Youth Innovation Promotion Association. Z-M.M. also gratefully acknowledges the support of the National Center for Mathematics and Interdisciplinary Sciences (NCMIS). S.X. also gratefully acknowledges the National Program for Top-notch Young Innovative Talents of the \"Ten-Thousand-Talents\" Project and the support of the K.C. Wong Education Foundation, Hong Kong. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
| |
A school, a year and 3 initials: Investigators search for owner of ring found at pawn shop
Deputies aren't sure if it was stolen, but they're concerned the owner of this Medical College of Georgia ring might be searching for it.
Author:
Christopher Buchanan
Published:
4:37 PM EDT October 22, 2019
Updated:
4:37 PM EDT October 22, 2019
GREENE COUNTY, Ga. — A Georgia sheriff's office is on the lookout for someone who may be missing a class ring after they came across it at a local pawn shop.
The Greene County Sheriff's Office has released photos of the ring, which appears to belong to a 1987 graduate of the Medical College of Georgia with the initials L.I.S. The graduate may also have a master's in nursing.
"We aren't certain the ring is stolen, but we would like to speak to the original owner just to be sure since the person who pawned it did not purchase it," the sheriff's office wrote.
The sheriff's office said it had contacted the college for the year's graduation records but added that it "had not led to a breakthrough yet."
Now, the Greene County Sheriff's Office has released pictures of the ring in hopes that it can solve the mystery and get the ring back to its rightful owner - or at least verify it was meant to end up in a pawn shop. | |
Sentencing Scoresheet Compliance
Report
October 2001
Executive Summary
This report is in fulfillment of Florida Statute 921, which mandates that the Florida Department of Corrections shall, no later than October 1 of each year, provide the Legislature with a sentencing scoresheet compliance report. This report details the compliance of each judicial circuit in submitting to the Department sentencing scoresheets for offenders convicted of felonies between July 1, 2000 and June 30, 2001.
Included in this report is the following information:
Using FY00-01 commitments to the Department of Corrections as a baseline for comparing scoresheet submissions, a compliance rate is provided for each judicial circuit and county. Commitments include all felons convicted and sentenced to state prison, probation or community control as well as modifications to and revocations of supervision. Scoresheets reported are those received through August 31, 2001. The statewide rate of compliance for this time period is 67.4%.
For scoresheets received with sentence dates during FY00-01 (including those with non-department sanctions), county, circuit and region totals are provided, with breakdowns according to the source of preparation: State Attorney (72.1%) or Department of Corrections (27.9%).
Circuit and county totals for scoresheets with FY00-01 sentence dates are listed according to most severe type of sanction. Of the 118,208 scoresheets received, 24,797 (21.0%) had a state prison sanction, 62,915 (53.2%) received a community supervision sanction and 28,785 (24.4%) received a county jail sanction.
The methodology for identifying missing scoresheets and calculating the compliance rate is described below:
Consider all sentencing events (including new admissions, revocations and modifications) and compare them to the sentencing scoresheets received for each offender. There are three possible outcomes: the sentencing event matches the scoresheet, there is no scoresheet to match the sentencing event or there is a scoresheet but no record of a sentencing event. The compliance rate is then calculated by adding together the first two options (resulting in the total number of sentencing events) and comparing it to the number of scoresheets that match.
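Using made-up counts purely for illustration (the formula, not the numbers, follows the report's description), the calculation looks like this:

```python
# Illustrative compliance-rate calculation. The counts are invented; only the
# formula follows the report's methodology: scoresheets with no matching
# sentencing event do not enter the denominator.
def compliance_rate(matched, events_without_scoresheet):
    """matched: sentencing events with a matching scoresheet.
    events_without_scoresheet: sentencing events lacking a scoresheet."""
    total_events = matched + events_without_scoresheet
    return 100.0 * matched / total_events

print(round(compliance_rate(67_400, 32_600), 1))  # 67.4
```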
Heggs V. State
In February 2000, the Florida Supreme Court, in its review of Curtis Leon Heggs, determined Chapter Law 95-184 to be unconstitutional due to a violation of the single subject rule of the Florida Constitution. This Chapter Law contained substantial changes in sentencing law commonly referred to as the "1995 Sentencing Guidelines."
In some cases, the 1995 Sentencing Guidelines provided for greater punishment than the 1994 Sentencing Guidelines. Some enhancements were made to the Guidelines, designed primarily to target offenders with more serious current or prior criminal records. There were a significant number of changes and the nature of the changes provided for significant effects for some offenders. However, many offenders sentenced under the 1995 Guidelines would have scored identically under the 1994 version of the law.
In May 2000, the Supreme Court revised and finalized their opinion in this case and further clarified who is eligible for re-sentencing. This decision stated that "only those persons adversely affected by the amendments made by Chapter Law 95-184 may rely on our decision here to obtain relief. Stated another way, in the guidelines context, we determine that if a person's sentence imposed under the 1995 Guidelines could have been imposed under the 1994 Guidelines (without a departure), then that person shall not be entitled to relief."
The Supreme Court declined to rule in Heggs as to when the window period closed for offenders claiming their guidelines sentences are invalid due to the amendments contained in Chapter Law 95-184. However, in its review of Xzavier Trapp, the Supreme Court determined that the window period for challenging the Sentencing Guidelines provisions amended in Chapter Law 95-184 opened on October 1, 1995 and closed on May 24, 1997.
Approximately 192,267 sentencing events occurred under the 1995 sentencing structure for offenses committed between October 1, 1995 and May 24, 1997, based on the Guidelines scoresheets received by the Department of Corrections through February 29, 2000. Original analysis indicated that 73,467 of these sentencing events evidenced scores adversely affected by the changes brought about in the 1995 guidelines. This analysis did not determine that the 1995 sentence was outside the range of the 1994 sentence, only that the scoresheet calculation reflected a difference in points.
Due to the potential number of re-sentencing events, it would be inaccurate to attempt to analyze and compare the 1995 sentencing guidelines to other versions of the sentencing scoresheet. As re-sentencing events occur, the original 1995 scoresheet will be replaced in the database with a 1994 scoresheet. This is necessary because the re-sentencing invalidates the original sentencing event. Likewise, the new sentence will overwrite the original sentence in DOC sentence structure and databases. A computerized record of the original sentence is not kept in the database that is used for data analysis.
As such, it is not possible to determine which scoresheets are the result of a re-sentencing or to verify that a re-sentencing event was a result of the Heggs ruling. Therefore, any analysis of this time period would yield a mixture of offenders sentenced under both the 1994 and 1995 guidelines.
For the analysis of the compliance rates, we are excluding all scoresheets with offense dates that occur from October 1, 1995 through May 24, 1997.
Questions regarding this report should be directed to Stacey Anderson, Florida Department of Corrections, Bureau of Research and Data Analysis, 2601 Blair Stone Road, Tallahassee, Florida 32399-2500 or by phone at (850) 488-1801.
| |
The invention relates to a method for manufacturing a super-large-section reinforcement-attached double-wall steel-concrete cross beam. The method comprises the following steps: manufacturing the double-wall steel-concrete cross beam, the plate units and the reinforcement-attached blocks in a factory, then carrying out long-line horizontal total splicing, and finally installing the plates and reinforcements in alternating steps and processes. The method has the following advantages: first, the manufacturing mode combining rib-attached blocks with long-line horizontal total splicing overcomes problems of the double-wall steel shell type beam such as few partition plates, low rigidity, small operation space, great difficulty of segment assembly and great difficulty of deformation control; second, the bar-attached steel shell type cross beam member is manufactured through cross assembly of parts and steel bars with step-by-step welding, which effectively guarantees the manufacturing precision of the cross beam member while ensuring the connection precision of the steel bars; third, total assembly time is reduced, assembly efficiency is improved, high-altitude operation time is reduced, and safe operation by constructors is ensured; and fourth, the manufacturing process combining rib-attached blocks with horizontal long-line total splicing is technologically advanced and reliable, saves cost, can significantly accelerate construction progress, and has obvious economic benefits.
Wandsworth Council wants your help to decide on the final design for the refurbishment of Harroway Gardens. £428,115 has been allocated via the S106 scheme to secure enhancements at this site, bringing the council’s investment in parks and open spaces in this area to over £700k in recent years. The S106 scheme pays for new and improved community facilities using funding collected from private development projects in the surrounding area.
Thank you to all of you who contributed to the first round of consultation held in summer 2021. Based on the feedback we received, three different park designs have been developed. We would now like to hear your views to help decide on a final design for the refurbishment.
What happens next?
Once this phase of consultation has concluded, all of the feedback will be carefully considered to decide on the final design. Whilst every attempt will be made to construct the final design to plan, certain elements may have to be altered during the implementation stage due to unforeseen technical or logistical issues.
In case you are not able to visit Harroway Gardens during the consultation period, photos of the Gardens are included as part of the online consultation to aid your decisions.
If you would like a paper copy, please contact us at [email protected] or call (020) 3959 0060.
Harroway Gardens Improvements drop-in sessions
If you have any questions, you can join us at one of our drop-in sessions at Caius House, next to Harroway Gardens:
We will be based in the café area of Caius House, 2 Holman Rd, London SW11 3RL. The entrance is at the intersection of Yelverton and Holman Road. | https://haveyoursay.citizenspace.com/wandsworthecs/harroway-phase2-21/ |
Preparing for Construction - February 2021
Beginning in spring 2021, Calder will be experiencing Neighbourhood Renewal construction. Planning and design for renewal are now complete.
The final neighbourhood designs were developed based on feedback received throughout the public engagement process since June 2019, and also considered City policies, programs and technical considerations. The final neighbourhood designs include:
- Upgraded residential streets, including measures to slow traffic
- Improved connections for people who walk and bike
- Enhanced park and green spaces
Learn More
With public events postponed and physical distancing a priority, we are committed to delivering project information and meaningful online public engagement opportunities. To engage with us:
- View the final designs (23MB) for the neighbourhood
- Watch a video that answers key questions
- Ask the project team a specific question
- Understand what to expect during construction
- Learn about the Local Improvement process and cost-sharing for sidewalk reconstruction
- View information about Low Impact Development (LID)
Thank you for participating virtually as we adjust our practices for COVID-19. | https://www.edmonton.ca/transportation/on_your_streets/calder.aspx |
Metro Center in Washington, D.C.? We’ve got plenty . . .
1: Building Innovation 2013 is delivered by the National Institute of Building Sciences – an authoritative source of innovative solutions for the built environment. For nearly 40 years, the Institute, a non-profit, non-government organization, has served as an interface between government and the private sector, with the primary purpose of bringing together representatives of the entire building community to review advancements in science and technology and develop solutions for our built environment.
2: Building Innovation 2013 is focused on Improving Resiliency through High Performance and will present the latest advancements in a wide range of building industry areas that offer genuine solutions for improving security, disaster preparedness, performance, sustainability, information resources and technologies for our nation’s buildings and infrastructure. Within four tracks, Conference attendees will experience the Institute in action as a leader and advocate for the industry and discover how the Institute’s programs and activities work to develop innovative solutions for a number of building-related challenges.
3: Building Innovation 2013 is the only place you’ll find the authentic event on federal construction: FEDCon® — The Annual Market Outlook on Federal Construction — where attendees will hear the most authoritative, up-to-date information on federal agency building and infrastructure budgets, construction forecasts and regulatory updates. The Institute initiated FEDCon®, now in its 20th year, to give private-sector architects, engineers, general and specialty contractors, and manufacturers insight into what they need to know to deliver services and products to the U.S. Federal Government — the world's largest facility owner and procurer of design and construction services.
4: Building Innovation 2013 is where the popular and informative buildingSMART alliance Conference is on the schedule. It’s the only place where the very experts who make the critical decisions on building information modeling (BIM) standards come together to share their knowledge on the various aspects of implementing BIM. This Conference, focused on Integrating BIM: Moving the Industry Forward, will deliver an understanding of how BIM can better integrate the design, construction, fabrication and operation processes, and also provide you with the latest metrics available to assess industry progress.
5: Building Innovation 2013 is the only Conference that gives you Innovative Technology Demonstrations directly from the developers who initiated the cutting-edge tools. Don’t settle for second-hand information on the Construction Operations Building information exchange (COBie) Calculator and Specifiers Properties information exchange (SPie) Catalog. Find out first-hand all about these IE standards, as well as the new information exchanges for Building Programming (BPie), HVAC (HVACie), Electrical Systems (SPARKie), Building Automation Modeling (BAMie) and Water Systems (WSie). Attend these demonstrations, along with the buildingSMART Challenge at Building Innovation 2013, and gain insights straight from the source.
6: Building Innovation 2013 is home to the popular Building Enclosure Technology and Environment Council (BETEC) Symposium, where the field’s leading experts in building enclosure research, design and practice unite to tackle the latest issues. For 30 years, BETEC has delivered quality symposia and continues its commitment with this Symposium titled Fenestration: A World of Change, which will examine the most current data available on fenestration performance and technology.
7: Building Innovation 2013 kicks off the inaugural Multihazard Mitigation Council (MMC) Symposium, designed to guide hazard mitigation policies for the next decade. At this Symposium, focused on Large-Scale Mitigation Planning and Strategies, industry experts will participate in interactive sessions to tackle long-standing multihazard mitigation problems in the United States and then present their conclusions to a panel of high-level policy makers, with the goal of setting long-term solutions.
8: Building Innovation 2013 highlights the revolutionary tools developed through the Institute’s collaboration with the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T) Infrastructure Protection and Disaster Management Division (IDD) for use in evaluating buildings against the threat of multiple hazards. The Integrated Resilient Design Symposium: Evaluating Risk, Improving Performance, introduces attendees to these invaluable tools and demonstrates how they are being used to assess potential risks to buildings from blast, chemical, biological and radiological (CBR) threats, and natural hazards, while incorporating high-performance attributes into building design.
9: Building Innovation 2013 offers the only Symposium specifically addressing the needs of persons with low vision. The Low Vision Design Committee Symposium: Creating Supportive Environments for Persons with Low Vision, presents the latest state-of-the-art theory and practices for designing for people with low vision from the designers, users, clients and low vision medical specialists that focus on this growing segment of the population – which is expected to be more than 50 million people by the year 2020. Find out how designing for persons with low vision can create environments that are more universally user-friendly for everyone.
10: Building Innovation 2013 provides the chance to explore what social, economic and environmental sustainability means to various segments of the building industry and how an effective, holistic approach can move the industry Beyond Green™. The Sustainable Buildings Industry Council Symposium: Fostering Innovation to Go Beyond Green™, is the only event where you’ll meet the winners of the 2012 Beyond Green™ High-Performance Building Award and see their real-world examples of sustainability first-hand.
11: Building Innovation 2013 is the place where academic professionals will gather to work on establishing a common educational strategy for BIM education. During the BIM Academic Education Symposium: Setting the Course for a BIM Educational Strategy, representatives from more than 25 colleges and universities will focus on certification, accreditation and credentialing. Coordinated by the buildingSMART alliance for the 4th year, this event will be held in collaboration with the AGC BIM Forum.
12: Building Innovation 2013 allows you the opportunity to meet the industry’s leaders as they are recognized for making exceptional contributions to the nation and the building community. The Institute’s Reception and Annual Awards Banquet will highlight the State of the Institute and honor individuals and organizations that are moving the industry forward.
13: Building Innovation 2013 gives you a full week to make quality one-on-one connections with industry experts and innovators; collaborate with colleagues; learn from the best; and share your expertise and experiences. From the varied Symposia and Educational Sessions to the Exhibitor Reception and Keynote Lunches, there are many excellent reasons to attend.
If these 13 reasons aren’t enough, find more reasons to attend Building Innovation 2013: the National Institute of Building Sciences Conference & Expo at www.nibs.org/conference. Don’t wait. Register today!
| https://www.nibs.org/news/news.asp?id=106785 |
Objective: To place athletes on a team that fits their personality, skill level and ability to listen and follow instructions. What are we looking for?
Elite teams: athletes who demonstrate natural talent, willingness to learn, above average work ethic, passion for the sport and a strong family support system.
Prep teams: athletes who demonstrate a strong work ethic, willingness to learn, enjoy the sport and have a good family support system (might not be the most naturally talented athletes).
Set-up: (staff should arrive 30-45 minutes before placements start to clean, tidy and prepare for the evening)
- sweep & mop the reception area, parent seating area, bathrooms, etc.
- ensure that there are printed copies of the Information Package available for parents to browse through
- have a copy of the Information Package at the front desk for staff to reference when parents ask questions
- wipe/dust all surfaces
- ensure that bathrooms are fully stocked (toilet paper, paper towels, soap, etc.)
- ensure that all toilets are clean (under the lid/seat/etc.)
- ensure that all lights are on and that the gym looks welcoming
Task List: | https://quantumathletics.net/2021/12/16/team-placements/ |
The purpose of this review was to examine sport officials’ motivation and passion to become and remain a referee in today’s sport climate. There are endless accounts of misconduct towards officials from participants, coaches, parents and fans. This review examined the research evidence that explained why officials continue to work in their sport and tried to determine what motivations were factors in their continued service. Additionally, this review wanted to see how passion played a role in the officials’ desire to become and remain a sport official. The findings were clear as officials often became involved with officiating or continued to officiate ‘for the love of the game’. Once they became involved they continued to officiate because of their feelings of commitment to the sport and because of the relationships that they had developed with other officials, athletes and other members of the sporting community. This review pointed out that sport officials are concerned about maintaining enough officials to continue sport. The authors suggested that once recruited, new officials should be evaluated and mentored so that these young officials have time to develop their feelings of commitment and relatedness. | http://ijsmart.eu/Contents.aspx?Y=2012&V=10&Is=b |
Pangeanic professionals are collaborating in this new edition by carrying out the human translation of the Spanish, Catalan and Portuguese development and test files for the machine translation competition on Similar Language Translation at the WMT 2020 conference. Their collaboration is aimed at evaluating machine translation systems. Part of the conference, which has been held annually since 2006, consists of competitions between machine translation systems developed by universities or companies on tasks that are challenging for these systems. In addition, this well-known international competition also includes tasks on automatic post-editing, quality estimation and parallel corpus filtering. The WMT 2020 conference is also an important venue for publishing scientific papers and descriptions of the machine translation systems that have competed.
The event will take place online on the 19th and 20th of November as part of the EMNLP 2020 conference, one of the most important conferences on natural language processing globally (it is classified as Core A).
Given the community’s interest in the challenge of leveraging the similarity between languages to overcome quality goals in machine translation, the WMT 2020 conference will include for the second time the shared task on “Similar Language Translation” to assess the performance of cutting-edge translation systems between pairs of languages from the same family.
This year we have five similar language pairs from three different language families:
- Translations of Indo-Aryan languages: Hindi-Marathi.
- Translations of Romance languages: Spanish-Catalan and Spanish-Portuguese.
- Translations of South-Slavic languages: Slovak-Croatian and Slovak-Serbian.
Translations will be evaluated in both directions (e.g. from Spanish to Catalan and Catalan to Spanish).
The EMNLP conference includes workshops, tutorials, posters, demos and specialized sessions in Machine Learning, Semantics, Dialog, Sentiment Analysis, Information Retrieval, Summarization, Speech, Machine Translation, etc. for the presentation of scientific papers.
The international health alert and the particular circumstances of confinement in many countries have not deterred the organizers of major events, which will take place virtually in 2020. Pangeanic, through its PangeaMT division and its consolidated technological team, continues to collaborate actively, as in previous years, in research and development on machine translation and natural language processing. Although this year we cannot attend the conference in person, we will present 5 articles online.
I was offered a chance to review this book via Lake Union Publishing and thus received a copy via NetGalley. I had planned to finish reading the book before its release in early December, but due to my busy work schedule, I have not been able to devote a lot of time to my book reviews.
Nevertheless, I managed to read this wonderful book, and you can read my review below.
About the Book:
From the New York Times bestselling author of Pay It Forward comes an uplifting and poignant novel about friendship, trust, and facing your fears.
No longer tolerating her husband’s borderline abuse, Faith escapes to her parents’ California beach house to plan her next move. She never dreamed her new chapter would involve befriending Sarah, a fourteen-year-old on the run from her father and reeling from her mother’s sudden and suspicious death.
While Sarah’s grandmother scrambles to get custody, Faith is charged with spiriting the girl away on a journey that will restore her hope: Sarah implores Faith to take her to Falkner’s Midnight Sun, the prized black mare that her father sold out from under her. Sarah shares an unbreakable bond with Midnight and can’t bear to be apart from her. Throughout the sweltering summer, as they follow Midnight from show to show, Sarah comes to terms with what she witnessed on the terrible night her mother died.
But the journey is far from over. Faith must learn the value of trusting her instincts—and realize that the key to her future, and Sarah’s, is in her hands.
My Thoughts:
Just After Midnight tells the story of two people: a woman running away from an abusive marriage and a fourteen-year-old girl running away from her father. The story follows their journey from when they meet, showing us how a bond forms between them and how they lean on each other to sort through their emotions. Faith is charged with taking care of Sarah by the girl's grandmother, who is fighting for custody of the child; Sarah's mother is dead and she is on the run from her father. While spending time with each other, Faith learns to trust again thanks to Sarah, and Sarah learns to open up and enjoy being a child again.
The two protagonists set out to find Falkner's Midnight Sun, the prized black mare with whom Sarah shares a very strong bond. As they make their way from show to show, the journey turns into one of self-discovery and mutual understanding. They are forced to navigate the layers of their emotions, and slowly Sarah reveals the truth behind her mother's death and what she witnessed. With the bond between the two stronger than ever, Faith reaches the point where she chooses to stand up to her husband and to defend and protect Sarah.
A very well-written story, this book will take the reader on an emotional roller-coaster ride. Be prepared to be amazed at the bond Sarah shares with her horse: the love, care and mutual adoration. The author has beautifully described this bond, and it will make the reader love the characters even more. They are real, believable and relatable. The problems they face and the hurdles they encounter will pull you in and keep you there until the very end.
This gripping novel is well worth the read! | https://reviews-redpillows.com/tag/just-after-midnight/ |
All MS MasterClass attendees carry out a workplace project as a key part of the course. This will be on an issue of their choice in the area in which they work, from service development to epidemiology, audit to patient management.
All delegates present their project to their course peers and the teaching faculty, who jointly choose a winner and a runner up for the year. These people are presented with an award and their projects are highlighted on our website.
All projects are added to our resources area online where the valuable learning can be shared and used more widely. We’d love you to take a look – our Snapshots are a great place to start if you’re looking for something quickly digestible.
MasterClass 9 project award winner (2020)
Iulia Danciut, neurology associate specialist, Hull University Teaching Hospitals NHS Trust
Shannon Gaughan, MS Advanced Nurse Practitioner, United Lincolnshire Hospitals NHS Trust
MasterClass 8 project award winner (2019)
Mapping Pathways in Multiple Sclerosis across Surrey Downs Health & Care
Liam Rice, MS Nurse Specialist, Royal Hallamshire Hospital
Joined up thinking, setting up a Multiple Sclerosis / Urology Service
Pauline McDonald, MS Specialist Nurse, Queen Elizabeth University Hospital
The length of time it takes to start on a DMT after diagnosis in Glasgow
MasterClass 7 project award winner (2019)
My MS Passport
Natasha Hoyle, Pharmacist, Royal Hallamshire Hospital
Disease Modifying Therapies: Comparing Timescales Between Hospitals and Medications
Dr Neena Singh, SpR, Neurology Royal London Hospital & Tatiana Christmas, FY1, Northwick Park Hospital
Postprandial Somnolence in Multiple Sclerosis
MasterClass 6 project award winner (2019)
Infectious complications in MS: an audit of high efficacy therapies pre-treatment screening and risk mitigation
Dr Bindu Yoga, Consultant Neurologist, Mid Yorkshire Hospitals NHS Trust (MYHT)
Local MS DMT practice – Time from decision to treatment
MasterClass 5 project award winner (2019)
Qualitative aspects of cognition in the LTHTR MS cohort & impact on treatment decisions; utility of Moca in assessment of cognition in MS
Dr Poneh Adib-Samii, Consultant Neurologist, Croydon University Hospital
Imaging in MS and suspected MS at Barnet Hospital
MasterClass 4 project award winner (2018)
Mrs Daisy Cam, Acupuncture Treatment in People with MS related pain
Runner up:
Dr Karen Chung, Evaluation of the number of PPMS patients potentially suitable for Ocrelizumab in a tertiary referral centre
Dr Sivaraman Nair, Unmet needs of people with Multiple sclerosis
MasterClass 3 project award winner (2018)
Mrs Rachel Dorsey-Campbell, Impact of a pharmacy-led prescribing service on the monitoring of natalizumab patients
Runner up:
Dr Jessica Vaz, JCV in Jersey
MasterClass 2 project award winner (2018)
Dr Francisco Javier Carod Artal, Epidemiology of MS in the Highlands: Prevalence, incidence and time to get a proper diagnosis and start DMT
Runner up:
Dr Rachelle Shafei, Listeriosis prevention in alemtuzumab treated patients
MasterClass 1 project award winner (2017)
Dr Ferghal McVerry, Audit of MS relapse management in a district general hospital setting
Runners up: | https://multiplesclerosisacademy.org/resources/delegate-projects/masterclass-project-award-winners/ |
Aristotle and Friendship: Philosophical Ethics Essay
There are friendships based on utility, pleasure, and the good. This paper will analyze Aristotle's arguments on the nature of friendship. In the first book of the Nicomachean Ethics, Aristotle identifies three basic and. Ancient and medieval philosophy, exam 2 topics: Aristotle, Nicomachean Ethics book I; what is the good (of any thing); what is the highest good; friendship. Based on Aristotle's philosophy of friendship, the relationship seen between. Book eight of the Nicomachean Ethics by Aristotle describes in detail all of the. For Aristotle, friendship was a necessary part of a healthy, well-functioning life. What happens if the obligations to our friends conflict with one's morality?
Aristotle first used the term ethics to name a field of study developed by his predecessors Socrates and Plato. Philosophical ethics is the attempt to offer a rational response to the question. Aristotle and the Philosophy of Friendship, New York. Aristotle, Nicomachean Ethics: translation, glossary and introductory essay. Original paper: I begin by isolating the most common arguments these philosophers use against. Keywords: virtue ethics; Aristotle; online friendship. In this essay I will outline what Aristotle said about friendship in the Nicomachean Ethics and highlight possible flaws in his arguments. Aristotle, Nicomachean Ethics, ed. by Richard McKeon, books VIII-IX. [tags: philosophy, Aristotle] 2014.
Aristotle addresses the topic of friendship in book 8 and 9 of his nicomachean -friendship-nicomachean-ethics-philosophy-essayphpvref=1. The society for ancient greek philosophy newsletter 1-20-2009 aristotle on friendship wisdom”, in essays on aristotle's ethics ed amelie oksenberg rorty. The argument is grounded mostly on aristotle's nicomachean ethics, owen flanagan's authors' works about the greek philosopher's concept of friendship of an essay that aristotle did not intend to publish, the writings on friendship.
A new translation of Aristotle's “Ethics” addresses the perennial question of well-being. Copy of the “Nicomachean Ethics” to his close friend Winston Churchill. In his great essay “On Classical Political Philosophy,” Strauss. [tags: philosophy, Nicomachean Ethics, Aristotle] In this essay I will talk about the three different kinds of friendship (utility, pleasure, and goodness). Friendship in Aristotle's “The Nicomachean Ethics” by. Of disciplines, from logic, metaphysics and philosophy of mind, through ethics, political theory. In this essay I will discuss what Aristotle had to say about the subject of. Introduction: the Nicomachean Ethics, Aristotle's most important study of personal morality. In addition, the book vividly reflects Aristotle's achievements in other areas of philosophy and is a good. Books VIII and IX: Friendship. Critical essays; Aristotle's works; Aristotle's method and place in intellectual history; study.
Aristotle's philosophy of friendship identified three kinds of. Every field, from astronomy and physics to ethics and economics, has been. This paper will evaluate whether Aristotle's discussion of friendship in the. Aristotle distinguishes three types of friendship in the Nicomachean Ethics. Telfer makes an interesting challenge to philosophical accounts of friendship when she. We focus on the influence of Plato and Aristotle on Royce's moral thought for several. [B]y the term ethics, one means a certain branch of philosophy. Friendship, or loyalty, remains one such vital possibility for both Aristotle and Royce.
- Coextensive with contemplation. The gods love the philosopher who grows in. John M. Cooper, “Aristotle on Friendship,” in Essays on Aristotle's Ethics, ed.
- Aristotle's Ethics: Critical Essays. Nancy Sherman is professor of philosophy at Georgetown University and the.
Friendship in Aristotle's Nicomachean Ethics, books 8 & 9. Friendship is a virtue and. -- Lorraine S. Pangle, Aristotle and the Philosophy of Friendship, p. 7. Abstract: my aim in this paper is to demonstrate the relevance of the Aristotelian. Department of Philosophy, University of Ioannina. Aristotle discusses friendship (philia) in the Nicomachean Ethics, the Eudemian. A summary of book VIII in Aristotle's Nicomachean Ethics: the first is friendship based on utility, where both people derive some benefit from each other. | http://nwhomeworkckir.californiapublicrecords.us/aristotle-and-friendship-philosophical-ethics-essay.html |
Brunel has three Research Institutes that bring together academics from most of our research areas to collaboratively tackle very specific challenges facing the world's economy and society. They are world-leading, with highly cited papers and substantial grant income, and they provide a thriving community for research staff and students.
The Institute of Environment, Health and Societies' research relates to the quality of our environment and to our health and wellbeing, combining social, health and environmental sciences with engineering and design to enable exciting and innovative cross-disciplinary approaches. The Institute of Energy Futures has particular strengths in environmental design, refrigeration, heating and cooling, and disruptive energy and fuels, bringing together researchers from different disciplines as well as mainstream engineering research. The Institute of Materials and Manufacturing aims to improve the performance of materials and structures and to become the leading international provider of materials and manufacturing research.
I am delighted to introduce Brunel's recently formed Research Institutes, unique in UK higher education, and testament to Brunel's ambition to build its profile as a world leading, research intensive institution. | https://www.brunel.ac.uk/research/Institutes/Research-Institutes |