Paul Pogba celebrates after Manchester United beat Manchester City
Manchester United boss Jose Mourinho says he has asked record signing Paul Pogba for consistency and stability in his performances.
Pogba has endured a mixed season, losing his starting place at one point, but he was superb in the Manchester derby last weekend, scoring twice as United secured a memorable 3-2 win.
Ahead of Sunday's game with West Brom, Mourinho says he has told Pogba he wants to see greater consistency from him, particularly in training.
Mourinho said: "I will tell you what I told him after the (City) match and it's exactly what I told him - I don't expect you to be man of the match every week, I don't expect you to score two goals every week.
"I expect you to be consistent in a certain level.
"So, if you ask me, I'm expecting Paul now to be man of the match every week? No.
"If I expect him to score goals every match? No.
"But I expect Paul - and I think that's the challenge he has to put to himself - to keep a certain stability and not to have the good match and the so-so match and the bad match.
Jose Mourinho says he has been really happy with Paul Pogba in recent weeks
"I think he has to try to keep that level of stability and, from that stability of course will appear the special match with the special performance, like it happened against City.
"The first thing is stability at training level, stability during the working week. And the past two, three weeks I'm really happy with him."
United take on bottom side West Brom on Sunday looking for an eighth consecutive domestic victory. If they lose, rival Manchester City will be crowned champions.
|
{
"pile_set_name": "OpenWebText2"
}
|
Cameron Bancroft (disambiguation)
Cameron Bancroft (born 1992) is an Australian Test cricketer.
Cameron Bancroft may also refer to:
Cameron Bancroft (actor) (born 1967), Canadian actor
|
{
"pile_set_name": "Wikipedia (en)"
}
|
We have already discussed how pollo (Spanish for chicken) and poultry are related. But it gets more interesting!
From the same root is also… pool. Yes, pool! How so?
Well, in medieval France, they used to play the jeu de la poule, the game of the chicken. Everyone would pool their money together, and throw stones at the chicken to see it run in a different direction each time, a bit like how you… play pool today! Yes, this is where the game pool comes from!
Nerds love to pattern-match, to find commonalities among everything. Our approach to learning languages revolves (the same -volve- that is in "volver", to "return") around connecting the Spanish words to the related English words via their common etymologies - to find the linguistic patterns, because these patterns become easy triggers to remember what words mean. Want to know more? Email us and ask: [email protected]
|
{
"pile_set_name": "Pile-CC"
}
|
Earlier this month, more than thirty delegates from Labour Students, the Young Fabians and Rainbow Rose took part in the annual Young European Socialist (“YES”) Summer Camp 2018. This was the largest UK delegation in living memory, with members from across the Labour movement joining our European comrades.
Historically, Labour has been reluctant to participate in YES events. To produce such a show of strength this year was striking, and perhaps reflected a desire to engage with the left across Europe at a time when Labour’s future relationship with it remains uncertain.
UK political culture is markedly insular. Most continental youth wings are actively engaged in YES and other initiatives. In the UK, however, the organisation, its strengths and weaknesses, and the possibilities it offers members remain relatively unknown. The lack of a delegation to YES is usually perceived as a sign of weakness, but Labour's youth movement has rarely made participation a priority. The existence of two UK teams this year and their sheer size came as a genuine surprise to our European comrades, many of whom said "we've missed you".
YES is the youth wing of the Party of European Socialists (“PES”), the grouping of which UK Labour is a longstanding member. It is through this formal affiliation that we have sister parties in Europe. PES, meanwhile, has a relationship with the Socialist International and the Progressive Alliance, two global alliances that emerged through splits earlier this decade.
As such, activity in YES is a key component of participation in the European and worldwide left movement. It is an important part of situating our politics within what is happening in Europe and further afield. It offers young members the opportunity to realise that our struggle, the progressive fight in the UK, should not be seen in isolation, but as part of a broad effort.
YES is not only a summer camp, but also offers European seminars on political economy, education and other pressing topics – all of which concern, or should concern, progressive activists in the UK. It is the biannual YES Congress that puts forward policies, which PES then pursue in the European Parliament. For one, YES Congress first formulated the idea of the European Youth Guarantee.
Perhaps even more crucial is the ability to share ideas. Delegates from as far away as Mali and Ghana were invited and funded to learn and share their experiences. Asif Mohammed, a UK delegate, spoke at a workshop on refugees. Isaac Stanley ran a workshop on social economy and economic democracy in collaboration with a representative from the Czech organisation Idealiste.cz. Leaders of PES and the Socialists and Democrats Group in the European Parliament joined and socialised with participants. Delegates were able to meet with members of PES Women and Rainbow Rose, the women's and LGBTI networks within PES. One of the highlights of the Summer Camp, however, was the bilateral delegation meetings, where activists were able to pool experiences with their counterparts in other states.
This year, the UK team met the German Jusos, with whom we discussed the dangers of Pasokification that the SPD face by entering a coalition government with the centre-right, and the problem of antisemitism on the left. We discussed Brexit and the Irish border with Irish Labour and the SDLP in Northern Ireland, and how to hold a diverse and divided country together with the Belgian JS. We shared experiences of the populist right with the Italian GD and debated how to combat the threat. We learned about the successful experiences of António Costa's socialist government from the Portuguese JS.
These are all key discussions that the UK left, and the UK youth movement, need to be a part of. Our views on Brexit within the delegation were diverse, from backing a second referendum to a clear Leave position, but we all came back enriched with an understanding of what our comrades in Europe think about the negotiations. If the current course holds, we will have left the European Union by the time of the next camp; the experience of understanding the position of our friends in Europe will enhance our contribution to the debate on the left.
All this is to say it is vital we maintain this engagement even after Brexit. Maintaining youth engagement is a powerful way to sustain connections. Young members should have the opportunity to take part in YES activities and seminars in the years to come. The Labour Party should actively encourage it, through bursaries and funding. The Young Fabians have, after years of prevarication, affiliated as observer members, and there are hopefully enough voices in the organisation to safeguard that status. There are hopefully now also 30 more voices in the Labour movement to do so more broadly across Young Labour, Labour Students and the trade unions. For better progressive ideas, shared and pooled with like-minded comrades across Europe. If the workers of the world are to unite, it is through participation in these organisations and events.
The next YES event will be the Progressive Youth Forum on the 28th – 30th of September in Frankfurt. We urge all young members who want to become involved in YES to join the UK YES events Facebook group for details of all events here or see the YES website here.
Jade Azim
Phil Freeman
Danny Filer
Charlotte Norton
Marley Robinson
Zainab Mohammed
Hunter C Christopher
Tom Follett
James Potts
Eden Kruh-Atar
Sam O’Bree
Rachael Ward
Adam Allnutt
Beth Steventon Crinks
Gertrude Kennedy
Ramesh Mendis
Ruth Day
Aisling Gallagher
Aisha Malik-Smith
Kuba Stawiski
Isaac Stanley
Charlie Hindhaugh
Jack Parker
Stella Tsantekidou
Ben van der Merwe
|
{
"pile_set_name": "OpenWebText2"
}
|
Technological progress is exponential and in the next two decades areas like gene therapy, neuroscience, artificial intelligence, nanotechnology, advanced robotics and automation are all likely to converge. Such advances will fundamentally disrupt society as we know it. Autonomous decision-making systems and machines will be the big game changer, and making choices about our future will become difficult as technology replaces major human decision-making. Didier Schmitt raises the prospect of a future where technology becomes less subjugated to humans as humans become more subjugated to technology. He argues that it is a dilemma the public at large must be fully conscious of so that society can anticipate rather than become enslaved by technology. In essence he is talking about shaping the future and not being shaped by it - and in this exclusive extract from Scion|ce the author casts his retrospective, imaginative space eye back from the future.
The general public’s interest in space activities has always been present, and ‘useful space’ came from an unexpected direction. Who indeed could have foreseen that the competition born of the Cold War that carried the first men into Earth orbit would one day be replaced by international cooperation?
This was the case with regard to the International Space Station (ISS), the starting point for a real multi-national cooperation project. There were highlights that caught global attention such as when Philae landed on a comet in 2014, and then half a century later when a mini submarine was successfully plunged under the ice of Europa, a satellite of Jupiter. A great moment of political unification. This technology was not a special achievement, because the autonomous submarines had already taken over under our terrestrial seas and oceans, but it still needed a mini-nuclear generator to melt the 2 km ice sheet at -160 degrees Celsius.
Real change emerged when a common, very long-term vision was sketched out with the citizen participation of all nationalities, a fine example of crowd-shaping - participatory shaping - of public policy. The virtual representations gave a significant boost to this rush towards space. But let us not forget the dreams that such missions inspire. Proof of that was provided during the capture of the first asteroid by the crew of Orion; an event followed by more than four billion spectators in immers’Vision.
A few years later some six billion people ‘lived’ the descent of ‘marsonautes’ into Gale Crater. For a moment we had forgotten the ultimate goal: to find traces of proto-thermal bacteria based on triple-stranded DNA. Six months later, this historical discovery was confirmed by the automatic mission which returned with samples. In the high security P5 laboratory, the very definition of life was questioned and the Nobel Prize was not long in coming. This upset the mental representation of what makes mankind, as well as our place in this universe. It is now certain that other forms of life exist on some of the three billion planets registered in habitable zones of nearby galaxies.
Space dividend
“It is difficult to look further than you see” - Winston Churchill
A posteriori we now realise that the cost of these excursions in the sidereal vacuum has always been negligible compared to the expenses of past military conflicts. At the beginning of the century, these expenditures were compared to a cinema ticket or a pack of cigarettes as the equivalent of the annual contribution per person and per participating country.
Such comparisons no longer make any sense, not because tobacco and cinemas have long since disappeared but because of the indirect benefits of ‘pacification’ by the transnational links they create. This was called the space dividend.
Enthusiasm went as far as to generate philanthropic and participatory co-funding to see ideas become reality when those ideas were not necessarily a priority for scientists or politicians. All continents were represented in one or another initiative, with participation going well beyond the initial 30 contributing countries. The public’s enthusiasm grew as adventures were experienced through ever more realistic holographic communications.
The peak of attention was reached when we approached a catastrophe due to a dust cyclone at the advanced Martian base. It was an Apollo 13 adventure to the power of ten. Having missed their return window, the crew was forced to stay 500 more days. They were unable to rendezvous with the comings and goings of the ‘Earth-Mars’ cycler which, with its magnetron resonance propulsion, has an elliptical orbit and works as a shuttle that never stops.
Fortunately, the experiments of the polar bases influenced the concept of the Martian base, leading to a base design in which not all tasks were automated, mainly to keep the crew occupied and empowered but also to ensure the repair of the vital elements of the station. A 3D printer saved the mission.
The Orion spacecraft - a key to future outer space exploration.
A lunar tourist
The return to the Moon has been a political and strategic debate because arguments in favour of the need for technological validation before going to Mars were dubious. The Chinese decided to make it a showcase of their technical competence and political power. Other space agencies had developed detailed plans for a permanent base to better prepare future missions to Mars.
“When someone states that something is possible he is almost certainly right, when someone says something is impossible he is probably wrong” - Arthur C Clarke
But the exorbitant cost just to carry out tests under conditions very different from the Martian surface and atmosphere became the Achilles’ heel of the plan and public opinion put paid to the idea. Nevertheless, European industry found the opportunity to showcase its know-how by embarking on a commercial Moon adventure with funds from Middle East countries which also needed international visibility.
The first lunar tourist was a landmark moment because it was all visible using telescopes from Earth, ruling out any attempt to deny the reality of this trip, as had been the case for NASA’s first lunar expeditions. In fact, tourism had developed in low Earth orbit (LEO) for two decades but setting foot on the Moon was reserved for a professional elite.
The availability of the Chinese station for the benefit of the international community from 2027 onwards was a trigger for this market. The ISS was thus entirely reserved for ‘scientific’ tourists, who thereby became its operators and guinea pigs. The professional crew, which operated it and ensured its security, remained on board for three consecutive years, which enabled them to study countermeasures for the Martian missions. And for those who wanted strong sensations there was always the extra-stratospheric solo return glide at supersonic speed.
Terraforming
In the same vein, the battle of principle continues for the followers of Martian terraforming. They have already raised funds for ‘seeding’ synthetic photosynthetic bacteria in those numerous places where salt water is present in summer. These would produce oxygen, which would generate a greenhouse effect within half a century and could thus make the planet habitable, ultimately without the need for the inhabitants to wear spacesuits. To draw the attention of a reluctant audience to the subject of colonisation, the scien’Twists are already announcing a lottery for one-way trips to the Red Planet.
The last word for the already convinced public was the recent fortuitous discovery of the habitable exoplanet Gamma2042. An exploratory journey is not yet on the agenda, even though the scientific grail of dark matter has finally been found and the existence of space-time gaps identified with certainty by astrophysicists. It will probably be necessary to wait until the next century to attempt a breakthrough because, in parallel, there is still work to be done on the stabilisation of antimatter. Which means that the youngest of us will see such an expedition!
Meanwhile, the enigma of the recent atmospheric oxygen loss of G2033 still remains to be elucidated. A real mystery, since this phenomenon occurred in less than three years. No explanation can be advanced before the quark telescope is put into operation near the Lagrange point 5, in the wake of Earth. But everyone has in mind the high probability of extraterrestrial life having failed in some way. In this context, it is significant that just 50 years ago we did not even know about the existence of exoplanets.
Animal bay on a futuristic spaceship.
Planetary exploration
The collection of data on Saturn’s rings is done with hundreds of probes inspecting their rocks. The atmosphere of Venus is modelled with the aid of autonomous aer’Bots which drift with the winds. Following the characterisation of gravitational waves by the eli’Sa mission in the 2030s, radio telescopes scrutinise the depths of our universe in order to understand the nature of the shock waves coming probably from the neighbouring universe.
We have just celebrated the 400th anniversary of Galileo’s prosecution for his demonstration of heliocentrism, and we will soon commemorate the hundredth anniversary of the first artificial satellite, Sputnik. But there is still so much to understand.
In the past, space programmes were designed for 30 years ahead. For the ISS, for example, the design, construction and operation took 10, 10 and 30 years respectively. But as in other areas, everything suddenly accelerated. It began with private initiatives decades ago, as with the constellations of thousands of satellites for the then internet 3.0. These private low-cost programmes of the time surprised everyone, especially the ones reaching space and then the trans-atmospheric flights. It should be said that these would not have come into existence without the strato’S horizontal take-off and landing rocket plane. All space programmes were boosted because the price of access to Earth orbit decreased to a fourth of its initial value.
On Earth as it is on Mars - Earth’s martian science city.
Mineral resources
“As for the future, it is not a question of foreseeing it, but of making it possible” Antoine de Saint-Exupery
But this was also at the origin of a number of challenges such as the one about the use of mineral resources of asteroids. These initiatives - driven by purely private interests - gave rise to unexpected opposition by ‘citizens of the world’.
These very people prohibited the exploitation of the Moon by classifying it as a ‘world heritage’ site. The combined effect of the movement for the preservation of celestial objects and the universal momentum for total recycling of rare elements definitely blocked those projects, even though they were already well advanced.
It is true that the international guidelines on recycling boosted innovation. Returning to minerals would have pushed us back into a waste era that was definitely thought to be over.
This shows that the concept of a service society and the priority given to ‘being and not having’ are now firmly rooted. The company or’Bit finally gave up after a fatal accident occurred during the wrapping of a small comet that began to spray a pocket of water on contact with solar rays. We were more or less accustomed to deadly accidents during take-offs and landings, but not to the fact of having to bring back the body of an explorer. The reputation of the company had already fallen after the discovery of genetic manipulations on the astero’Nautes so that they were more resistant to radiation and were cognitively more efficient. These difficulties show clearly that Earth will remain our living space for a long time.
Narrow escape
However, the technology of connecting and docking these interstellar dinosaurs turned out to be beneficial when the comet AL318 was discovered by the amateur network damus’Nostra. Thus, everything was in place for the deviation of its trajectory, as the yotta’Flop supercomputer estimated the probability of the comet’s impact on the South Atlantic to be 99.9 percent. The 250 m tall tsunami would have forever changed the face of our planet. Since then, no one has ever questioned space budgets!
We narrowly escaped another cataclysm: a chain reaction of orbital debris. The alarm bell could have been sounded back in the 2020s but because of the competition on the costs of satellites no one wanted to take a first step to remedy the situation.
This time the decisions were immediate, as it was also necessary to urgently activate the ecliptic plan of the Beta tourist station, owned by the company or’Bit. Nobody can imagine the economic and societal cataclysm that would have resulted from the destruction of half of our space potential of 5,000 microsatellites.
It is true that we tend to forget what is not physically on Earth. If the domino effect had occurred beyond 600 km, the orbital slots would have been unusable until the end of this century. The million pieces of debris could never have been cleaned, even with the international programme clean’Space designed to capture and take old satellites out of service. And yet this technological weakness of our society was known.
Lunar future.
Climate forecasting
The met’Sat one-month forecast of air quality and rainfall allowed a giant leap forward in anticipating seasonal disasters and mobilising the global fund for climate nomads - there are already 150 million of them.
If this progress had been made 20 years earlier, we would have avoided many setbacks. Indeed, the true revolution was the environmental awareness that followed the real-time modelling of oceanic, terrestrial and atmospheric systems by merging spatial and aerial data - obtained via pico-drones in a swarm - along with terrestrial data obtained by crowdsourcing from millions of ‘citizen sensors’.
What would this modelling actually be without the sharing of geo’Synchron real-time video observations? The digitization of the planet led to the extinction of the last climate-sceptical Mohicans. Let us note that it was necessary to stop the Oogle type hegemonies by classifying the continuous flow of satellite images as open’Access world heritage.
Other applications blossomed, such as the ones focused on the optimisation of eco-agriculture, the management of multiple energies, environmental-dependent health issues or the monitoring of ecosystems by the patri’Ot citizen association. The term ‘space applications’ has definitely become obsolete because all these observation systems are part of a continuum.
Technological warfare
“The best way to predict your future is to create it” - Peter F. Drucker
The superiority of space systems over any other form of technological warfare exposed the fragility of our society. The escalation between attack and defence systems of satellites was quickly pointed out as being strategically unsustainable.
The interweaving of civil and military systems had become such that each hostility could lead to a cascade, as in the era of nuclear deterrence. This was deliberate, since civil aircraft and combat drones were navigated from the same satellites. Despite this, advocates of the concept of ‘deterrence by dependence’ struggled at first to make their point.
The aim was to render obsolete the race for technologies to neutralise opposing space systems because of the unacceptable consequences in all civilian sectors of a ‘spatialised conflict’. The balance of fear was the only solution, given that codes of good conduct and other international agreements never succeeded because they were so easily circumvented. But above all, it was necessary to accept the new evidence of a common adversary that emerged: cyberterrorism 4.0, which found a gap in the quantum networks.
It began to do more harm than the conflicts of the 20th century and only a collective and global response could stem it. Each object, portable or not, has long been geo-locatable in order to be part of the Internet of Things. The dependence on satellites had also become too great, so that alternative location geo’Loc microchips - in a network integrated by the internet - had to be invented to avoid the risk of an economic collapse if satellites were put out of service for one reason or another. Alternative solutions have also emerged, such as relay nanosatellites capable of being orbited in their hundreds if necessary.
The fusion of civil and military space programmes ultimately confirmed the protection of Earth’s orbit as a ‘common good’.
About the author
Didier Schmitt was scientific adviser and foresight coordinator in the Bureau of European Policy Advisers to the President of the European Commission (2012-14) and worked in the Space Policy Unit at the European Commission from 2009-12. At the European Space Agency he managed human and robotic exploration preparation programmes, including the use of the International Space Station (1997-2009). In his academic career, he was associate professor at the Toulouse medical school and the International Space University (1992-1997). His educational background is a PhD in biosciences in addition to being a certified medical doctor. Currently working in the European Union diplomatic service, he is a regular opinion writer in mainstream French newspapers on future issues in science, technology and policy.
|
{
"pile_set_name": "OpenWebText2"
}
|
I work as a freshman History teacher for an impoverished school and explained to my Santa that many of my students show up hungry. I asked for some snacks for my students and dry erase markers (because I have a dry erase board and use them daily). To my shock, my Santa sent a pack of pencils, seven packs of dry erase markers and a TON of snacks... 10 packs of Oreos, 10 packs of Pepperidge Farm cookies, 8 packs of goldfish, 3 boxes of fruit snacks and 4 boxes of fruit bars. Freakin' incredible!!!!! I'm so grateful that my kids will not have to go hungry! Thank you so much to Tracy and her mom Theresa!
|
{
"pile_set_name": "OpenWebText2"
}
|
Since the launch of Microsoft’s Kinect gaming hardware there have been repeated open-source initiatives, hacks and projects aiming to utilise the hardware in an open-source environment.
Matt Cutts – a Google employee, although this is not a Google-funded project – has launched his own contest: to help kick-start open-source development using the $150 Kinect.
"I’m starting my own contest with $2000 in prizes. There are two $1000 prizes. The first $1000 prize goes to the person or team that writes the coolest open-source app, demo, or program using the Kinect. The second prize goes to the person or team that does the most to make it easy to write programs that use the Kinect on Linux."
Matt even offers up some ideas to get potential developers started – including the one no self-respecting Kinect contest would be complete without mentioning ;)
"Idea 1: A Minority Report-style user interface where you can open, move, and close windows with your movements."
You can find out more on entering by pointing your browser @ mattcutts.com/blog/open-kinect-contest, where you can also read some very neat ideas.
The contest runs for the rest of this year, closing at midnight (PST) on December 31st.
Thanks to Pierre
|
{
"pile_set_name": "OpenWebText2"
}
|
Guidewires have long been used to facilitate diagnostic and therapeutic medical procedures. Generally speaking, a guidewire is the initial member inserted into a body cavity during many transluminal procedures. A guidewire is an elongated fine wire device intended to readily pass through body passageways to a location at which a medical procedure or treatment is to take place. Thereafter, in a typical arrangement, a catheter is threaded over the thus inserted guidewire, with the catheter following the pathway defined by the guidewire. In general terms, a guidewire is flexible, at least at its remote distal end tip.
Remote distal end tip flexibility is often enhanced by providing one or more fine coils at the distal portion of the guidewire and securing these coils to the distal end of the guidewire's core. Typically, this securement application also includes a rounded distal tip that imparts some atraumatic characteristics to the guidewire. In the usual approach, these components are secured together by soldering, brazing, welding or by using an adhesive such as ultraviolet-curing adhesives, catalytic-curing such as epoxy or anaerobic adhesives such as cyanoacrylate adhesives.
Distally tapered guidewires are generally composed of a stainless steel or austenitic metallic core which is not amenable to heat treatment for hardening the base metal. Stainless steel alloys employed in the medical field generally have a high chromium and low carbon content to provide resistance to oxidation and corrosion. Stainless steel or austenitic alloy guidewires are amenable to work hardening, but the final process yields a wire that is hardened primarily in the outer layers. Any hardness developed by the work process decreases, or is totally absent, toward the center of the core, which remains relatively soft. After stainless steel or austenitic alloy guidewire cores are work hardened, they are distally tapered by standard diameter reduction processes, which exposes the relatively soft inner cross-sectional layers so that they become the entire core of the distal end. This results in the stainless steel guidewire having inherently disproportionate hardening along the length of the wire, yielding suboptimal torsional characteristics. Inconsistent proximal-to-distal rotational movement makes it difficult for the clinician to penetrate small blood vessels, while inadequate hardness affects the guidewire's catheter tracking capabilities. Stainless steel guidewires that suffer from inadequate hardening across the entire cross-sectional area do not have the high torsional capability needed to navigate tortuous coronary, kidney or neurological vessels. In addition, such guidewires can snap or have unpredictable final tip positioning. Furthermore, it is important for a guidewire to be able to conform over severe curves and sharp angles without plastic deformation, and it must therefore have high ductility.
Therefore, there is a need for a torsionally strong, fully hardened, high ductility guidewire.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
# Overview of configuration changes {#concept_anb_bbf_5db .concept}
You can change the configurations of an instance and its Internet bandwidth after it is created.
## Upgrade or downgrade instance configurations {#ChangeType .section}
You can upgrade or downgrade the vCPU and memory configurations \(that is, the [instance type family](../../../../reseller.en-US/Product Introduction/Instance type families.md#)\) only together, by changing the instance type. Depending on the billing method of your instance, you can change the instance type as follows:
- Subscription:
- Upgrade: See [upgrade configurations](reseller.en-US/User Guide/Instances/Change configurations/Upgrade configurations of Subscription instances.md#). The new configurations take effect after you [restart the instance](reseller.en-US/User Guide/Instances/Restart an instance.md#) in the console or by using the [RebootInstance](../../../../reseller.en-US/API Reference/Instances/RebootInstance.md#) interface (a CLI sketch follows after this list).
- Downgrade: See [renewal for configuration downgrade](../../../../reseller.en-US/Pricing/Renew instances/Renew for configuration downgrade.md#). You can downgrade the configuration of an instance when you renew it. The new configuration takes effect after you [restart the instance](reseller.en-US/User Guide/Instances/Restart an instance.md#) in the ECS console within the first seven days of the new billing cycle.
- Pay-As-You-Go: See [change configurations of Pay-As-You-Go instances](reseller.en-US/User Guide/Instances/Change configurations/Change configurations of Pay-As-You-Go instances.md#). You must stop the instance to use this feature.
**Note:** Stopping an instance disrupts services. Exercise caution when performing this action.
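For reference, the restart can also be issued from the command line. The following is a minimal sketch, assuming the Alibaba Cloud CLI is installed and configured; the instance ID is a placeholder:

```
aliyun ecs RebootInstance --InstanceId i-bp1xxxxxxxxxxxxx
```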
## Adjust Internet bandwidth {#ChangeBandwidth .section}
You can adjust the Internet bandwidth of an instance. The methods vary according to your business needs and the billing method of the instance. The following table lists the methods.
|Billing method |Supports permanent upgrade?|Is it effective immediately? |Available feature |Description|
|:--------------|:--------------------------|:----------------------------|:-----------------|:----------|
|Subscription|Yes|Yes|[Upgrade configurations of Subscription instances](reseller.en-US/User Guide/Instances/Change configurations/Upgrade configurations of Subscription instances.md#)|Only applicable to VPC-Connected ECS instances to which no EIP addresses are attached, or classic network-connected ECS instances. The Internet and intranet IP addresses remain unchanged after you upgrade your configurations.|
|Subscription|Yes|Effective from next billing cycle|[Renew for configuration downgrade](../../../../reseller.en-US/Pricing/Renew instances/Renew for configuration downgrade.md#)|Adjust bandwidth in the new billing cycle. When the Internet bandwidth is set to 0 Mbit/s, the Internet IP address of a VPC-Connected instance is released in the new billing cycle, but that of a classic network-connected ECS instance is retained.|
|Pay-As-You-Go or Subscription|Yes|Yes|[Change EIP Internet bandwidth](reseller.en-US/User Guide/Instances/Change configurations/Change EIP Internet bandwidth.md#)|Only applicable to VPC-Connected instances to which [EIP addresses](https://partners-intl.aliyun.com/help/doc-detail/27714.htm) are bound. You can adjust the Internet bandwidth on an EIP address at any time.|
## Assign a public IP address {#AllocatePublicIp .section}
You can assign a public IP address to an ECS instance while [creating it](../../../../reseller.en-US/Quick Start for Entry-Level Users/Step 2. Create an instance.md#). If you skip this step, you can still assign one after the instance is created; however, this feature is only available for Subscription instances. For more information, see the following table.
|Feature|Is it effective immediately?|Description|
|:------|:---------------------------|:----------|
|[Upgrade configurations of Subscription instances](reseller.en-US/User Guide/Instances/Change configurations/Upgrade configurations of Subscription instances.md#)|Yes|Only applicable to VPC-Connected ECS instances to which no EIP addresses are attached, or classic network-connected ECS instances. Set the Internet bandwidth to a non-zero value to assign a public IP address.|
|[Renew for configuration downgrade](../../../../reseller.en-US/Pricing/Renew instances/Renew for configuration downgrade.md#)|Effective from next billing cycle|Set the Internet bandwidth to a non-zero value to assign a public IP address.|
|
{
"pile_set_name": "Github"
}
|
---
abstract: 'In this short note we use a simple model to describe the dynamical effects of break-up processes in subbarrier fusion involving weakly bound nuclei. We model two similar cases involving either a neutron or a proton halo nucleus, both schematically coupled to the break-up channels. We find that the decrease of the Coulomb barrier in the proton break-up channel leads, [*ceteris paribus*]{}, to a larger enhancement of the subbarrier fusion probabilities with respect to the neutron-halo case.'
author:
- Raj Kumar
- 'J. A. Lay'
- 'A. Vitturi'
bibliography:
- 'fusion.bib'
title: Enhanced subbarrier fusion for proton halo nuclei
---
Subbarrier heavy-ion fusion processes have been in the last decades an interesting issue for the low-energy nuclear physics community because of the natural link between structure and dynamics. It has in fact been recognized that the basic feature characterizing the subbarrier behavior is the dynamical coupling to the internal degrees of freedom of the two fusing partners [@Bal98; @Das83b; @Hag12]. The proper description of a fusion process therefore essentially requires singling out the relevant coupled channels involved and determining the associated diagonal and coupling potentials. This makes the situation with weakly bound nuclei more complex, due to the non-trivial inclusion of the strongly coupled continuum break-up channels and the consequent opening of final three-body (or four-body, in the case of two-particle halo nuclei) channels. This has led, from a theoretical point of view, to diverging results on the enhancement/suppression of the fusion probabilities, and to extremely difficult experimental measurements to determine (and separate) the different fusion and reaction channels [@Agu11; @Agu09; @Scu11; @Hag00; @Vin13; @Gom11; @Nak04].
Given the complexity of the situation, every case behaves differently and has to be treated specifically, with particular ion-ion potentials, associated heights of the Coulomb barrier, coupling form factors, specific relevant transfer channels and $Q$-values. For this reason it is not easy, in a fully treated coupled-channel description, to single out the role of specific issues. One of these is the possible role of the charged break-up channels in proton-halo nuclei with respect to the more common neutron break-up channels in neutron-halo nuclei. For this reason we introduce here a very simplified two-channel model, the first channel being the entrance channel and the second representing the full set of continuum break-up channels. In this channel we neglect the ejected particle (neutron or proton) and properly rescale energies and ion-ion potential. Our model has been applied, as representative cases of neutron and proton haloes, to the fusion with $^{58}$Ni of either $^{11}$Be or $^{8}$B. To single out just the dynamical effects due to the neutron/proton nature of the two halo nuclei, the potentials in the different channels have been constructed using the simple parameterization of Broglia and Winther [@BW] and an equal strength for the coupling between the entrance channels and the “break-up” ones.
In Fig. \[pot\] we display the resulting ion-ion potentials for the $^{8}$B+$^{58}$Ni (left frame) and $^{11}$Be+$^{58}$Ni (right frame) reactions. For comparison, in the same figures we also display the corresponding ion-ion potentials in our “break-up” channels, i.e. for the $^{7}$Be+$^{58}$Ni and $^{10}$Be+$^{58}$Ni cases. For a quick view, we also show in the figure as a line one energy $E$ in the incoming channel (20 MeV in the case of $^{8}$B and 17 MeV in the case of $^{11}$Be) and the corresponding energy in the break-up channel. This energy can be estimated by subtracting the energy needed for break-up and the average excitation energy, $\langle E^* \rangle$, in the core-nucleon relative motion, and then sharing the energy between them according to a distant break-up scenario. In this way, we consider $E_{bu}=(E-S_{1N}-\langle E^* \rangle) \cdot \frac{A-1}{A}$. $S_{1N}$ stands for the one-neutron or one-proton separation energy, i.e. $S_{1p}=0.136$ MeV for $^8$B and $S_{1n}=0.504$ MeV for $^{11}$Be. $\langle E^* \rangle$ is approximated by the peak energy of the dipole electromagnetic transition probabilities, $\langle E^* \rangle=0.5$ MeV for $^8$B and $\langle E^* \rangle=0.4$ MeV for $^{11}$Be. It is evident from the figure that, while in the neutron case the barriers in the incoming and break-up channels are similar (although the energy available in the latter is smaller), in the proton case the reduction in energy in the break-up channel is more than compensated by the lower Coulomb barrier due to the reduced charge of the projectile.
![Ion-ion potentials for $^{8}$B+$^{58}$Ni in the left frame and $^{11}$Be+$^{58}$Ni in the right frame (solid lines). The dashed lines correspond to the break-up channels, i.e. to the $^{7}$Be+$^{58}$Ni and $^{10}$Be+$^{58}$Ni cases respectively. The nuclear part of the potential is computed according to the proximity potential of Broglia and Winther [@BW]. []{data-label="pot"}](Fig1.pdf){width="1.\columnwidth"}
Fusion probabilities are calculated by solving the corresponding coupled-channel equations under ingoing-wave boundary conditions (IWBC). The coupled-channel formalism for direct reaction processes given by Austern [@Aus87] expands the total wave function in terms of the wavefunction for the internal state of the projectile, $\phi_{\beta}$, and the radial wave functions $\chi_{\beta}$ that account for the relative motion between projectile and target: $$\Psi^{(+)}=\Sigma_{\beta}\frac{\chi_{\beta}(R)}{R}\phi_{\beta}.
\label{eq.1}$$ This leads to a set of coupled equations for the radial wave functions: $$\frac{d^{2}\chi_{\beta}}{dR^{2}}+\frac{2\mu_{\beta}}{\hbar^{2}}[E_{\beta}-V_{\beta}^{eff}(R)]\chi_{\beta}
=\frac{2\mu_{\beta}}{\hbar^{2}}\Sigma_{\alpha\ne\beta}V_{\beta\alpha}^{coup}(R)\chi_{\alpha}
\label{eq.3}$$ In these expressions, $V$ is the interaction potential while, for a given channel $\beta$, $\mu_{\beta}$ is the reduced mass and $E_{\beta}$ is the relative energy.
In our model case, we will only consider two channels, the incoming channel and one channel representative of the break-up and subsequent fusion without the ejected particle. The two-channel problem in one spatial dimension $R$ is given by: $$\begin{aligned}
&&\frac{d^{2}\chi_{1}}{dR^{2}}+\frac{2\mu_{1}}{\hbar^{2}}[E_{1}-V_{1}]\chi_{1}
=\frac{2\mu_{1}}{\hbar^{2}}V_{coup}\chi_{2},\nonumber\\
&&\frac{d^{2}\chi_{2}}{dR^{2}}+\frac{2\mu_{2}}{\hbar^{2}}[E_{2}-V_{2}]\chi_{2}
=\frac{2\mu_{2}}{\hbar^{2}}V_{coup}\chi_{1},
\label{eq.5}\end{aligned}$$ where, in our case, $E_{1}=E$, the incoming energy, and $E_{2}=E_{bu}$, the energy in the break-up channel.
The total potential for each channel, $V_{1,2}(R)$, is given by the sum of the Coulomb potential and a nuclear proximity potential with the Broglia and Winther [@BW] parameterization. The coupling potential $V_{coup}$ is taken as a derivative Woods-Saxon form with the same radius and diffuseness as the proximity potential of the incoming channel. Its strength is set to 10% of the strength of the same proximity potential.
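To make the model concrete, the following minimal sketch (Python/NumPy) builds the diagonal channel potentials and the coupling term described above. It is an illustration only: the Broglia-Winther proximity potential is replaced by a Woods-Saxon stand-in, and all parameter values ($V_0$, $R_0$, $a$) are assumed rather than taken from the actual calculation.

```python
import numpy as np

E2 = 1.44  # e^2 / (4 pi eps0) in MeV fm

def v_nuclear(R, V0=50.0, R0=8.0, a=0.63):
    """Woods-Saxon stand-in for the nuclear part (the text uses the
    Broglia-Winther proximity potential; V0, R0, a here are assumed)."""
    return -V0 / (1.0 + np.exp((R - R0) / a))

def v_coulomb(R, Z1, Z2):
    """Point-charge Coulomb potential between projectile and target."""
    return Z1 * Z2 * E2 / R

def v_channel(R, Z1, Z2, **ws):
    """Diagonal potential V_1 or V_2: Coulomb plus nuclear part."""
    return v_coulomb(R, Z1, Z2) + v_nuclear(R, **ws)

def v_coup(R, V0=50.0, R0=8.0, a=0.63, frac=0.10):
    """Derivative Woods-Saxon coupling: same radius and diffuseness as
    the nuclear potential, strength 10% of it (sign convention assumed)."""
    f = 1.0 / (1.0 + np.exp((R - R0) / a))
    return frac * V0 * f * (1.0 - f) / a

# Barrier of the 8B+58Ni entrance channel (Z1=5) vs the 7Be+58Ni
# break-up channel (Z1=4): the reduced charge lowers the barrier.
R = np.linspace(6.0, 16.0, 1000)
print(v_channel(R, 5, 28).max(), v_channel(R, 4, 28).max())
```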
The coupled-channel equations are solved by imposing the boundary conditions that there are only incoming waves at $R=R_{min}$, i.e. the minimum position of the Coulomb pocket inside the barrier, and only outgoing waves at infinity for all channels except the entrance channel ($\beta$=1), which has an incoming wave with amplitude one as well. This boundary condition is referred to as the incoming wave boundary condition (IWBC) [@Bal98; @Hag12; @Das83], and is valid for heavy-ion reactions, where there is strong absorption inside the Coulomb barrier. The numerical solution is matched to a linear combination of incoming and outgoing Coulomb wave functions at a finite distance $R_{max}$ beyond which both the nuclear proximity and the coupling potentials are negligible. The boundary condition of a wave incident from the right in channel $\beta$=1 and transmitted and reflected waves in both channels is given by $$\begin{aligned}
\chi_{\beta}(R) \xrightarrow{R\rightarrow\infty} & \delta_{\beta1} H^{(-)}_{\ell}(k_{\beta}R)&+~~r_{\beta}H^{(+)}_{\ell}(k_{\beta}R); \nonumber\\
\chi_{\beta}(R=R_{min}) =&t_{\beta} H^{(-)}_{\ell}(k_{\beta}R),&
\label{eq:12}\end{aligned}$$ where $\ell$ is the angular momentum, $H^{(+)}_{\ell}$ and $H^{(-)}_{\ell}$ are the outgoing and incoming Coulomb wave functions, respectively, and $k=\sqrt{2\mu E/\hbar^2}$ is the wave number associated with the energy $E$. The total transmission probability is then given by $$\begin{aligned}
T=\sum_{\beta}\frac{v_{\beta}}{v_{1}}\mid t_{\beta}\mid^{2} = |t_1|^2+\frac{v_2}{v_1} |t_2|^2
\label{eq:13}\end{aligned}$$ where $v_1$ and $v_2$ are the velocities corresponding to channels 1 and 2.
The fusion cross-section, in terms of partial waves, is given by $$\sigma=\sum_{\ell=0}^{\ell_{max}}\sigma_{\ell}=\frac{\pi\hbar^2}{2\mu_{1} E}\sum_{\ell=0}^{\ell_{max}}(2\ell+1)T_{\ell}(E).
\label{eq:14}$$
The transmission probability for a given partial wave can also be calculated simply by a shift of energy, $$T_{\ell}\cong T_{0}\left[E-\frac{\ell(\ell+1)\hbar^{2}}{2\mu_{1} r_{0}^2} \right],
\label{eq:15}$$ where $r_{0}$ is the position of the barrier for the s-wave [@Bal98].
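As a purely illustrative implementation of the two formulas above, the sketch below performs the partial-wave sum with the centrifugal energy shift. For $T_{0}(E)$ it substitutes the analytic Hill-Wheeler transmission through a parabolic barrier in place of the two-channel IWBC result, and the barrier height, curvature and $r_{0}$ are assumed numbers, so the output is not the calculation of the text.

```python
import numpy as np

HBARC = 197.327  # hbar c in MeV fm
AMU = 931.494    # atomic mass unit in MeV

def T0(E, VB=20.8, hw=3.5):
    """Hill-Wheeler s-wave transmission through a parabolic barrier,
    used as a stand-in for the coupled-channel IWBC T_0 (VB, hw assumed)."""
    return 1.0 / (1.0 + np.exp(2.0 * np.pi * (VB - E) / hw))

def sigma_fusion(E, A1=8, A2=58, r0=8.5, lmax=60):
    """Fusion cross section in mb: partial-wave sum with each T_l taken
    as T_0 shifted by the centrifugal energy l(l+1) hbar^2/(2 mu r0^2)."""
    mu = A1 * A2 / (A1 + A2) * AMU  # reduced mass in MeV
    l = np.arange(lmax + 1)
    E_shift = E - l * (l + 1) * HBARC**2 / (2.0 * mu * r0**2)
    sum_l = np.sum((2 * l + 1) * T0(E_shift))
    return 10.0 * np.pi * HBARC**2 / (2.0 * mu * E) * sum_l  # fm^2 -> mb

for E in (16.0, 18.0, 20.0, 22.0, 24.0):
    print(f"E = {E:5.1f} MeV   sigma = {sigma_fusion(E):9.3f} mb")
```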
![Fusion cross sections for the $^{8}$B+$^{58}$Ni (left panel) and $^{11}$Be+$^{58}$Ni (right panel) reactions. Solid lines represent the case without break-up, with a single channel and no coupling, whereas the dashed lines show the two-channel case with coupling to the proton (left) and neutron (right) break-up channels.[]{data-label="fullxsec"}](Fig2.pdf){width="1.0\columnwidth"}
The resulting cross sections for the $^{8}$B+$^{58}$Ni and $^{11}$Be+$^{58}$Ni fusion reactions are shown in Fig. \[fullxsec\]. For each reaction, we compare the situation without break-up, where there is no coupling to the second channel (solid lines), with the case of coupling to the break-up channel (dashed lines). In both cases, as a result of this coupling, a certain enhancement is found. In order to compare the two cases appropriately, we show in Fig. \[redxsec\] a reduced fusion cross section, in terms of the collision radius of each reaction, versus the energy divided by the estimated Coulomb barrier. As expected, the two no-coupling cross sections coincide almost perfectly, whereas the coupling cases show different results. Here, it is clearly seen that the proton break-up case has a larger cross section at low energies. On the other hand, the neutron break-up case has a larger enhancement at energies immediately close to the energy of the Coulomb barrier. To cancel the effects of choosing two different nuclei for the neutron and the proton cases, we add a third case in Fig. \[redxsec\] for the $^{8}$B+$^{58}$Ni reaction where the same potential, and so the same Coulomb barrier, is used for both channels, $V_{2}=V_{1}$ (dot-dashed line). This case is similar to considering that the $^{8}$B loses a neutron instead of a proton. As expected, the cross section follows the same trend as for $^{11}$Be+$^{58}$Ni but with an apparently smaller enhancement.
![(Color online) Cross section divided by the square of the interaction radius versus the energy divided by the estimated Coulomb barrier in the incoming channel ($V_{B}$) for the $^{8}$B+$^{58}$Ni and $^{11}$Be+$^{58}$Ni fusion reactions. We compare the no-coupling cases for both reactions (solid line) with the proton (dotted line) and neutron (dashed line) break-up cases.[]{data-label="redxsec"}](Fig3.pdf){width="0.8\columnwidth"}
![Barrier distributions for the $^{8}$B+$^{58}$Ni (left panels) and $^{11}$Be+$^{58}$Ni (right panels) fusion reactions, both with (dashed) and without (solid) coupling to the break-up channel. In the upper panels we show the derivative of the transmission factor for $\ell=0$, whereas in the lower panels we evaluate the second derivative of the fusion cross section times the energy.[]{data-label="bardist"}](Fig4.pdf){width="0.8\columnwidth"}
In order to clarify which processes give rise to these two different behaviors, it is useful to examine the barrier distributions for both reactions. These can be obtained by evaluating the second energy derivative of the product of the cross section and the energy, or the first derivative of the transmission for $\ell=0$. Both observables are shown in Fig. \[bardist\]. A clear difference between the proton- and neutron-induced effects on fusion is found. Both cases present two barriers, as expected according to Fig. \[pot\]. However, in the proton case, the secondary barrier is below the barrier in the incoming channel and so allows a larger enhancement at low energies. Instead, in the neutron case, the secondary barrier is at a higher energy. Therefore, the neutron enhancement simply arises from the displacement towards a lower energy of the final effective Coulomb barrier.
The results obtained here are similar to the effect of negative or positive $Q$-values on barrier penetration [@Das97; @Das83b]. As shown, for example, in figure 5.1 of [@Das97], the positive $Q$-value case shows the same cross section and barrier distribution as the proton break-up case, and the same parallelism is found between the negative $Q$-value and neutron break-up cases. Indeed, effective $Q$-values can be estimated and compared from the difference between the energies and the barriers in each channel. This effective $Q$-value may be evaluated as $$Q_{eff}=(E_{bu}-V_{B}^{2})-(E-V_{B}^{1}),$$ where $V_{B}^{1}$ and $V_{B}^{2}$ are the energies of the Coulomb barrier for the incoming and break-up channels respectively. Here we have also neglected the effect of the separation energy and the average excitation energy of the projectile. Looking at the energies plotted in Fig. \[pot\], we obtain $Q_{eff}=1.97$ MeV for the proton case and $Q_{eff}=-1.12$ MeV for the neutron case.
The exact value of $Q_{eff}$ will depend on the incoming energy. Nevertheless, it can be shown that it is always negative in the neutron case, whereas it is positive in the proton case at energies around or below the Coulomb barrier. Therefore, the differences between the energies and the Coulomb barriers due to the loss of a neutron or a proton can explain the results obtained in both cases.
In conclusion, the possibility of proton break-up produces an enhancement of the subbarrier fusion. Similar results were also found by Nakatsukasa *et al.* [@Nak04] in a time-dependent approach. This fact can explain the enhancement recently found for the proton halo nucleus $^{8}$B [@Agu11]. This enhancement is larger than in the neutron case, and the energy distribution is also quite different. Indeed, for the neutron case, the enhancement is mainly due to a displacement in the energy of the Coulomb barrier. This can also explain why it is unclear whether or not a neutron halo produces enhanced subbarrier fusion.
This work has been supported by MIUR research fund PRIN 2009TWL3MX. The authors acknowledge L. F. Canto for useful discussions.
|
{
"pile_set_name": "ArXiv"
}
|
Q:
Bitwise operators in conditional statement
I am trying to enter an if statement based on a bit-wise operator without changing the original bits, and am confused why what I have doesn't work.
I am checking if at least one of bits 7, 4, and 2 is 0.
Working code: (it changes the original bits, which I do not want)
#include <stdio.h>

void main() {
    unsigned char ch = 0b11111111;
    ch = ~(ch | 0x6B);
    if (ch) {
        printf("%s\n", "YES");
    }
    else {
        printf("%s\n", "NO");
    }
}
That prints NO (as it should) and prints YES if you change bit 7, 4, or 2 to a 0.
Non-working code:
#include <stdio.h>

void main() {
    unsigned char ch = 0b11111111;
    if (~(ch | 0x6B)) {
        printf("%s\n", "YES");
    }
    else {
        printf("%s\n", "NO");
    }
}
I am pretty stumped as I think the two pieces of code are identical? Thanks!
A:
That's easy to explain:
0x6B as you write it gets interpreted as a default integer (probably 32 bit). So (ch|0x6B)== 0x000000FF==0b00000000000000000000000011111111. Hence, ~(ch|0x6B) == 0b11111111111111111111111100000000, which is not 0, hence is true.
If you, however, put that result into a char, only the lower 8 bits are saved, and hence (unsigned char) ~(ch|0x6B) == 0, which evaluates to false.
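A minimal fix that keeps the original bits unchanged is therefore to mask the complement back down to 8 bits inside the condition (a cast to unsigned char works equally well), for example:

#include <stdio.h>

int main(void) {
    unsigned char ch = 0b11111111;   /* 0b literals are a GCC extension, as in the question */
    /* mask off the promoted int's upper bits before testing */
    if ((~(ch | 0x6B)) & 0xFF) {
        printf("%s\n", "YES");
    }
    else {
        printf("%s\n", "NO");
    }
    return 0;
}

This prints NO for 0b11111111 and YES as soon as bit 7, 4, or 2 is cleared, without modifying ch.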
|
{
"pile_set_name": "StackExchange"
}
|
---------------------- Forwarded by Peter F Keavey/HOU/ECT on 05/16/2001 04:00 PM ---------------------------
Peter F Keavey
05/23/2000 08:55 AM
To: [email protected]
cc:
Subject: philly hockey
---------------------- Forwarded by Peter F Keavey/HOU/ECT on 05/23/2000 08:53 AM ---------------------------
Sarah Mulholland
05/23/2000 07:19 AM
To: Maureen Smith/HOU/ECT@ECT, Peter F Keavey/HOU/ECT@ECT
cc:
Subject: philly hockey
---------------------- Forwarded by Sarah Mulholland/HOU/ECT on 05/23/2000 07:18 AM ---------------------------
[email protected] on 05/22/2000 07:09:21 PM
To: [email protected]
cc:
Subject: philly hockey
Enjoy!
---------------------- Forwarded by Christopher J. Bednar on 05/22/2000 06:18 PM
---------------------------
To: Christopher J. Bednar
cc:
Date: 05/18/2000 09:07 AM
From: Brian Abbott, Philadelphia, 565 / 5709
Subject: philly hockey
- phillyhockey.jpg
*******************Internet Email Confidentiality Footer*******************
Privileged/Confidential Information may be contained in this message. If you
are not the addressee indicated in this message (or responsible for delivery of
the message to such person), you may not copy or deliver this message to anyone.
In such case, you should destroy this message and kindly notify the sender by
reply email. Please advise immediately if you or your employer do not consent to
Internet email for messages of this kind. Opinions, conclusions and other
information in this message that do not relate to the official business of my
firm shall be understood as neither given nor endorsed by it.
|
{
"pile_set_name": "Enron Emails"
}
|
This invention relates to a seal assembly for a roller cutter drill bit having a pressure balanced lubrication system, and more particularly to a seal assembly between a journal on the bit body and a roller cutter mounted for rotation on the journal.
Heretofore, seal assemblies in a rotary drill bit between the journal and the roller cutter mounted thereon for rotation have included a pair of metal seal rings urged into face-to-face sealing contact by a pair of elastomeric seal rings, which seal against the metal seal rings in addition to forcing the metal seal rings into sealing contact. Normally one metal ring and its elastomeric ring rotate with the cutter, and the other metal ring is held in a static or non-rotating position on the journal by the other elastomeric ring. Thus, sliding sealing contact is normally provided between the metal contacting faces of the opposed metal seal rings. The use of a pair of elastomeric rings permits the metal seal rings to float back and forth and move together, with little chance of being separated by the severe vibrations encountered in drill bits while drilling. Any separation of the metal sealing faces permits leakage either of the drilling fluid into the bearing areas between the journal and roller cutter, or of lubricant out of the bearing areas.
Metal face seals with two metal seal rings and two elastomeric rings have been used for years with success to seal bearings that must operate in an abrasive environment such as, for example, track rollers for treads on tractors, as disclosed in U.S. Pat. No. 3,180,648. A similar type of seal is also disclosed in U.S. Pat. No. 3,216,513 for use in rolling cutter assemblies for large diameter bits for mining operations such as tunneling or drilling vent shafts for mines. These seals have heretofore provided both elastomeric seal rings on the same outer peripheral surface of the metal seal rings. These mining type applications have little or no borehole pressure and consequently do not require a hydrostatic pressure compensator as used in most downhole drill bits used in oil wells. The use of a seal such as shown in U.S. Pat. No. 3,216,513 in drill bits for oil wells could have severe problems due to pressure fluctuations across the seal caused by rapid excursions of the rolling cutter on the bearing journal as the bit drills, resulting in fluid pressure differentials between lubricant inside the bit and drilling fluid outside the bit. Because both elastomeric seal rings are located on the outer peripheral surfaces of the metal rings, the resulting pressure differentials could cause leakage of mud contaminants into the bearing area because the seal contact pressure of the metal rings decreases as the mud pressure becomes greater than the lubricant pressure.
A metal face seal assembly as disclosed in U.S. Pat. No. 4,516,641 dated May 14, 1985 for drill bits helps compensate for these pressure fluctuations across the seal assembly caused by axial movements of the cutter by floating movement of the rigid rings in the seal cavity to balance the lubricant volume in the space between the seal and the main bearing. As disclosed in this patent the ratio of rigid ring movement to cutter movement in an axial direction was determined to be as much as 1.88 to 1 in order to balance the lubricant in this space. This still can cause a significant pressure differential across the seal assembly as one elastomeric ring is forced to compress more while the other elastomeric ring compresses less. The reduced compression of one of the elastomeric rings also can cause the associated rigid ring to slip resulting in wear of the elastomeric seal from frictional contact with the associated metal seal. Likewise, as shown in U.S. Pat. No. 4,466,622 dated Aug. 21, 1984, a metal face seal assembly is shown including a pair of metal seal rings and a pair of associated elastomeric rings, and particularly upon movement of the roller cutter to its outermost axial position on the journal, one of the elastomeric rings has more compression than the other elastomeric ring which could result in slippage and wear of one of the elastomeric rings ultimately causing seal failure.
One of the problems involved in the wear or deterioration of bearing areas or bearing surfaces between the journal and roller cutter is the ingress of drilling fluid into the bearing areas. The drilling fluid normally has foreign matter or contaminants entrained therein which can be damaging to the bearing areas. In seal assemblies heretofore for roller cutter drill bits used in oil wells and requiring a hydrostatic pressure compensator which includes a pair of metal seal rings urged into face to face sealing contact by a pair of elastomeric seal rings, the elastomeric seal rings have been provided on different peripheral surfaces of the metal seal rings, i.e. one elastomeric seal has been provided on the outer peripheral surface of one metal seal ring and the other elastomeric seal has been provided on the inner peripheral surface of the other metal seal ring. Normally one elastomeric seal ring is positioned on the outer peripheral surface of the dynamic metal seal ring which rotates with the cutter while the other elastomeric seal ring is positioned on the inner peripheral surface of the static metal seal ring adjacent to the journal as shown in the aforesaid U.S. Pat. Nos. 4,466,622 and 4,516,641. However, under certain conditions of operation, such as an axial movement of the cutter from an innermost position on the journal to the outermost position on the journal, a maximum fluid pressure differential results from the drilling fluid along with a loss of compression in one of the elastomeric rings and possible slippage and wear of that elastomeric ring. Also, a rapid back and forth movement of the seal assembly in the cavity as the cutter moves back and forth may cause violent excursions of the seal assembly from severe vibrations of the bit while drilling.
# Default target: run all Go tests in the module.
.PHONY: default
default:
	go test ./...
Quantum algorithms are global random searching algorithms based on the principles, laws and effects of quantum mechanics. They are used for controlling a process or for processing data in a database, and more specifically, for controlling a process that may include search-of-minima intelligent operations.
In a quantum search, each design variable is represented by a finite linear superposition of initial states, with a sequence of elementary unitary steps manipulating the initial quantum state |i> (for the input) such that a measurement of the final state of the system yields the correct output. Usually, three principal operators, i.e., linear superposition (coherent states), entanglement and interference, are used in the quantum search algorithm.
For a better understanding, a brief description of quantum search algorithms is provided. The problems solved by quantum algorithms may be stated as follows:
Input: a function f: {0,1}^n → {0,1}^m
Problem: find a certain property of f

The structure of a quantum algorithm is outlined by a high-level representation in the schematic diagram of FIG. 1.
The input of a quantum algorithm is always a function f from binary strings into binary strings. This function is represented as a map table, which defines for every string its image. Function f is first encoded into a unitary matrix operator UF depending on f properties. This operator calculates f when its input and output strings are encoded into canonical basis vectors of a Complex Hilbert Space: UF maps the vector code of every string into the vector code of its image by f.
BOX 1: UNITARY MATRIX U_F

A square matrix U_F on the complex field is unitary if its inverse matrix coincides with its conjugate transpose:

U_F^(-1) = U_F^†

A unitary matrix is always reversible and preserves the norm of vectors.
When the matrix operator UF has been generated, it is embedded into a quantum gate G, a unitary matrix whose structure depends on the form of matrix UF and on the problem to be solved. The quantum gate is the core of a quantum algorithm. In every quantum algorithm, the quantum gate acts on an initial canonical basis vector (the same vector can always be chosen) to generate a complex linear combination (called a superposition) of basis vectors as the output. The superposition contains all the information to answer the initial problem.
After the superposition has been created, a measurement takes place to extract this information. In quantum mechanics, measurement is a non-deterministic operation that produces as output only one of the basis vectors in the superposition. The probability of every basis vector being the output of a measurement depends on its complex coefficient (probability amplitude) in the complex linear combination.
The sequential action of the quantum gate and of the measurement forms the quantum block. The quantum block is repeated k times to produce a collection of k basis vectors. Because measurement is a non-deterministic operation, these basis vectors are not necessarily identical, and each one of them will encode a piece of the information needed to solve the problem. The last part of the algorithm includes interpretation of the collected basis vectors to get the right answer for the initial problem with a certain probability.
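For illustration only (this sketch is not part of the original text), the repeated measurement of the quantum block can be mimicked classically by sampling basis-vector indices with probabilities given by the squared moduli of the probability amplitudes; the function name `measure` and all numerical values below are hypothetical.

```python
import numpy as np

# Minimal sketch: classical sampling that mimics the measurement step of
# the quantum block. Basis index j is returned with probability
# |amplitudes[j]|^2, and the block is repeated k times.

def measure(amplitudes: np.ndarray, k: int, rng=None) -> np.ndarray:
    """Simulate k independent runs of the quantum block's measurement."""
    rng = rng or np.random.default_rng()
    probs = np.abs(amplitudes) ** 2
    probs /= probs.sum()              # guard against rounding drift
    return rng.choice(len(amplitudes), size=k, p=probs)

# Example: equal superposition of 4 basis vectors, measured k = 8 times.
amps = np.full(4, 0.5, dtype=complex)
print(measure(amps, k=8))            # e.g. [2 0 3 1 1 0 2 3]
```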
The behavior of the encoder block is described in the detailed schematic diagram of FIG. 2. Function f is encoded into matrix UF in three steps.
Step 1: The map table of function f: {0,1}^n → {0,1}^m is transformed into the map table of the injective function F: {0,1}^(n+m) → {0,1}^(n+m) such that:

F(x_0, ..., x_{n-1}, y_0, ..., y_{m-1}) = (x_0, ..., x_{n-1}, f(x_0, ..., x_{n-1}) ⊕ (y_0, ..., y_{m-1}))    (1)
BOX 2: XOR OPERATOR ⊕

The XOR operator between two binary strings p and q of length m is a string s of length m such that the i-th digit of s is calculated as the exclusive OR between the i-th digits of p and q:

p = (p_0, ..., p_{m-1})
q = (q_0, ..., q_{m-1})
s = p ⊕ q = ((p_0 + q_0) mod 2, ..., (p_{m-1} + q_{m-1}) mod 2)
The need to deal with an injective function comes from the requirement that UF is unitary. A unitary operator is reversible, so it cannot map two different inputs to the same output. Given that UF is the matrix representation of F, F is required to be injective. If the matrix representation of function f is directly used, a non-unitary matrix could be obtained since f could be non-injective. Injectivity is thus fulfilled by increasing the number of bits and considering function F instead of function f. Function f can always be calculated from F by putting (y0, . . . , ym−1)=(0, . . . , 0) in the input string and reading the last m values of the output string.
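The construction just described can be illustrated with a short sketch: assuming the usual binary-integer indexing of canonical basis vectors (high n bits encode x, low m bits encode y), U_F is simply the permutation matrix of the bijection F. The helper name `build_UF` and the example f are hypothetical, not taken from the patent.

```python
import numpy as np

# Sketch: build U_F as the permutation matrix of F(x, y) = (x, f(x) XOR y).
# Because XOR with the fixed string f(x) is a bijection in y, each column
# holds exactly one 1, so the matrix is unitary by construction.

def build_UF(f, n: int, m: int) -> np.ndarray:
    dim = 1 << (n + m)
    UF = np.zeros((dim, dim))
    for s in range(dim):
        x, y = s >> m, s & ((1 << m) - 1)
        t = (x << m) | (f(x) ^ y)    # image index F(s)
        UF[t, s] = 1.0               # column s maps |s> to |F(s)>
    return UF

# Example: n = 2, m = 1 with f(x) = x mod 2.
UF = build_UF(lambda x: x & 1, n=2, m=1)
assert np.allclose(UF @ UF.T.conj(), np.eye(UF.shape[0]))  # unitary check
```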
Step 2: The map table of function F is transformed into the map table of U_F, subject to the constraint:

∀ s ∈ {0,1}^(n+m): U_F[τ(s)] = τ[F(s)]    (2)

The code map τ: {0,1}^(n+m) → C^(2^(n+m)) (where C^(2^(n+m)) is the target complex Hilbert space) is such that:
τ(0) = (1, 0)^T = |0⟩
τ(1) = (0, 1)^T = |1⟩    (3)
τ(x_0, ..., x_{n+m-1}) = τ(x_0) ⊗ ... ⊗ τ(x_{n+m-1}) = |x_0 ... x_{n+m-1}⟩
BOX 3: VECTOR TENSOR PRODUCT ⊗

The tensor product between two vectors of dimensions h and k is a vector of dimension hk, such that:

|x⟩ ⊗ |y⟩ = (x_1, ..., x_h)^T ⊗ (y_1, ..., y_k)^T = (x_1 y_1, ..., x_1 y_k, ..., x_h y_1, ..., x_h y_k)^T

Physical interpretation: if a component of a complex vector is interpreted as the probability amplitude of a system being in a given state (indexed by the component number), the tensor product between two vectors describes the joint probability amplitude of two systems being in a joint state.
Q:
How to add social login for Jhipster Gateway
I have generated a UAA + Gateway application. I want to add social login; can someone help me do this?
A:
You can create a monolithic application with social login, then copy the Java files to the UAA service and the frontend files to the gateway.

In the gateway, copy the app/account/social and app/shared/social folders, and add the modules and components to the index.ts, account-module.ts and shared-modules.ts files.

In the UAA service:

1. Add the Spring Social dependencies.
2. Add the social configuration to the yml files.
3. Copy all the social Java files into the UAA packages.
Introduction {#Sec1}
============
Glioblastoma (GBM) is the most common and aggressive primary brain tumor. These tumors are characterized by invasiveness coupled with neovascularization with a median survival of \~15 months from diagnosis through treatment with current standard of care \[[@CR1], [@CR2]\]. Patients with recurrent GBM have an even worse prognosis with a median survival of 6--9 months \[[@CR3]\]. As indicated by the poor survival rate, current treatments have not been effective in preventing disease progression.
Traditionally, it was believed that the central nervous system was an immuno-privileged compartment. However, recent data have shown the presence of lymphatic drainage, increased permeability in the presence of tumor as well as antigen-presenting capability of microglial cells \[[@CR4]\]. The GBM tumor microenvironment is associated with pathological cytokine profiles and immunosuppressive signals which prevent tumor recognition by the innate and adaptive immune system \[[@CR5]\]. Inadequate antigen presentation in the tumor microenvironment by antigen-presenting cells (APCs) such as dendritic cells (DCs) and macrophages can lead to the development of tumor immunotolerance via T-cell exhaustion and an increase in immunosuppressive regulatory T cells (Tregs) \[[@CR6]\].
Cancer immunotherapy attempts to harness the specificity and cytotoxic activity of the immune system to control growth and destroy tumor cells. Numerous strategies are in progress to reverse these immunosuppressive signals, including the use of immune checkpoint inhibitors which have been shown to partially reverse the immunosuppressive signals resulting in a reduction in tumor mass in some subjects \[[@CR7]--[@CR9]\]. In addition, genetically modified autologous T cells, which incorporate a chimeric gene consisting of an anti-folate single-chain antibody or modified with EGFRvIII, have been shown effective in the treatment of subsets of GBM \[[@CR10], [@CR11]\]. Alternatively, tumor immunosuppression can be overcome by directly stimulating the immune system by the local administration of immunostimulant cytokines such as interleukin-12 (IL-12) and interferon-gamma (IFN-γ) which have been shown to be downregulated in the GBM microenvironment \[[@CR12]\].
IL-12, a heterodimeric protein, plays a pivotal role in linking the innate and adaptive immune systems \[[@CR13], [@CR14]\]. IL-12 is endogenously produced by APCs and acts upon natural killer (NK) cells and T cells, driving the differentiation of naive CD4+ T cells to a T helper 1 (Th1) phenotype and activating naive T cells into CD8+ cytotoxic T lymphocytes \[[@CR15]\]. Thus, IL-12 serves as a master regulator of adaptive type 1 cell-mediated immunity, a critical pathway involved in the protection against cancer. In addition to these effects, IL-12 serves as an important factor in the differentiation and survival of memory T cells \[[@CR16]\].
Studies with recombinant IL-12 protein have been performed in multiple murine tumor models using systemically administered IL-12. Results of these studies clearly demonstrated a reduction in tumor growth rate coupled with no appreciable toxicity \[[@CR17]\].
Based on these results, phase 1 studies were performed where recombinant IL-12 was administered systemically to human subjects. However, in the expanded phase 2 study, severe toxicity was observed and the study was halted \[[@CR18]\], thereby indicating the need to precisely control and locally administer IL-12 at the target site.
Several groups demonstrated that in vitro transduction of tumor cells with IL-12 genes could be therapeutically useful and avoid the severe toxicities observed with systemic administration \[[@CR19], [@CR20]\]. Direct intratumoral injection of defective recombinant adenovirus encoding IL-12 resulted in tumor regression and long-term systemic tumor immunity. However, localized administration of IL-12 by this method is not controllable, thus resulting in a narrow therapeutic index \[[@CR21], [@CR22]\].
The RheoSwitch Therapeutic System^®^ (RTS^®^) provides a gene expression control switch platform that confers tightly regulated, inducible gene expression and has been validated in clinical trials involving IL-12 \[[@CR23], [@CR24]\]. An RTS^®^ gene switch consists of multiple inter-dependent functional components: (1) two transcription factors, (2) an inducible promoter and (3) a small molecule activator ligand. One of the transcription factors serves as a co-activation partner, and is a fusion between a transcription activation domain and a nuclear factor domain. The second transcription factor serves as a ligand-inducible transcription factor and is a DNA binding domain fused to a nuclear factor ligand binding domain. The RheoSwitch activator ligand (veledimex) is a synthetic analog of the insect molting hormone ecdysone \[[@CR25], [@CR26]\]. Both fusion proteins are constitutively expressed and, in the absence of the activator ligand veledimex, provide an "off" signal with no transgene expression. In the presence of veledimex, stabilization of the heterodimeric complex between the two fusion proteins forms an active transcription factor complex, leading to transcriptional activation from the inducible promoter through recruitment of transcriptional co-activators and components of the basal transcription machinery to induce expression ("on" signal) of a gene of interest placed under the control of the RTS^®^ \[[@CR26]--[@CR28]\]. We have previously shown that Ad-RTS-mIL-12+veledimex elicited dose-related decreases in tumor growth rate with no significant change in body weight in both breast and melanoma syngeneic mouse models \[[@CR25]\].
In this study, we explore the mechanism of action of a direct intratumoral injection of Ad-RTS-mIL-12 plus orally administered veledimex and correlate it with antitumor activity and induced systemic immunity in an orthotopic syngeneic mouse GBM model. The successful preclinical studies led to ongoing studies of Ad-RTS-hIL-12+veledimex in cancer subjects with relapsed and refractory GBM (NCT01397708).
Materials and methods {#Sec2}
=====================
Cell lines and mice {#Sec3}
-------------------
A murine glioma tumor cell line, GL-261, was purchased from American Type Culture Collection (Manassas, VA). Cells were maintained in complete RPMI containing 10% heat-inactivated fetal bovine serum (Atlanta Biologicals Inc., Lawrenceville, GA). Cells were grown and maintained at 37 °C in a humidified atmosphere with 5% CO~2~. Female C57BL/6 mice, age 6--7 weeks or 20--23 weeks for the control arm of the rechallenge study, were obtained from Harlan Laboratories (Indianapolis, IN) and Charles River Laboratories (Wilmington, MA). All animal care and experimental procedures used in this study were performed in accordance with the protocol approved by the Institutional Animal Care and Use Committee guidelines.
Adenoviral vectors and therapeutic agents {#Sec4}
-----------------------------------------
Adenoviral shuttle vectors were generated as previously described by Komita and colleagues \[[@CR22]\] containing the mIL-12 sequences. Adenoviral vectors were generated using the RAPAd adenoviral system \[[@CR23]\]. Ad-RTS-mIL-12 (containing and expressing the murine IL-12 gene) was purified by Vivante/Lonza (Houston, TX) and stored in 20 mM Tris+10% glycerin at pH 8.2. Some preparations were also produced by Viraquest Inc. (North Liberty, IA) and stored in A195 buffer consisting of 10 mM Tris, 75 mM NaCl, 10 mM histidine, 5% sucrose (w/v), 1 mM MgCl~2~, 0.02% PS-80, 100 µM EDTA and 0.5% EtOH, pH 7.4 \[[@CR24]\]. All preparations were negative for replication-competent adenovirus. Prior to intratumoral (IT) administration, Ad-RTS-mIL-12 was diluted into phosphate-buffered saline (PBS) to obtain the desired concentration of vector particles (vp)/injection. Virus was titrated by a plaque-forming unit assay. Bevacizumab (Avastin®, Genentech, Lot No. 557412) was supplied as a 25 mg/mL solution, and was stored at 4 °C. PD-1 inhibitor (CD279, clone RMP1-14; Lot No. 5792/0915) was received as a liquid in PBS from BioXcell Inc. (West Lebanon, NH) and was stored at 4 °C protected from light.
Quantitative real-time PCR and RT-PCR {#Sec5}
-------------------------------------
Mouse genomic DNA and RNA was isolated from snap-frozen tumors using the DNeasy and RNeasy nucleic acid isolation kits from Qiagen (Germantown, MD). Isolated RNA was also further treated with RNase-free DNAse (Qiagen). Quantification of isolated DNA and RNA was performed on a Nanodrop spectrophotometer (Thermo Fisher Scientific, Cambridge, MA), and further quantified using Quant-IT PicoGreen^®^ and RiboGreen^®^ assay kits from Life Technologies (Carlsbad, CA). RNA quality was also assessed using the Agilent RNA Nano kit (Agilent Technologies Inc., Lexington, MA). Absolute quantification was performed on DNA samples to determine gene copy numbers. Standard curves were generated using murine IL-12 shuttle plasmid (Intrexon Corp., Blacksburg, VA). Each reaction involved 100 ng of template DNA to determine the vector copy number and samples were run in triplicate. Real-time quantitative polymerase chain reaction (PCR) assays were performed using an ABI 7300 and/or 9300 HT system (ABI Technologies, Foster City, CA).
For relative gene expression analysis, RNA was converted to complementary DNA (cDNA) using the qScript™ cDNA SuperMix (Quanta Biosciences, Gaithersburg, MD), per the manufacturer's protocol. Equivalent amounts of cDNA were used per reaction and run in triplicate. Isolated RNA was analyzed by quantitative reverse transcription PCR (qRT-PCR; TaqMan assay) for mIL-12 and a house-keeping gene panel (mACTB). The relative expression levels of mIL-12 were calculated using the ∆∆CT method. Assays were run using an ABI 7300 and/or 9300 HT system (ABI Technologies, Foster City, CA).
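For reference, the ∆∆CT calculation mentioned above follows the standard 2^(−∆∆Ct) fold-change formula; the sketch below is a generic illustration of that formula, not study code, and the Ct values are invented.

```python
# Sketch of the standard 2^(-ΔΔCt) fold-change calculation; Ct values
# below are illustrative only, not data from this study.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target gene (e.g. mIL-12) versus a control
    sample, normalized to a housekeeping reference gene (e.g. mACTB)."""
    d_ct_sample = ct_target_sample - ct_ref_sample   # ΔCt, treated sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl         # ΔCt, control sample
    dd_ct = d_ct_sample - d_ct_ctrl                  # ΔΔCt
    return 2.0 ** (-dd_ct)

# Example: target Ct drops from 30 to 25 cycles relative to the reference,
# corresponding to a 32-fold induction.
print(fold_change(25.0, 20.0, 30.0, 20.0))           # 32.0
```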
Serum and tumor cytokine analyses {#Sec6}
---------------------------------
Murine IL-12-p70 (mIL-12-p70) levels were measured by enzyme-linked immunosorbent assay (ELISA) in sera and in tumor cell lysates. Frozen tumors were lysed with SDS-free Cell Culture Lysis Buffer (Promega, Madison, WI) and then pulverized with a tissue homogenizer, a QIAGEN (Valencia, CA) TissueLyser bead mill, followed by 3× freeze--thaw cycles. The supernatant was removed after centrifugation at \>10,000 × *g* for 5 min and used for cytokine analysis. The mIL-12p70 concentrations in the sera and tumor lysates were measured by ELISA using a Quantikine mIL-12 immunoassay kit (R&D Systems, Minneapolis, MN). Total protein concentration in each tumor lysate was determined using the bicinchoninic acid (BCA) method (Thermo Scientific, Waltham, MA) to calculate cytokine levels in pg/mg of tumor or pg/mL of serum.
Tumor and blood flow cytometry {#Sec7}
------------------------------
Brain tumors or whole blood (50 µL) samples were stained per Flow Contract Site Laboratory standard operating procedures. Briefly, samples were incubated for 30--35 min in the dark at room temperature and then washed twice with 1 mL 1× Permeabilization Buffer. The samples were resuspended in 125 µL of 1× calcium- and magnesium-free Dulbecco\'s phosphate-buffered saline for acquisition on the flow cytometer.
One negative sample (no antibody) was used for gating purposes. Cell populations were determined by electronic gating based on forward versus side scatter. The flow cytometer collected 20,000 CD45+ events in each tube. The CD45+ population was further characterized for T cells (CD3+/CD4+ and CD3+/CD8+), macrophages (CD11b+/Ly6G-/Ly6C-F4/80+), B cells (CD3-CD19+), NK cells (CD49b), T-cell exhaustion (LAG3+) and regulatory T cells (CD4+CD25+FoxP3+). Flow cytometric data acquisition was performed using the FACSCanto™ II flow cytometer (BD Biosciences). Data were acquired using BD FACSDiva™ software (version 8.0.1).
Assessment of plasma, brain and cerebrospinal fluid pharmacokinetics of veledimex in mouse and cynomolgus monkey {#Sec8}
----------------------------------------------------------------------------------------------------------------
Brain tumor, plasma and cerebrospinal fluid (CSF) samples were analyzed for veledimex using a liquid chromatography/tandem mass spectrometry method. Brain tumor samples were homogenized in homogenization solution (10:90/v:v/acetonitrile:0.1% Tween 80 in water) at a ratio of 1 g to 9 mL, and proteins were precipitated. Veledimex concentrations were calculated using quadratic regression with 1/x concentration weighting over a concentration range of 0.0500 to 50 ng/mL, using deuterated (D3) veledimex as an internal standard.
Syngeneic mouse GL-261 orthotopic glioma model {#Sec9}
----------------------------------------------
At 5 days prior to treatment initiation, female C57BL/6 (C57BL/6NHsd) mice 6--7 weeks old (Envigo; Indianapolis, IN) were inoculated intracranially with 3 µL of GL-261 murine glioblastoma tumor cells at 1 × 10^5^ cells/mouse. Implant coordinates were 1 mm anterior and 2 mm lateral to the bregma, at a depth of 3 mm. Buprenorphine (Reckitt Benckiser Healthcare; UK) was administered to the mice approximately 30 min prior to surgery. After 5 days, mice were randomized into one of the treatment groups. For those mice administered Ad-RTS-mIL-12, the vector was administered through the same bur hole at the coordinates above in a constant volume of 3--5 μL. Veledimex was administered via oral gavage once daily for 14 consecutive days.
Data analysis {#Sec10}
-------------
All values are expressed as the mean ± standard error of the mean. Statistical analysis was performed using a one-way analysis of variance with Dunnett's post hoc test to compare differences between the groups versus control. Survival fractions were studied using Kaplan--Meier survival plots followed by log-rank and Gehan--Wilcoxon tests to assess the significance of the differences. All statistical analyses were performed using GraphPad Prism 5 (GraphPad Software, Inc., CA, USA). Differences between groups were considered significant when *P* \< 0.05. Pharmacokinetics and tissue levels of veledimex were determined using the non-compartmental analysis module of the pharmacokinetic software Phoenix 64 WinNonlin.
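As a hedged illustration of the survival analysis described above (GraphPad Prism was the tool actually used), the following pure-NumPy sketch computes a Kaplan--Meier product-limit estimate; the day counts are invented.

```python
import numpy as np

# Minimal Kaplan-Meier product-limit estimator; illustrative only.

def kaplan_meier(time, event):
    """time: days to death or censoring; event: 1 = death, 0 = censored.
    Returns the event times and the survival probability S(t) after each."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    t_unique = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in t_unique:
        at_risk = np.sum(time >= t)              # animals still observed
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return t_unique, np.array(surv)

# Example: six mice, two censored at terminal killing on day 85.
t, s = kaplan_meier([16, 20, 23, 38, 85, 85], [1, 1, 1, 1, 0, 0])
print(dict(zip(t, np.round(s, 3))))              # {16.0: 0.833, 20.0: 0.667, ...}
```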
Results {#Sec11}
=======
Veledimex crosses the blood--brain barrier in mice and cynomolgus monkeys {#Sec12}
-------------------------------------------------------------------------
Veledimex was administered to separate groups of C57BL/6 mice as a single dose via oral gavage (PO) at 225 mg/m^2^. Terminal blood, CSF and brain samples were collected at 1, 2, 4, 6, 24 and 48 h post dose. Following a single oral dose of 225 mg/m^2^ veledimex, C~max~ and AUC~0-t~ in plasma were 4153 ng/mL and 4057 ng h/mL, while the CSF C~max~ and AUC~0-t~ were 14 ng/mL and 466 ng h/mL. For veledimex detected in the brain, C~max~ and AUC~0-t~ were 1794 ng/mL and 25325 ng h/mL. The veledimex C~max~ ratio between CSF and plasma was 0.34%.
A single oral dose of veledimex at 120 mg was administered to six cynomolgus monkeys (3/gender) and plasma and CSF levels were assessed through 48 h post treatment. Since no gender-related differences were observed, the data from male and female animals were pooled. After a single oral dose of 120 mg veledimex, C~max~ and AUC~0-t~ were 327 ± 142 ng/mL and 5887 ± 2203 ng h/mL in plasma and 2.07 ± 0.91 ng/mL and 42.5 ± 17.4 ng h/mL in CSF, respectively. The mean plasma T~max~ was observed at \~4 h post dose, with an elimination t~1/2~ of 25.0 ± 6.1 h. Brain uptake of veledimex was \~1%; the C~max~ and AUC~0-t~ ratios between CSF and plasma were 0.6 ± 0.2% and 0.7 ± 0.2%, respectively.
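For context, AUC~0-t~ values such as those above are conventionally obtained by non-compartmental analysis using the trapezoidal rule (Phoenix WinNonlin was the software actually used); the sketch below illustrates the calculation only, with invented concentrations.

```python
import numpy as np

# Sketch of a linear trapezoidal AUC(0-t) calculation; the sampling times
# mirror the schedule above, but the concentrations are invented.

def auc_0_t(t_hr, conc_ng_ml):
    """Area under the concentration-time curve from 0 to the last sample."""
    return np.trapz(np.asarray(conc_ng_ml, float), np.asarray(t_hr, float))

t = [0, 1, 2, 4, 6, 24, 48]          # hours post dose
c = [0, 300, 327, 250, 180, 30, 5]   # ng/mL (illustrative)
print(f"Cmax = {max(c)} ng/mL, AUC(0-t) = {auc_0_t(t, c):.0f} ng h/mL")
```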
In summary, these data show that orally administered veledimex crosses the blood--brain barrier in both mice and cynomolgus monkeys at sufficient levels to warrant the assessment of Ad-RTS-IL-12+veledimex in glioma.
Demonstration of mechanism of action in vivo {#Sec13}
--------------------------------------------
A series of in vivo mechanistic studies was performed in C57BL/6 mice to demonstrate the ability of Ad-RTS-mIL-12+veledimex to produce local expression of IL-12, as well as to select the optimal vector dose for further in vivo evaluation. In this study, mice were administered Ad-RTS-mIL-12 at 1 × 10^8^ to 5 × 10^9^ vp intracranially at the coordinates stated above, with veledimex administered via oral gavage at a fixed dose of 100 mg/m^2^/day, and brain veledimex, viral copy number and the ability to turn on the switch to elicit local production of IL-12 mRNA and IL-12 protein were assessed. The results of this study demonstrated that increasing doses of the vector in the presence of a fixed dose of the activator ligand, veledimex, elicited a dose-related increase in tumor viral particles, IL-12 mRNA (activating the switch) and localized production of IL-12 (Table [1](#Tab1){ref-type="table"}). The 5 × 10^9^ vp dose was chosen for all subsequent studies.

Table 1. Effects of increasing doses of Ad-RTS-mIL-12 viral particles in the presence of a fixed dose of veledimex.

  Group                   Brain veledimex C~max~ (ng/g)   RTS gDNA (copies/100 ng DNA)   IL-12 mRNA (relative expression)   Tumor IL-12 (pg/mg)
  ----------------------- ------------------------------- ------------------------------ ---------------------------------- ---------------------
  Vehicle+vehicle         --                              0 ± 0                          1 ± 1                              3 ± 1
  V 100 + 1 × 10^8^ vp    515 ± 53                        75 ± 50                        2 ± 1                              0 ± 0
  V 100 + 1 × 10^9^ vp    311 ± 99                        9657 ± 2641                    105 ± 32                           15 ± 6
  V 100 + 5 × 10^9^ vp    378 ± 68                        27511 ± 16109                  300 ± 107                          80 ± 31

*V* activator ligand, veledimex, administered orally at a fixed dose of 100 mg/m^2^/day. All data are presented as group mean ± SEM.
Correlation of veledimex dose and local cytokine production {#Sec14}
-----------------------------------------------------------
We next explored the ability of intratumoral Ad-RTS-mIL-12 at 5 × 10^9^ vp+oral veledimex 1 to 30 mg/m^2^ to locally produce IL-12 and downstream IFN-γ in the tumor, as well as to assess serum IL-12 and IFN-γ levels in the GL-261 orthotopic syngeneic mouse model. In the tumor, there was a dose-related increase in IL-12, with a steep increase in IL-12 between 3 and 10 mg/m^2^ on days 3 and 7. IFN-γ followed a similar trend with the peak increases observed on day 7, demonstrating that IL-12 produced by the vector was biologically active. Serum cytokines followed a similar trend as tumor cytokines with the levels in the serum being approximately 10 times lower than those observed in the tumor (Fig. [1](#Fig1){ref-type="fig"}).Fig. 1Female C57BL/6 mice were inoculated intracranially with GL-261 cells. At 5 days post cell inoculation (termed day 1 for Ad-RTS-mIL-12+veledimex treatment), mice were dosed with Ad-RTS-mIL-12 5 × 10^9^ vp intracranially and approximately 2 h later, veledimex was administered via oral gavage at 1--30 mg/m^2^/day on days 1--14. On days 3 and 7, mice in each group were killed to collect serum and tumor samples for evaluation of IL-12 and IFN-γ levels via ELISA. Each histogram depicts the mean ± standard error (*N* \> 4 per time point). \**P* \< 0.05 versus corresponding vehicle
Ad-RTS-mIL-12+veledimex improves survival {#Sec15}
-----------------------------------------
Since the nonclinical studies demonstrated that Ad-RTS-mIL-12+veledimex was on mechanism, we initiated a series of studies to evaluate the safety and efficacy of Ad-RTS-mIL-12+veledimex in an orthotopic GL-261 glioma syngeneic mouse model. In this model, each C57BL/6 mouse received 1 × 10^5^ GL-261 glioma cells via intracranial injection \~2 mm distal to the intersection of the coronal and sagittal suture. On day 5, the animals were randomly assigned to one of the treatment groups. Animals were monitored for survival.
Treatment with vehicle only resulted in median survival of 23 days (Min:16, Max: 38) (Fig. [2](#Fig2){ref-type="fig"}). Consistent with disease progression, there was a marked reduction in body weight as well as increasing incidence of head tilt, ataxic locomotion, circling movements, cold to touch, emaciation, hunched posture, hypoactivity, leaning, head swelling and tremors leading to moribund killing of the animal. All vehicle-only animals were observed to have tumor present at the time of killing.Fig. 2Intratumor regulated IL-12 gene delivery by Ad-RTS-mIL-12+veledimex improves survival in GL-261 glioma model. 1 × 10^5^ GL-261 glioma cells were administered into the brain. Separate groups of 12 C57BL/6 mice each were randomly assigned to one of the treatment groups. On day 1 animals were administered 1 × 10^5^ GL-261 glioma cells intracranially. On day 5, Ad-RTS-mIL-12 at 5 × 10^9^ vp+veledimex PO at 1--30 mg/m^2^/day PO was administered for 14 consecutive days and the time to disease progression and death was studied. Depicted in the upper panel are the survival results and lower panel are the respective body weights. All values are expressed as the mean for \>12 animals per group. Ad-RTS-mIL-12+veledimex 10 and 30 mg/m^2^ significantly improved overall survival over all other treatment groups; *P* \< 0.05
Ad-RTS-mIL-12 5 × 10^9^ vp + veledimex 1 and 3 mg/m^2^/day only slightly prolonged overall survival with a median overall survival of 36 and 27 days, respectively, when compared to vehicle. In both treatment groups, there were three surviving animals, which were terminally killed on day 85 with minimal tumor burden observed. Increasing doses of veledimex 10 and 30 mg/m^2^/day + Ad-RTS-mIL-12 5 × 10^9^ vp resulted in a significant (*P* \< 0.05) sustained and prolonged survival with a median survival of greater than 85 days and \~66% of the mice surviving to terminal killing at day 85. At terminal killing, most Ad-RTS-mIL-12-treated animals were tumor free. Ad-RTS-mIL-12+veledimex 30 mg/m^2^/day elicited a moderate reduction in body weight during the veledimex dosing period which rapidly rebounded on discontinuance of veledimex (Fig. [2](#Fig2){ref-type="fig"}). In addition, these results were compared to the current and potential standards of care bevacizumab 30 mg/m^2^ biwk ×3, temozolomide 300 mg/m^2^ qdx5 and anti-PD-1 (RMP1-14) 15 mg/m^2^ q4dx5. The anti-PD-1 and temozolomide slightly prolonged median survival to 37 and 33 days, respectively, with 24 and 14% of the animals surviving to terminal killing and all mice having tumor present at terminal killing. Bevacizumab failed to prolong survival when compared to vehicle with a median survival of 20 days and 4% survival to terminal killing (Fig. [2](#Fig2){ref-type="fig"}).
Tumor and blood flow cytometry {#Sec16}
------------------------------
To assess the effect of IL-12 on the tumor microenvironment and the recruitment of effector and regulatory T cells in the GL-261 orthotopic syngeneic mouse model, tumor and blood flow cytometry was performed on days 7 and 14, as well as on day 28 (2 weeks after the last veledimex dose) to assess persistence. In the tumor, at those doses which markedly prolonged survival (Ad-RTS-mIL-12+veledimex 10 and 30 mg/m^2^/day), we observed a sustained increase in tumor cytotoxic T cells (CD3+CD8+) concomitant with sustained reductions in regulatory T cells (Fig. [3](#Fig3){ref-type="fig"}). There was a 2.6- and 6.1-fold increase of tumor cytotoxic T cells (CD3+CD8+) in treated groups compared to the vehicle group on days 7 and 14, respectively. In the efficacious dose groups, there was a 2.6-fold increase of cytotoxic T cells compared to the vehicle group on day 28. At the same time, the percentage of Tregs (CD4+CD25+FoxP3+) was at 0.3- and 0.4-fold of the vehicle level on days 14 and 28, respectively. These changes altered the tumor microenvironment in favor of cytotoxicity, as demonstrated by the increase in the cytotoxic/regulatory T-cell ratio in treated groups during and after treatment. Reductions in tumor NK (CD49+) cells and an increase in T-cell exhaustion (LAG3+) were also observed during the active dosing period (data not shown). In the blood, at Ad-RTS-mIL-12+veledimex 10 and 30 mg/m^2^/day, we observed transient increases in cytotoxic T cells (CD3+CD8+) concomitant with sustained reductions in Tregs (CD4+CD25+FoxP3+) on day 7 (Fig. [3](#Fig3){ref-type="fig"}). There was an \~2-fold increase in cytotoxic T cells as well as in the cytotoxic T cell/Treg ratio in efficacious dose groups on day 7. No clear trend was observed in blood samples on days 14 and 28. The above data indicated that T-cell changes were more robust in the tumor microenvironment than in the whole blood (Fig. [3](#Fig3){ref-type="fig"}).Fig. 3Effect of Ad-RTS-mIL-12 on tumor and blood CD8+ and FoxP3+ T cells. Mice bearing 6-day-old intracranial GL-261 tumors were administered intratumorally a single dose of Ad-RTS-IL-12 5 × 10^9^ vp+once daily orally administered veledimex for 14 consecutive days. Tumor and blood were harvested during the active dosing period and 2 weeks after the last veledimex dose. The tumor and blood were analyzed by flow cytometry for the percentage of cytotoxic T cells (CD8) and regulatory T cells (FoxP3) in the tumor (left) and blood (right). Each histogram is the mean ± SEM for 4 mice. \**P* \< 0.05 versus vehicle on respective days
Demonstration of improved survival upon rechallenge {#Sec17}
---------------------------------------------------
Prior treatment with Ad-RTS-mIL-12+veledimex produced a durable survival response that was superior to standards of care and vehicle controls. To determine if pretreatment with Ad-RTS-mIL-12+veledimex provides sustained benefit upon tumor rechallenge, on day 1, 36 surviving animals previously treated with Ad-RTS-mIL-12 (1 × 10^10^ vp)+veledimex (450 mg/m^2^/day PO) were rechallenged with 1 × 10^5^ GL-261 cells at the site of original implantation. In addition, 12 age-matched (20--23 weeks) C57BL/6 mice were administered 1 × 10^5^ GL-261 cells intracranially and survival was monitored for an additional 73 days. The results of this study clearly show that Ad-RTS-mIL-12+veledimex demonstrated a significant increase in survival with \~90% of the animals surviving until the end of the post-rechallenge monitoring period vs. 50% for the naive age-matched control group (*P* \< 0.05) (Fig. [4](#Fig4){ref-type="fig"}).Fig. 4Intratumor regulated IL-12 gene delivery by Ad-RTS-mIL-12+veledimex induces systemic immune memory in the GL-261 orthotopic mouse glioma model. Surviving animals that had been previously treated with Ad-RTS-mIL-12+veledimex 450 mg/m^2^/day were rechallenged with 1 **×** 10^5^ GL-261 glioma cells at the same coordinates as the prior studies (*N* = 36). The control group consisted of 12 age-matched C57BL/6 mice (20--23 weeks old) inoculated with 1 × 10^5^ GL-261 glioma cells and survival monitored for an additional 73 days. \**P* \< 0.05 versus age-matched control
Discussion {#Sec18}
==========
These studies provide proof of concept that regulated and controlled local IL-12 production can overcome the innate immunosuppression in glioma resulting in tipping the balance via increasing tumor cytotoxic T cells and reducing tumor Treg cells resulting in prolonged survival in an orthotopic model of glioma.
We chose the GL-261 orthotopic model since it closely mimics the human GBM phenotype in that it carries the K-ras oncogene as well as a mutant p53 suppressor gene concomitant with high levels of c-myc \[[@CR29]--[@CR32]\]. In addition, the GL-261 tumor is partially immunogenic as demonstrated by the presence of major histocompatibility complex I with low levels of the costimulatory molecules required for T-cell activation, i.e., MHC II, B7-1 and B7-2 \[[@CR29]\]. Tumors formed from GL-261 cells mimic the same four stages of growth over a 4-week period that is observed in human GBM. Following implantation, there is perivascular organization followed by proliferation, hypoxia, neovascularization and development of necrotic regions. The GL-261 tumors, while invasive, do not metastasize outside the brain and do not spontaneously regress \[[@CR29], [@CR33]\].
IL-12 is a potent immunostimulatory cytokine extensively investigated in cancer therapy, and has been shown to increase survival in rodent glioma models \[[@CR34]--[@CR36]\]. IL-12 facilitates the development of an inflammatory IFN-γ- secreting tumor-specific Th1-type immune response, thereby enhancing tumor cytotoxicity.
Systemic administration of recombinant IL-12 protein in mice has demonstrated a reduction in tumor growth rate coupled with no appreciable toxicity \[[@CR17]\]. However, in phase 1 studies systemic administration of recombinant IL-12 resulted in severe toxicity \[[@CR18]\], thus demonstrating the need to precisely control and locally administer IL-12 at the target site. To assess the ability of localized IL-12 to have an acceptable therapeutic index, Wei et al. \[[@CR37]\] used lentiviral vectors to transduce SCCVII tumor cells to produce local IL-12 at different concentrations. Their results demonstrated that the local delivery of IL-12 is an effective route for overcoming innate tumor immunosuppression. To assess the role of local IL-12 administration in a GL-261 orthotopic model of GBM, Vom Berg et al. \[[@CR38]\] implanted osmotic pumps to ensure continuous dosing of IL-12 to the tumor. They found that local IL-12 administration reversed the GBM-induced immunosuppression, leading to an increase in overall survival. While both approaches demonstrated the ability of local IL-12 to overcome innate tumor immunosuppression, both are cumbersome and make it difficult to precisely control local IL-12 levels in the clinical setting. Intratumoral delivery of DCs allows the capture and presentation of tumor antigens, and DCs have been shown to cause tumor regression in a breast cancer in vivo model, and prolonged survival and immunity to tumor rechallenge in rats implanted with glioblastoma cells \[[@CR39], [@CR34]\]. With this in mind, Komita et al. \[[@CR26]\] studied a conditional IL-12 expression system that is tightly controlled by an orally administered activator ligand. In their study, they transduced DC cells with Ad-RTS-mIL-12 and administered the cells into a B16 melanoma tumor. They observed that the administration of the activator ligand resulted in controlled local production of IL-12 leading to an increase in cytotoxic T cells and tumor regression.
In the present study, we assessed an adenoviral vector that expresses IL-12, i.e., Ad-RTS-mIL-12 or Ad-RTS-hIL-12 under the control of a conditional (regulated) promoter, administered together with an orally bioavailable small molecule activator ligand, veledimex. The RTS^®^ "gene switch" functions as a conditional (regulated) promoter of transgene expression which can be controlled by a small molecule ligand. RTS-inducible transgene expression is "off" in the absence of veledimex, whereas transgene expression is turned "on" by the administration of veledimex. In the GL-261 orthotopic model of glioma, we demonstrated a dose-related increase in tumor IL-12 and downstream IFN-γ, demonstrating that the IL-12 produced was biologically active.
Successful immunotherapy involves overcoming the immunosuppressive tumor environment as demonstrated by low levels of cytotoxic T cells and increased Tregs. We have shown that Ad-RTS-mIL-12+veledimex elicited a dose-related increase in tumor IL-12, which in turn produced a dose-related increase in CD3+CD8+ cytotoxic T cells concomitant with a reduction in FoxP3+ regulatory T cells. On day 14, cytotoxic T cells were increased \~6-fold over vehicle at the efficacious veledimex doses of 10 and 30 mg/m^2^. The increase persisted for at least 14 days after the last dose of veledimex. Concomitant with the increase in cytotoxic T cells there was a marked decrease in Tregs, thus demonstrating that Ad-RTS-mIL-12+veledimex could stimulate and restore local immune function in a dose-related manner. Numerous investigators have also demonstrated that tumor-infiltrating T cells and, more importantly, the ratio of cytotoxic T cells/Tregs provide a favorable prognostic marker to predict the success of immunotherapies in breast \[[@CR40]\], ovarian \[[@CR41]\], melanoma \[[@CR42]\] and glioma \[[@CR43]--[@CR45]\]. In the present study, we also observed transient increases in the blood CD3+CD8+/FoxP3 ratio, which may be useful as a surrogate marker for efficacy. However, further studies are required to establish its role.
The local increase in IL-12 coupled with the local restoration of the immune system resulted in a concomitant increase in long-term survival without significant adverse events. At day 85 (study termination), over 95% of the Ad-RTS-IL-12+veledimex-treated animals were tumor free. In contrast, bevacizumab, temozolomide and anti-PD-1 therapy only slightly prolonged survival. Similar survival benefit was observed by others \[[@CR44]--[@CR46]\]. To assess the ability of Ad-RTS-IL-12+veledimex to provide sustained benefit on rechallenge, a group of surviving animals were readministered 1 × 10^5^ GL-261 tumor cells 100 days after Ad-RTS-IL-12+veledimex. We observed \~90% of the animals surviving until the end of the 73-day post-rechallenge monitoring period vs. 50% for the naive control group, thus demonstrating the durable immune memory induced by intratumoral vector administration and oral veledimex.
In summary, our results demonstrate that the localized delivery of IL-12 encoded by the replication-incompetent adenoviral vector Ad-RTS-IL-12, and controlled by the oral activator veledimex is an effective gene and immunotherapeutic strategy in preclinical studies. We have demonstrated that this therapy induced localized controlled production of IL-12 which correlates with an increase in tumor-infiltrating lymphocytes leading to the desired biologic response of tumor growth inhibition and regression. These results demonstrate the need to study Ad-RTS-IL-12+veledimex in the patients with glioma. Indeed, clinical trials based on these data have been initiated in the treatment of glioma (NCT02026271) via direct intratumoral injection of adenoviral particles carrying a gene switch and the human IL-12-p70 transgene and oral administration of veledimex to produce hIL-12. The preliminary results of this study are encouraging \[[@CR47]\]. Thus, the local tightly controlled production of IL-12 has the potential to significantly expand the cancer immunotherapeutic armamentarium.
Author contributions {#FPar1}
====================
Conception and design: JAB, HC, JM, PDK, LJNC and FL; development of methodology: JAB, HC, JM and PG; acquisition of data (provided animals, acquired and managed patients, provided facilities, etc.): JAB, HC, JM, PG, JD-H, GS; analysis and interpretation of data (e.g., statistical analysis, biostatistics, computational analysis): JAB, HC, JM, PDK, PG, TC, LJNC and FL; writing, review and/or revision of the manuscript: JAB, HC, JM, PDK, PG, JD-H, GS, TC, LJNC and FL.
Conflict of interest {#FPar2}
====================
JAB, HC, JM, PDK, LJNC and FL are all employees of Ziopharm Oncology, Inc. TC is an employee of Intrexon, Inc.
Avocado Mango and Sprout Salad with
Fat-Free Sweet and Spicy Mango Dressing
This is a longtime favorite salad that, of course, had to go in Easy Affordable Raw. Sunflower sprouts add a fresh crunch, and the avocados and mushrooms add substance to this
hearty salad that is perfect for either lunch or dinner.
For the Salad
1 head Romaine lettuce, torn into pieces
1 avocado, sliced
1 cup (70 g) sliced mushrooms
1½ cups (265 g) mango chunks
3 scallions, sliced
½ cup (50 g) walnuts
1 cup (45 g) sunflower sprouts
(See how-to on page 40.)
For the Dressing
1 cup (175 g) chopped mango
½ cup (35 g) sun-dried tomatoes, soaked in
water for 30 minutes (See how-to on page 44.)
6 dates, soaked in water for 30 minutes
2 tablespoons (30 ml) agave
1 teaspoon minced fresh garlic
1 tablespoon (15 ml) apple cider vinegar
1 cup (235 ml) water
1 teaspoon ground paprika
¼ to ½ teaspoon cayenne pepper
1 tablespoon (10 g) minced onion
½ teaspoon salt
½ teaspoon black pepper
To make the salad: Assemble the salad by making a bed of Romaine on two salad plates and arranging the remaining salad ingredients on top.
To make the dressing: Place all the dressing ingredients in a blender and puree until very smooth. Drizzle over the salad when ready to serve.
Leftover dressing can be stored in a covered container in the refrigerator for up to a few days.
YIELD: 2 LARGE SALADS AND ABOUT 2½ CUPS (590 ML) DRESSING
Easy Affordable Raw: How to Go Raw on $10 a Day has over 100 raw recipes made with ingredients found at most local groceries. No exotic or hard to find things! Get your own copy at the link above.
---
abstract: 'We search for persistent and quasi-periodic release events of streamer blobs during 2007 with the Large Angle Spectrometric Coronagraph on the *Solar and Heliospheric Observatory* and assess the velocity of the slow solar wind along the plasma sheet above the corresponding streamer by measuring the dynamic parameters of blobs. We find 10 quasi-periodic release events of streamer blobs lasting for three to four days. On each day of these events, we observe three to five blobs. The results are in line with previous studies using data observed near the last solar minimum. Using the measured blob velocity as a proxy for that of the mean flow, we suggest that the velocity of the background slow solar wind near the Sun can vary significantly within a few hours. This provides an observational manifestation of the large velocity variability of the slow solar wind near the Sun.'
author:
- |
H.Q. $^{1,2}$, Y. $^{1}$, K. $^{3}$,\
S.W. $^{1}$, L.D. $^{1}$
title: 'Quasi-Periodic Releases of Streamer Blobs and Velocity Variability of the Slow Solar Wind near the Sun'
---
Introduction
============
Sheeley *et al*. (1997) were the first to report the observation of plasma blobs, released from the tips of streamers as revealed in the data obtained by the Large Angle Spectrometric Coronagraph (LASCO) on the *Solar and Heliospheric Observatory* (SOHO) spacecraft (Brueckner *et al*., 1995) around the last solar minimum. According to the data analysis by Sheeley *et al*. (1997) and a series of following studies by Wang *et al*. (1998, 2000), blobs emerge at about 2-4 Solar Radii ($R_\odot$) from the Sun center as radially elongated structures with initial sizes being about 1 $R_\odot$ in the radial direction and 0.1 $R_\odot$ in the transverse direction. They move outward radially, maintaining an almost constant angular span, and their lengths increase from $\approx$1 $R_\odot$ to $\approx$3 $R_\odot$ within the LASCO field of view (FOV). Their velocities also increase gradually with increasing length.
Wang *et al*. (1998) reported a very interesting event with steady quasi-periodic releases of blobs above a streamer during the eight days from 19 to 26 April, 1997. The daily rate of blobs in this event is observed to be three to five, with a release period ranging from five to eight hours. To interpret the formation and quasi-periodic releases of blobs, Chen *et al*. (2009) designed a numerical model accounting for the magnetohydrodynamic coupling process between the closed streamer magnetic arcades and the solar wind expansion. They found that the streamer-cusp geometry is subject to an intrinsic instability originating from the peculiar magnetic topological feature at the cusp region despite the long-term stability of the overall morphology. According to Chen *et al*. (2009), the process of the instability consists of two successive processes. One is the plasma magnetic-field expansion through the localized cusp region where the field is too weak to maintain plasma confinement; the continuing expansion brings strong velocity shear into the slow wind regime, providing the free energy necessary for the onset of a streaming sausage mode instability (Lee, Wang, and Wei, 1988; Wang, Li, and Wei, 1988). The other is then the onset and nonlinear development of the streaming instability, which causes pinches of magnetic-field lines and drives reconnections at the pinching points to form separated magnetic blobs. After the birth of a blob, the streamer system returns to the configuration with a lower cusp point, subject to another cycle of the instability. As mentioned, the whole process originates from the topological feature at the cusp region, which is intrinsically associated with a typical coronal streamer; therefore Chen *et al*. (2009) made use of the word "intrinsic" to describe the streamer instability. We point out in passing that other numerical models demonstrating various aspects of streamer instabilities exist in the literature (*e.g.*, Suess, Wang, and Wu, 1996; Wu *et al*., 2000; Endeve, Leer, and Holzer, 2003; Endeve, Holzer, and Leer, 2004). According to the numerical results given by Chen *et al*. (2009), the period of blob formation is about four to five hours. Thus, hypothetically, one can observe four to six blobs per day on average, in agreement with what is observed by Wang *et al*. (1998). We find this agreement with the observations to be very encouraging considering the simplicity of the numerical model of Chen *et al*. (2009). However, in the series of blob studies by Wang *et al*. (1998, 2000), only a few events with continuous and quasi-periodic releases of blobs are reported (in April 1997 and December 1998). If the scenario proposed by Chen *et al*. (2009) is basically correct, namely that the blobs are the aftermath of an intrinsic instability of coronal streamers with release periods of several hours, there should exist more events with steady blob releases. It is the primary purpose of this paper to search for events similar to those reported by Wang *et al*. (1998) with the LASCO data. As a starting point, we only deal with the data accumulated in the whole year of 2007 in this paper.
As mentioned previously, the velocity measurement of blobs can be used as a proxy for that of the embedded slow solar wind along the plasma sheet. This argument is further supported by a recent numerical calculation presented in Chen *et al*. (2009). They show that, as a result of the dynamical coupling to the mean flow, the blobs are basically accelerated to the same velocity after they further propagate a distance of 2-3 $R_\odot$ from the disconnection point. Therefore, in general, beyond a certain heliocentric distance of, say, 5 to 7 $R_\odot$, the background solar wind velocity can be well represented by that of the blobs. However, most blobs are too weak to be observable beyond 20 $R_\odot$ by the LASCO C3 coronagraph. Therefore, the major region where this method is usable is limited to 4-20 $R_\odot$. At present, there are only a few other indirect techniques, such as the Doppler dimming technique (*e.g.*, Li *et al*., 1998; Cranmer *et al*., 1999; Strachan *et al*., 2002) and the IPS (Interplanetary Scintillation) technique (*e.g.*, Grall *et al*., 1996; Breen *et al*., 1999), that can be used to determine the wind velocity within the first 20 $R_\odot$ of the corona. For instance, one can use the measured ratio of the O VI doublet to evaluate the outflowing velocities of O$^{5+}$ ions. The velocities obtained by both the Doppler dimming and IPS techniques are usually model dependent with large errors. As mentioned, the presence of blobs provides another velocity diagnostic technique of the solar wind in the corona. This can be referred to as the blob technique. Among the various methods of velocity measurement in the corona, the blob technique may serve as the most accurate, at least in cases where blobs are clearly measurable. One serious limitation of this method is that only the projected velocity of the solar wind along the plasma sheet can be revealed. Also, it should be noted that the blob technique is based on the general assumption that blobs can be taken as effective velocity tracers of the mean flow. Nevertheless, the second purpose of this paper is to examine the velocity of the solar wind along the plasma sheet, which is usually regarded as a part of the source region of the slow solar wind, as mentioned previously. The details of our observations and results are described in the following section. The summary and discussion are provided in the last section of this paper.
Observations and Results
========================
As already mentioned, one of the main purposes of this paper is to search for steady release events of blobs. To investigate the quasi-periodic character of blob emissions, we need to observe enough blobs emitted above a streamer. Therefore, only those events with emission lasting for at least three days are reported in this study. By examining all of the white-light data taken by the LASCO coronagraph in 2007, we have identified 10 events with steady emission of blobs lasting for three to four days. Some information about these 10 events is listed in Table 1, where the number in the first column indicates the time sequence of the blob emission. In the remaining columns, we list the start and end dates of the events, the position angle (PA) of the central axis of the streamers from which blobs are released, the total number of blobs and the average daily rate released during the event, and the minimum and maximum values of the blob velocities at a specific height, say, 9 $R_\odot$. The PA increases counterclockwise, taken to be zero in the northward direction. The varying ranges of the deduced blob accelerations are also presented in the last column of this table. The velocities and accelerations given in this table are projected quantities on the plane of the sky, which are obtained with a second-order polynomial fitting to the measured blob tracks. The details of our data reduction method will be introduced as we proceed.
----- -------------- ----------------- ------------------ ------------------------------ --------------------
No. Observation PA ($^{\circ}$) Total number Velocity range Acceleration
date /avg. daily rate at 9 $R_\odot$ (km s$^{-1}$) range (m s$^{-2}$)
1 Feb 14-16 103 11/3.7 183-356 3.6-14.2
2 Apr 04-07 248 12/3 191-335 1.1-13.8
3 Apr 25-27 288 9/3 197-299 1.3-6.2
4 Apr 30-May 2 246 9/3 169-298 0.6-8.0
5 May 07-09 71 10/3.3 240-400 3.6-18.2
6 Jun 05-08 67 13/3.3 173-303 2.6-11.4
7 Jun 13-15 119 12/4 228-379 2.2-17.8
8 Jun 30-Jul 2 69 10/3.3 162-287 2.2-11.2
9 Jul 18-20 106 11/3.7 200-334 3.9-10.4
10 Sep 27-29 244 9/3 192-286 2.2-11.0
----- -------------- ----------------- ------------------ ------------------------------ --------------------
: Information on the 10 events with quasi-periodic releases of blobs lasting for three-four days observed in 2007.
The blob structures are only marginally brighter than the background coronal emission, as seen from the white-light brightness and polarization measurements by LASCO (Sheeley *et al*., 1997; Wang *et al*., 1998). Therefore, it is generally difficult to recognize a blob from the original coronagraph images. The usual way to emphasize the blob features is to make running-difference images by subtraction of two successive images taken tens of minutes to one hour apart in time. After this procedure, the blob structures are more easily identified. They reveal themselves as radially elongated white-leading-black bipolar islands. The white (black) color indicates a brightness increase (decrease) in the corresponding region during the elapsed interval. In the following discussion, we first introduce our data analysis method by presenting two examples observed during 13 to 15 June and during 30 June to 2 July, which are the seventh and eighth events listed in Table 1 and are referred to as Event A and Event B, respectively.
The two white-light images shown in Figures 1(a) and 1(b) are recorded at 05:18 UT on 15 June and at the same time on 1 July, where the white circle represents the surface of the Sun and the one-quarter solid disk is where the LASCO C3 occulting disk is located. The size of each image is 30 $R_\odot$ along the horizontal direction and 15 $R_\odot$ along the vertical direction. The standard routines provided with the SolarSoft software (http://www.lmsal.com/solarsoft/) are used to produce these images. A background representing the contribution of the F corona and instrumental stray light has been subtracted from each image. It can be seen that a well-defined streamer exists at the southeastern part (PA$=$119$^{\circ}$) and the northeastern part (PA$=$69$^{\circ}$) in Figures 1(a) and 1(b), respectively. The blob structures that we are interested in are emitted right atop these two streamers. To recognize the blobs clearly, in Figures 1(c) and 1(d) we present two running-difference images obtained by subtracting the images taken one hour earlier from those shown in Figures 1(a) and 1(b). The blob structures are indicated with white arrows. In Figure 1(d), two blobs are emitted successively from the streamer. To view more blob events simultaneously in one figure, we produce the temporal evolutionary map, which is the stacked time series of radial strips centered along the corresponding streamer stalk in the running-difference images. This method has been used in previous blob studies (*e.g.*, Wang *et al*., 1998; Wang *et al*., 2000). The width of the narrow strip is taken to be about 6 pixels, and its height is given by the C3 FOV. Such height-time maps are presented in Figures 1(e) and 1(f) for the two blob events, where the abscissa represents the time of observation and the ordinate the height of the strips.
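To make the two processing steps above concrete, the following is a minimal C++ sketch of the running-difference and strip-stacking operations. It assumes the coronagraph frames are already loaded as 2D float arrays with the streamer stalk aligned along an image column; the frame loader, array sizes, and single-column stalk are simplifying assumptions of this sketch, while the 6-pixel strip width follows the text.

#include <vector>

using Image = std::vector<std::vector<float>>;  // [row = height][column]

// Running difference: the current frame minus the frame taken one step earlier.
Image runningDifference(const Image& curr, const Image& prev) {
    Image out(curr.size(), std::vector<float>(curr[0].size()));
    for (size_t y = 0; y < curr.size(); ++y)
        for (size_t x = 0; x < curr[0].size(); ++x)
            out[y][x] = curr[y][x] - prev[y][x];
    return out;
}

// Extract a strip of width w centered on column c (assumed >= w/2), averaged
// across the width; the result is brightness versus height at one time step.
std::vector<float> radialStrip(const Image& img, size_t c, size_t w) {
    std::vector<float> strip(img.size(), 0.0f);
    for (size_t y = 0; y < img.size(); ++y) {
        for (size_t x = c - w / 2; x < c - w / 2 + w; ++x) strip[y] += img[y][x];
        strip[y] /= static_cast<float>(w);
    }
    return strip;
}

// Stack the strips of successive difference images: row t of the map is the
// strip at time step t, so plotting the map gives time along one axis and
// height along the streamer stalk on the other.
Image buildHeightTimeMap(const std::vector<Image>& frames, size_t stalkColumn) {
    Image map;
    for (size_t t = 1; t < frames.size(); ++t)
        map.push_back(radialStrip(runningDifference(frames[t], frames[t - 1]), stalkColumn, 6));
    return map;
}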
It is obvious that the outward-moving blob structures are represented as white-black tracks in these height-time maps. By counting the number and deducing the slope of these tracks, we can easily obtain the daily rate and the velocity profiles of the blob structures. Note that only the data obtained by LASCO C3 are analyzed in this paper. The reason for excluding the C2 data is twofold. First, the blobs are observed initially near the streamer tips, which are generally located at about 2 to 4 $R_\odot$ in the middle part of the C2 FOV. At this height, it is generally difficult to discern the blob structures even from the running-difference images, since the intensity of the background streamer emission is relatively strong. The visibility of blobs improves once they enter the C3 FOV, which starts at 3.7 $R_\odot$. Second, the C3 FOV already covers the outer part of the C2 FOV, and the main purpose of this study, namely to search for persistent and quasi-periodic blob release events and to determine the associated solar wind velocity, can be adequately fulfilled using the C3 data alone.
As can be seen from Figures 1(e) and 1(f), there are a total of 12 blobs observed during the three days from 13 to 15 June and 10 blobs from 30 June to 2 July, with average daily rates of 4 and 3.3, respectively. By fitting the apparent blob tracks with a second-order polynomial of the form $r=r_{0}+v_{0}t+\frac{1}{2}at^{2}$, where $r_{0}$ and $v_{0}$ represent the heliocentric distance and speed at the starting point of the selected event, the constant acceleration $a$ can be determined from the quadratic fit. The temporal derivative of this equation gives the expression for the fitted blob speed as $v=v_{0}+at$.
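The fitting step can be made concrete with a minimal C++ sketch of the second-order least-squares fit, using the basis $\{1, t, t^{2}/2\}$ so that the fitted coefficients are directly $(r_{0}, v_{0}, a)$. The $(t, r)$ samples below are hypothetical illustration values, not measurements from this study, which presumably used its own fitting tools.

#include <array>
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Solve a 3x3 linear system, given as an augmented matrix [A | b], by Gaussian
// elimination with partial pivoting.
std::array<double, 3> solve3(std::array<std::array<double, 4>, 3> M) {
    for (int col = 0; col < 3; ++col) {
        int pivot = col;
        for (int row = col + 1; row < 3; ++row)
            if (std::fabs(M[row][col]) > std::fabs(M[pivot][col])) pivot = row;
        std::swap(M[col], M[pivot]);
        for (int row = col + 1; row < 3; ++row) {
            double f = M[row][col] / M[col][col];
            for (int k = col; k < 4; ++k) M[row][k] -= f * M[col][k];
        }
    }
    std::array<double, 3> x{};
    for (int row = 2; row >= 0; --row) {
        double s = M[row][3];
        for (int k = row + 1; k < 3; ++k) s -= M[row][k] * x[k];
        x[row] = s / M[row][row];
    }
    return x;
}

int main() {
    // Hypothetical height-time samples of one blob track (t in hours, r in solar radii).
    std::vector<double> t = {0.0, 2.0, 4.0, 6.0, 8.0, 10.0};
    std::vector<double> r = {4.0, 5.1, 6.5, 8.1, 9.9, 12.0};

    // Least squares with basis {1, t, t^2/2}: accumulate the normal equations
    // sum_i phi_j(t_i) phi_k(t_i) x_k = sum_i phi_j(t_i) r_i, with x = (r0, v0, a).
    std::array<std::array<double, 4>, 3> M{};
    for (size_t i = 0; i < t.size(); ++i) {
        double phi[3] = {1.0, t[i], 0.5 * t[i] * t[i]};
        for (int j = 0; j < 3; ++j) {
            for (int k = 0; k < 3; ++k) M[j][k] += phi[j] * phi[k];
            M[j][3] += phi[j] * r[i];
        }
    }
    std::array<double, 3> x = solve3(M);
    std::printf("r0 = %.3f Rsun, v0 = %.3f Rsun/h, a = %.4f Rsun/h^2\n", x[0], x[1], x[2]);
    // The fitted speed profile is the time derivative v(t) = v0 + a*t.
    for (double ti : t)
        std::printf("t = %4.1f h, v = %.3f Rsun/h\n", ti, x[1] + x[2] * ti);
    return 0;
}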
The fitted velocity profiles as a function of heliocentric distance are plotted in Figures 1(g) and 1(h) for the two events discussed. Different symbols represent the velocities of different blobs; the numbers before the symbols are ordered according to the temporal sequence of the blob occurrence. As mentioned previously, the blob speed can be used as a proxy for that of the mean solar wind projected onto the sky plane beyond a heliocentric distance of about 5-7 $R_\odot$. We see that for most distances involved in Figures 1(g) and 1(h) the symbols can be regarded as velocities for both the blobs and the associated solar wind along the streamer stalks. The velocities increase gradually with increasing distance from 3.7 to 20 $R_\odot$. Also, it can be seen that the velocities at a fixed distance vary significantly from blob to blob. To indicate this, in Table 1 we present the varying ranges of the blob velocities at 9 $R_\odot$ for all events. We see that, for Event A, the minimum and maximum of the blob (or the solar wind) velocities at 9 $R_\odot$ are 228 and 379 km s$^{-1}$, respectively. The relative velocity variation, defined as $(v_{\rm max}-v_{\rm min})/v_{\rm min}$, is therefore 66% for this event and 77% for Event B.
To reveal more details of the velocity variability, in Figure 2 we plot the fitted velocities at three heliocentric distances of 6 $R_\odot$ (squares), 9 $R_\odot$ (circles), and 12 $R_\odot$ (triangles) for Events A \[Figure 2(a)\] and B \[Figure 2(b)\]. The abscissa of this figure is the time starting from 0 UT of the first day of the event. It can be clearly seen that the speeds of different blobs at a fixed distance vary significantly with time. There are two possible physical causes accounting for such large temporal velocity variations at a fixed distance. The first is the variation of the velocity of the local solar wind plasma, and the second is the change of the projection angle caused by solar rotation during an event. We see that there is no apparent regular pattern governing the velocity variations at the three distances. Moreover, large velocity variations can take place within a few hours. For example, for the first two blobs shown in Figure 2(a) the velocity decreases abruptly from 430 to 280 km s$^{-1}$ at 12 $R_\odot$ and from 355 to 242 km s$^{-1}$ at 9 $R_\odot$. The two blobs are separated temporally by several hours. In such a short time, the effect of solar rotation on the projection angle is basically negligible. Besides, if the temporal change at a certain distance were caused by the projection effect, the velocity would tend to either vary monotonically or first increase and then decrease. Therefore, we suggest that the velocity change presented in Figure 2 is mainly attributable to the velocity variability of the local solar wind plasma. It is well known that large velocity variability is one of the most apparent characteristics of the slow solar wind (*e.g.*, McComas *et al*., 2000). It has already been mentioned in the previous section that the wind along the plasma sheet above a streamer is usually regarded as one source of the slow solar wind; therefore, it is reasonable to deduce that this study provides an observational manifestation of the large velocity variability of the slow solar wind near the Sun.
Using exactly the same method of data reduction as for these two events, we examine the other eight events listed in Table 1. The PA of the streamer axis and the total number and average daily rate of blobs in each event are shown in the third and fourth columns of Table 1. The obtained height-time maps showing the blob tracks are presented in Figure 3. The time of observation is taken as the abscissa, and the height of the radial strips cropped from the series of running-difference images is shown as the ordinate, the same as in Figures 1(e) and 1(f). We can see that the most apparent common feature of the eight panels in Figure 3 is the persistent and quasi-periodic distribution of the white-black blob tracks. During each day of these events, we observe about three to five blobs released along the stalk of the corresponding streamer. The time span between two adjacent blobs is about five to eight hours. Note that a coronal mass ejection (CME) event was observed by LASCO C3 from 04:42 UT on 9 May, whose white-black track is noticeably brighter than those of nearby blobs. For every blob release event, we have double-checked the white-light images of LASCO C3 to determine whether the white-black tracks in both Figure 3 and Figure 1 are caused by small-scale blob events or by large-scale eruptive events. It is found that all the tracks are caused by small-scale blob events except the one mentioned here, which is not included in our statistics for the blobs.
In Figure 4, we plot the velocity profiles of all the blobs shown in Figure 3. The velocities are obtained by the same method as for Events A and B. Similarly to Figures 1(g) and 1(h), the velocities of different blobs are represented with different symbols, and the numbers in front of the symbols represent the temporal order of the blob occurrence. It can be seen that, in all eight events, the blob velocities can vary significantly on a time scale of several hours to a few days. Again, there are no apparent patterns governing the velocity variations. For instance, from 14 to 16 February, with 11 blobs detected, the velocities at a fixed distance, say, 9 $R_\odot$, vary dramatically within a few hours from blob to blob. Specifically, the velocities of the first six blobs are 356, 229, 313, 326, 253, and 223 km s$^{-1}$. As suggested previously, such large velocity variability should be taken as a consequence of the temporal evolution of the velocity of the local slow solar wind. In other words, the data analysis results shown in both Figure 1 and Figure 3 may provide observational evidence for the presence of large velocity variability of the slow wind near the Sun. Note that the varying ranges of the fitted blob velocities at 9 $R_\odot$ are given in the fifth column of Table 1. In addition, the sixth column of this table presents the minimum and maximum values of the fitted acceleration, which also varies significantly from blob to blob. From the analysis, we suggest that the slow solar wind near the Sun flows outward from its source region with both a highly variable speed and a highly variable acceleration. It is apparent that both of these aspects may contribute to the large velocity variability of the slow wind observed *in situ* at much greater distances. We point out in passing that the PAs of the 10 streamers used in this study are distributed over a wide range, from 67 to 288 degrees.
For all the blobs observed in the 10 events listed in Table 1, we plot the velocity versus height profiles in Figure 5. It can be seen that the blobs generally accelerate gradually within the LASCO C3 FOV. Their velocities increase slowly from 50-150 km s$^{-1}$ at 3.7 $R_\odot$ to 350-450 km s$^{-1}$ at 20 $R_\odot$. These statistical results are in full agreement with previous results by Wang *et al*. (1998) using data observed near the last solar minimum. We expect that more persistent and quasi-periodic blob release events will be revealed in the future.
Summary and Discussion
======================
In this paper we have examined the LASCO C3 data obtained in 2007 and found 10 persistent and quasi-periodic blob release events lasting for three to four days. The average daily rate of blobs is found to be three to five, in agreement with previous studies for the last solar minimum. It is found that the velocities of blobs vary significantly from blob to blob over a time scale of several hours to a few days. Taking the fitted blob speed beyond a certain distance as a proxy for that of the mean flow, we suggest that the large velocity variability, one of the most apparent signatures of the slow solar wind observed *in situ*, may develop near the Sun, say, within the first tens of solar radii.
Sheeley, Wang, and coauthors (Sheeley *et al*., 1997; Wang *et al*., 1998; Wang *et al*., 2000) reported a few persistent blob release events around the last solar minimum. To interpret such steady blob releases from the tip of a streamer, Chen *et al*. (2009) proposed that the closed magnetic field geometry associated with a streamer cusp can become unstable to the expansion of the hot coronal plasmas, which results in a so-called intrinsic instability of coronal streamers and the formation of blobs. For more details of this process, refer to the first section of this paper or to Chen *et al*. (2009). The modeled number density and velocity signatures, and even the daily rate of blobs, are in agreement with previous observations. However, it is also apparent that not all streamers are associated with blobs. There are several possible reasons for this: *i*) The excitation and nonlinear development of the aforementioned instability require specific physical conditions that build up over time. Blobs are not released (in other words, the instability does not develop, or does not develop fully) if the required conditions are not fulfilled or if the development is disturbed by other coronal activities such as CMEs. *ii*) The brightness of the blob structures is only marginally higher than that of the background plasmas, so some blobs, even if released, are not observable owing to the limited resolution of current coronagraphs and interference from instrumental backgrounds (*e.g.*, stray light). *iii*) The blob signature may be obscured by other structures or eruptive phenomena in the foreground or the background corona along the line of sight.
The measurements of the dynamical parameters of the blob structures provide important complements to the other state-of-the-art techniques aimed at velocity diagnostics of the solar wind near the Sun. It may be expected that a distribution map of the solar wind velocities in the outer corona can be coarsely delineated with enough data accumulated. Although the flow velocity along the streamer stalk is provided only within a height ranging from a few solar radii to about 20 $R_\odot$, it is still useful for constraining the solar wind condition in the outer corona. These constraints may help in establishing the background conditions that can be used in models for CME initiation and propagation, as well as for some space weather forecasting models.
From Figure 2, we see that velocities of successive blobs at a fixed distance can vary significantly within a few hours. Assuming streamer blobs to be velocity tracers of the slow solar wind along the plasma sheet, we therefore deduce that the large velocity variability observed *in situ* in the slow solar wind is already manifested near the Sun. It is a good question to ask how the blob velocity variability compares with that of the slow solar wind. To address this question, we examined the solar wind velocity data obtained by, say, the *Ulysses*/SWOOPS instrument and found that large velocity variations similar to those presented in Figure 2 are not unusual. However, we point out that such comparisons should be conducted very carefully to reach a physically meaningful conclusion. This is mainly because the solar wind plasma travels a large distance from its source region to the point of *in situ* measurement. The original velocity profiles as revealed by the blob observations may undergo significant changes caused by intrinsic dynamical evolution and by coupling processes with nearby solar wind plasmas. The plasmas and magnetic structures associated with eruptive transient events, such as magnetic clouds, may also contribute to reshaping the solar wind velocity profiles. Therefore, comparison between the blob variability and the slow wind variability is in general not a trivial task, and so will not be discussed further here.
Another very interesting and meaningful study would be to search for the counterpart of the blob structures in interplanetary space with *in situ* data. As mentioned in the introduction, there are models that suggest the blobs originate from closed-field regions below the streamer cusp or along the current sheet in the open magnetic geometry; therefore, the determination of the *in situ* blob counterpart will be helpful for discriminating between the different formation mechanisms of blobs and for assessing plasma properties in the region near the streamer cusp. Many spacecraft, such as *Ulysses*, SOHO, *Wind*, ACE, as well as the recently launched STEREO (Kaiser *et al*., 2008; Galvin *et al*., 2008), have already accumulated enough data appropriate for this study. The *in situ* counterpart of a blob could be recognized by examining the elemental composition and abundance, ionic temperature, and charge-state distribution, as well as the magnetic-field geometry of the structures carried by the solar wind. Such a study should be conducted in the future.
The SOHO/LASCO data used here are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut für Aeronomie (Germany), Laboratoire d’Astronomie (France), and the University of Birmingham (UK). SOHO is a project of international cooperation between ESA and NASA. This work was supported by grants NNSFC 40774094, 40825014, 40890162, and NSBRSF G2006CB806304 and by the Specialized Research Fund for State Key Laboratory of Space Weather in China. H.Q. Song is grateful to C.L. Shen, X.H. Zhao, and H.D. Chen for their assistance in preparing this paper.
Breen, A.R., Mikic, Z., Linker, J.A., Lazarus, A.J., Thompson, B.J., Biesecker, D.A., Moran, P.J., Varley, C.A., Williams, P.J.S., Lecinski, A.: 1999, *J. Geophys. Res.* **104**, 9847.
Brueckner, G.E., Howard, R.A., Koomen, M.J., Korendyke, C.M., Michels, D.J., Moses, J.D., Socker, D.G., Dere, K.P., Lamy, P.L., Llebaria, A., *et al*.: 1995, *Solar Phys.* **162**, 357.
Chen, Y., Li, X., Song, H.Q., Shi, Q.Q., Feng, S.W., Xia, L.D.: 2009, *Astrophys. J.* **691**, 1936.
Cranmer, S.R., Kohl, J.L., Noci, G., Antonucci, E., Tondello, G., Huber, M.C.E., Strachan, L., Panasyuk, A.V., Gardner, L.D., Romoli, M., *et al*.: 1999, *Astrophys. J.* **511**, 481.
Einaudi, G., Boncinelli, P., Dahlburg, R.B., Karpen, J.T.: 1999, *J. Geophys. Res.* **104**, 521.
Endeve, E., Holzer, T.E., Leer, E.: 2004, *Astrophys. J.* **603**, 307.
Endeve, E., Leer, E., Holzer, T.E.: 2003, *Astrophys. J.* **589**, 1040.
Galvin, A.B., Kistler, L.M., Popecki, M.A., Farrugia, C.J., Simunac, K.D.C., Ellis, L., Möbius, E., Lee, M.A., Boehm, M., Carroll, J., *et al*.: 2008, *Space Sci. Rev.* **136**, 437.
Grall, R.R., Coles, W.A., Klinglesmith, M.T., Breen, A.R., Williams, P.J.S., Markkanen, J., Esser, R.: 1996, *Nature* **379**, 429.
Habbal, S.R., Woo, R., Fineschi, S., O’Neal, R., Kohl, J., Noci, G., Korendyke, C.: 1997, *Astrophys. J.* **489**, L103.
Kaiser, M.L., Kucera, T.A., Davila, J.M., St. Cyr, O.C., Guhathakurta, M., Christian, E.: 2008, *Space Sci. Rev.* **136**, 5.
Lapenta, G., Knoll, D.A.: 2005, *Astrophys. J.* **624**, 1049.
Lee, L.C., Wang, S., Wei, C.Q.: 1988, *J. Geophys. Res.* **93**, 7354.
Li, X., Habbal, S.R., Kohl, J.L., Noci, G.: 1998, *Astrophys. J.* **501**, L133.
McComas, D.J., Barraclough, B.L., Funsten, H.O., Gosling, J.T., Santiago-Muñoz, E., Skoug, R.M., Goldstein, B.E., Neugebauer, M., Riley, P., Balogh, A.: 2000, *J. Geophys. Res.* **105**, 10419.
Sheeley, N.R., Wang, Y.M., Hawley, S.H., Brueckner, G.E., Dere, K.P., Howard, R.A., Koomen, M.J., Korendyke, C.M., Michels, D.J., Paswaters, S.E., *et al*.: 1997, *Astrophys. J.* **484**, 472.
Strachan, L., Suleiman, R., Panasyuk, A.V., Biesecker, D.A., Kohl, J.L.: 2002, *Astrophys. J.* **571**, 1008.
Suess, S.T., Wang, A.H., Wu, S.T.: 1996, *J. Geophys. Res.* **101**, 19957.
Wang, S., Lee, L.C., Wei, C.Q.: 1988, *Phys. Fluids* **31**, 1544.
Wang, Y.M., Sheeley, N.R., Socker, D.G., Howard, R.A., Rich, N.B.: 2000, *J. Geophys. Res.* **105**, 25133.
Wang, Y.M., Sheeley, N.R., Walters, J.H., Brueckner, G.E., Howard, R.A., Michels, D.J., Lamy, P.L., Schwenn, R., Simnett, G.M.: 1998, *Astrophys. J.* **498**, L165.
Woo, R., Martin, J.M.: 1997, *Geophys. Res. Lett.* **24**, 2535.
Wu, S.T., Wang, A.H., Plunkett, S.P., Michels, D.J.: 2000, *Astrophys. J.* **545**, 1101.
|
{
"pile_set_name": "ArXiv"
}
|
Rep. Jerrold Nadler (D-N.Y.) said on Friday that the Federal Communications Commission should reinstate the fairness doctrine for broadcast television to ensure that multiple sides of controversial topics are offered to the public.
“For over the airwaves TVs, I think they should bring it back,” said Nadler on Fox Business News with Andrew Napolitano.
“I think it makes sense for people to be able to hear as many sides of political opinions as possible, and as long as it's the people's airwaves that should be used for that purpose.”
Nadler’s comments come in the wake of the recent shooting of Rep. Gabrielle Giffords (D-Ariz.), which has re-launched the debate over what role heated political rhetoric plays in spurring people to take violent actions.
Nadler's comments follow on the heels of remarks by Rep. James Clyburn (D-S.C.) following the shootings in Tucson. "Free speech is as free speech does," he is quoted by The Post and Courier as saying. "You cannot yell ‘fire' in a crowded theater and call it free speech and some of what I hear, and is being called free speech, is worse than that."
Despite originally calling for a "clarif[ication of] the public interest obligations of broadcasters who occupy the nation's spectrum" on the White House website the day he was inaugurated, the Obama Administration has since retreated after false charges by the Right that Democrats intended to "censor" Rightwing talk radio. The passage was quickly removed from the White House website and, more recently, the Administration has said it has no interest in revisiting the Fairness Doctrine. (Though, we should note, that doctrine is not the only way to help restore fairness and balance to our public airwaves.)
The Fairness Doctrine, which had been enforced by the Federal Communications Commission (FCC) since 1949 and required those who used the limited and publicly licensed broadcast airwaves to at least attempt to offer opposing views on controversial issues, was abolished in 1987 under President Reagan.
The doctrine's abandonment immediately paved the way for round-the-clock, one-sided propaganda from nationally syndicated talk radio hosts such as Rush Limbaugh and scores of others.
Diversity of viewpoints shared in the public interest over our public airwaves was further brought to an end by the federal Telecommunications Act of 1996, signed by President Clinton. The act, sold as a "boon to competition in the market," lifted the cap on the number of radio and TV station licenses that could be procured by a single corporation. For example, prior to the Telecom Act, the Clear Channel corporation owned just 59 radio and TV stations nationwide. After passage of the act, they were allowed to purchase and control more than 1,200.
In the bargain, real competition, and virtually all pretense of fairness and balance, on our public broadcast airwaves largely died rather than flourished as the act's supporters had claimed it would; detractors, such as Ralph Nader, had predicted exactly that outcome. Corporations were allowed to hold virtual monopoly control over political viewpoints on the nation's airwaves in nearly all major cities across the country.
In the October issue of O magazine, Democratic consultant and commentator Donna Brazile did the unthinkable: she used the "F" word --- in Oprah Winfrey's publication, no less! Eyebrows are being raised across the political spectrum.
Okay, not that "F" word, a different one which is, apparently, far more controversial these days: Brazile says that if she "were in charge" her first priority would be to "bring back the Fairness Doctrine." She says that would require "holders of broadcast licenses to present controversial issues of public importance in an honest, equitable, and balanced fashion."
To the uninitiated, bringing Fairness to the public airwaves --- broadcast radio and TV --- is a no-brainer. But to Sean Hannity, Glenn Beck, Rush Limbaugh, and an army of 550,000 amassed to keep the nation's radio airwaves under "conservative" control, Brazile's declaration of priorities could be a call to arms. Is it possible that the Democratic establishment is finally ready for a fight to take control of their message? While no longer with the DNC, Brazile is still closely aligned with the Democratic power establishment after all.
Okay, time for a bit of history.
Our elders will remember a time when radio was America's number one source of news and information. And they remember being horrified at how Tokyo Rose and our enemies used the radio airwaves to promote hate and propaganda against the U.S.
So they watched as the Federal Communications Commission (FCC) and radio station owners worked together to prevent propaganda from ever being broadcast over the public airwaves in these United States of America. This coalition of government and business put the "Fairness Doctrine" in place to ensure a healthy, reasoned discourse so critical to our democracy.
The thing is --- and a point important for those who believe much more information is now available on cable and the Internet --- radio is still America's number one source of news and information. More people listen to radio than watch television, read newspapers, or go online. Nearly fifty million people in the U.S. listen to talk radio.
But Fairness? Equal Time? Reasoned discourse? Those went out the window in 1987 with - drumroll, please - President Ronald Reagan...
This November, California voters will be afforded a rare opportunity to directly decide whether to legalize and tax the lawful cultivation, processing, distribution, sale, and consumption of marijuana by and to individuals over 21 years of age.
By approving Proposition 19, formally labeled the "Regulate, Control, Tax Cannabis Act of 2010" [PDF], voters will take an important first step towards ending the costly, hypocritical, and liberty-destroying "war on drugs" which, like its predecessor (Prohibition), has created a lucrative niche for criminal organizations --- hypocritical because the covert agencies of the U.S. government have long engaged in drug trafficking in support of Empire even as the so-called "War on Drugs" has provided a convenient excuse for supporting brutal dictatorial puppet regimes whose function it is to serve the interests of what John Perkins described in Confessions of an Economic Hit Man as the "corporatocracy"...
A three-judge panel of the heavily Republican 5th Circuit Court of Appeals in New Orleans rejected the Department of the Interior's request for an emergency stay of Judge Martin Feldman's June 22, 2010 preliminary injunction [PDF], which prevents enforcement of the Department of the Interior's six-month moratorium on exploratory drilling on only 33 "of the approximately 3,600 structures in the Gulf dedicated to offshore oil exploration and production."
The panel's two Reagan appointees, Judge Jerry E. Smith, joined by Judge W. Eugene Davis, ruled that the government had failed to demonstrate that it would be irreparably harmed if a decision on whether to vacate Judge Feldman's injunction was deferred until after the appeal was heard sometime around the end of August or early September. Judge James L. Dennis, a Clinton appointee, dissented, noting that he did not believe Secretary of the Interior Ken Salazar had abused his discretion in ordering a moratorium, which, per the government's motion, is limited to those drilling operations that apply "the same technologies employed by Transocean's Deepwater Horizon...only to waters over 500 feet deep..." Since, under the Administrative Procedure Act, a court cannot overturn an agency decision absent an abuse of discretion, Judge Dennis appears to have concluded that Judge Feldman erred in issuing the preliminary injunction.
Judge Dennis did have a question, however, about the six month length of the moratorium.
While yesterday's ruling does not mean that the panel will ultimately rule against the moratorium, the Justice Department and the attorneys representing a number of environmental organizations in the appeal face a daunting climb given Judge Smith's expression of the usual appellate court deference to the findings of the district court judge --- a climb up an oil slicked slope given the ties between the oil industry and the judges who will decide the moratorium's fate...
Amidst exploding bombs, smoke billowing from sinking battleships and dead bodies floating atop the oil slicked waters of Pearl Harbor, it was not all that difficult to appreciate the damage wrought by a surprise attack launched by the Empire of Japan. The same was true when we watched in horror as the smoldering twin towers of the World Trade Center precipitously collapsed on September 11, 2001.
Like these two earlier pivotal events, January 21, 2010 is "a date which will live in infamy." Yet, unlike Pearl Harbor and 9/11, most Americans do not recognize it as such. This attack came not by way of planes or bombs delivered by some foreign menace. It came from within, courtesy of what Professor Cass Sunstein aptly described as "radicals in robes" --- four directly connected to the Robert Bork-founded, billionaire-funded Federalist Society; all five appointees of the Reagan and two Bush administrations. Men bent on unraveling the very constitution they had all solemnly sworn to uphold.
Their assault, though subtle, wrought far greater devastation than either Pearl Harbor or 9/11. They did not merely attack planes, ships and buildings. They assaulted the very foundations of our constitutional democracy...
"You've got a small number of multinational corporations that control the entire food system from seed to the supermarket. This isn't just about what we're allowed to eat. This is about what we're allowed to say; what we're allowed to know. It's not just our health at risk...They have managed to make it against the law to criticize their products. There is an effort to make it illegal to publish a photo of any industrial food operation." - Food, Inc. narration.
We hear it constantly from Republicans; an ideological mantra to the effect that government, especially government programs that would place the interests of public health, safety, and equality above the profits and power of those who already have too much of both, threatens our liberties.
Perhaps in a manner even more successful than Michael Moore's very powerful presentations in Sicko! and in Capitalism: A Love Story, Robert Kenner and Eric Schlosser, in their Academy Award nominated documentary feature Food, Inc. (trailer posted at end of article), expose the lie behind the myth that so-called "free markets" make us free....
"We cannot afford these wars. We cannot afford the loss of lives. We cannot afford the cost to taxpayers. We cannot afford to fail to exercise our constitutional right to end the wars." So said Rep. Dennis Kucinich (D-OH) in an email on Wednesday, announcing his intention to introduce a privileged resolution in the House in January to "End the War."
He appeared on MSNBC with Ed Schultz (video below) the night before to explain that under President Obama's plan to immediately increase troop levels by 30,000 before beginning a withdrawal in July of 2011 (pending "conditions on the ground," which could extend the occupation for years, as Sec. of Defense Robert Gates recently admitted) we have an "orgy of crime."
"We will be spending at least $150 billion a year, at the costs of many lives, to be able to subsidize a criminal undertaking." What criminal undertaking was Kucinich referring to?...
On Jan. 18, 2010 our nation will observe Martin Luther King, Jr. Day, commemorating the extraordinary life of an intellectual and moral giant. The corporate media will fill the airwaves with excerpts of his uplifting August 28, 1963 "I Have a Dream" speech in which Dr. King called upon us to judge one another by the content of our character and not by the color of our skin. And, during that same holiday, the corporate media can be counted upon to ignore his April 4, 1967 "Beyond Vietnam" speech just as they have every year since the first Martin Luther King, Jr. Day in 1986.
Why? Because the egalitarian principles enunciated in "I Have a Dream" challenged only the now (largely) defunct Jim Crow regime.
While de facto, race-based economic inequality stubbornly remains as a vestige of slavery and Jim Crow, the elimination of de jure segregation posed no threat to the stark economic inequality created by an increasingly brutal form of U.S. capitalism and imperialism. It was the brutal reality of corporate Empire which led Dr. King, in "Beyond Vietnam," to describe his own government as "the greatest purveyor of violence in the world today" --- a point which exposes the hypocrisy in that same government's celebration of the life of a man singularly devoted to non-violence.
If you have not read "Beyond Vietnam" in its entirety, you should. If you have, you should read it again, for Dr. King's message is as applicable today as it was then.
Particularly, as we deconstruct the empty words used by our Harvard-educated President to justify an escalation of what Robert Scheer aptly describes as a "War of Absurdity," and as we look "Beyond Afghanistan"...
What a weird life! In the 1970s and 80s I helped my late Evangelical-leader, Religious Right founder father as his nepotistic sidekick. We helped establish the Religious Right and send it on its merry way to doom. Now I --- a backslider former Evangelical, former Republican --- watch in amazed fascination as once again the Right I helped launch like a nasty little torpedo into the guts of the Republican Party once again explodes.
A few personal observations... I doubt that I'd be involved in radio at all these days myself, were it not for the many late nights, as a child, in the dark, when I should have been sleeping, listening to Jim White broadcast over the 50,000 watt KMOX blow-torch in St. Louis, MO. Back in the days when talk radio was something very different than what it has now become...
"The widespread abuse of prisoners is a virtually foolproof indication that politicians are trying to impose a system --- whether political, religious or economic --- that is rejected by large numbers of people they are ruling. Just as ecologists define ecosystems by the presence of certain 'indicator species'..., torture is an indicator species of a regime that is engaged in a deeply anti-democratic project, even if that regime happens to have come to power through elections." - Naomi Klein, The Shock Doctrine (2007)
In Part I of this five-part series, I took care to distinguish the post-9/11 application of torture techniques by the U.S. military from the role played by the CIA and demonstrated how the Bush/Cheney decision to torture predated the quasi-legal Justice Department memos. In Part II, I covered the CIA's dark beginnings, including links not only to former Nazi war criminals but to those Americans who provided financial support to Hitler's Germany, including the late Senator Prescott Bush, George W's paternal grandfather. I also demonstrated how academic studies, performed as part of the CIA's maniacal quest to crack the code of human consciousness, culminated in KUBARK, the CIA's 1963 torture manual.
Here, we will explore how those KUBARK torture techniques became an essential component of the covert dimension of a US-led corporate Empire --- a means for exerting control over populations resistant to the injustice of a system that values the obscene wealth of a few over the needs of the many...
"The United States participated actively and effectively in the negotiation of the Convention . It marks a significant step in the development during this century of international measures against torture and other inhuman treatment or punishment. Ratification of the Convention by the United States will clearly express United States opposition to torture, an abhorrent practice unfortunately still prevalent in the world today.
The core provisions of the Convention establish a regime for international cooperation in the criminal prosecution of torturers relying on so-called 'universal jurisdiction.' Each State Party is required either to prosecute torturers who are found in its territory or to extradite them to other countries for prosecution."
My italics. Reagan was admant [sic] about prosecuting torture, but also prosecuting inhuman treatment that some might claim was not full-on torture. Now go read National Review or The Weekly Standard. And look what has happened to conservatism in America.
Reagan was, of course, part of the Blame-America-First crowd. Soft on terror. Friend of the evil-doers. Why did Ronald Reagan hate America?
|
{
"pile_set_name": "Pile-CC"
}
|
A pair of optic fibers can be optically coupled by forming lenses at the ends of the optic fibers and positioning the lenses substantially in alignment and at approximately a predetermined spacing. One technique for accomplishing this, as disclosed in U.S. Pat. No. 4,497,536, is to extend the tip of the fiber beyond the front of a contact or terminus and apply heat to form the bead or lens thereon. Then the fiber is pulled back until the root of the lens rests against a locating surface at the bottom of a recess at the front of the contact, which locates the lens both laterally and longitudinally. While this technique accurately locates the root of the lens, it has the disadvantage that the lens may be broken off as it is pulled back firmly to seat its root against the locating surface. This can occur because the optic fiber and lens have very small diameters and a small force can break off the lens at or near its root.
In a lens type fiber optic connector, it is critical that the front faces of the lenses in the mating fiber optic contacts be positioned a precise predetermined distance from each other to maximize light transmission through the optical fibers joined by the contacts in a connector. This requires that the front face of the lens in each contact be positioned a precise distance from the front mating face of the contact, which is half the distance desired for the spacing between the lenses in the mated contacts, to assure maximum light transmission. In the prior art contact of the type described above, the position of the front face of the lens in the contact is dependent upon three tolerances, namely, tolerances for the axial length of the bead lens, the position of the locating surface at the bottom of the recess against which the lens seats, and the shape and location of the root of the lens. Because there are three axial tolerances applicable to each of the two mating contacts in a connector, there is an accumulation of manufacturing tolerances which makes it extremely difficult to achieve the proper spacing between the end faces of the lenses in the contacts necessary to avoid attenuation of the light signal through the connector.
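As a rough numerical illustration of this tolerance stack-up (the micron values below are hypothetical and are not taken from the patent itself), the following C++ sketch sums three per-contact axial tolerances and doubles the result for the two mating contacts:

#include <cstdio>

int main() {
    // Hypothetical per-contact axial tolerances, in micrometers.
    const double lensLengthTol   = 25.0;  // axial length of the bead lens
    const double locatingSurfTol = 15.0;  // position of the locating surface
    const double lensRootTol     = 20.0;  // shape/location of the lens root

    // Worst case per contact: the three independent tolerances add directly.
    const double perContact = lensLengthTol + locatingSurfTol + lensRootTol;

    // Both mating contacts contribute to the lens-to-lens spacing error.
    const double worstCaseSpacingError = 2.0 * perContact;
    std::printf("worst-case lens spacing error: +/- %.0f um\n", worstCaseSpacingError);
    return 0;
}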
An object of the present invention is to provide a connection apparatus and method which enables the forming and positioning of a lens at the end of an optical fiber which avoids breakage caused by pulling the fiber until the lens rests firmly against a stop.
Another object of the invention is to provide a connection apparatus and method which permits precise positioning of the end face of an optical fiber lens within a contact so that light transmission through a pair of mating contacts may be maximized.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Q:
Using return value of extraction operator in condition?
Why can ss >> aa >> bb >> cc >> dd be used in a condition check? If I use ss >> aa >> bb >> cc >> dd >> ee, what is the return value of this operation?
ifstream inputFile("source.txt", ifstream::in);
string aa, bb, cc, dd;
char line[1024];
while(!inputFile.eof())
{
inputFile.getline(line, 1023);
stringstream ss(stringstream::in | stringstream::out);
ss.str(line);
if(ss >> aa >> bb >> cc >> dd)
{
cout << aa << "-" << bb << "-" << cc << "-" << dd << endl;
}
}
With source.txt like this:
1aaa ddd eee asd
2dfs dfsf sdfs fd
3sdf sdfsdfsdf d s
A:
The return value of a stream input operation is the stream.
The expression
ss >> aa
is equal to
operator>>(ss, aa)
and the operator>>() function returns the first argument.
Using multiple input operations simply chains the function calls. For example
ss >> aa >> bb;
becomes
operator>>(operator>>(ss, aa), bb);
The reason a stream can be used as a boolean expression is that it has a special conversion operator that allows this (since C++11, an explicit operator bool; before that, an operator void*).
By the way, you shouldn't use while (!stream.eof()): eof() only becomes true after a read has already failed, so the loop body runs one extra time. Instead, use the fact that getline returns the stream, and that a stream can be used in a boolean expression:
while (inputFile.getline(line, 1023))
{
// ...
}
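For completeness, here is a self-contained version of the question's program rewritten along these lines, with getline driving the loop and the chained extraction used directly in the condition:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::ifstream inputFile("source.txt");
    std::string line;
    // getline() returns the stream, so the loop condition is false exactly when
    // a read fails (end of file), avoiding the !eof() anti-pattern.
    while (std::getline(inputFile, line)) {
        std::istringstream ss(line);
        std::string aa, bb, cc, dd;
        // The chained extraction returns ss itself; used in a boolean context,
        // it is true only if all four extractions succeeded.
        if (ss >> aa >> bb >> cc >> dd) {
            std::cout << aa << "-" << bb << "-" << cc << "-" << dd << '\n';
        }
    }
    return 0;
}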
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Do open windows increase the chance of lightning striking the house?
There's a widespread belief among most of the people I know that all windows should be closed during a storm as having them open is supposed to attract lightning (or, according to some people, can allow the lightning to 'strike inside').
I'm highly sceptic about all this and Googling for those risks was unsuccessful. Does anyone have any light to shed on the matter?
A:
No, open windows do not increase the chance of lightning striking a house--however they DO allow lightning to more easily strike an object inside the house.
From a USAToday chat transcript with John Jensenius, a meteorologist with the National Weather Service and expert on lightning safety:
Warren, Ohio: They say don't stand near a window when it's lightning
outside. Does it make a difference whether the window is open or
closed? Can lightning go through the glass? Isn't glass an insulator?
John Jensenius: It's better to be a few feet away from windows and
doors. Glass is an insulator, but so is air. You're probably a little
better off with the window closed, but it's more important to put a
couple feet of air between you and the window. Both windows and doors
can be made of or contain metal, so the glass may not make much
difference. I know of several incidents of people being struck with
their hand on the doorknob while peering outside at the storm.
Although there is always a chance that lightning travels through the closed window, with lightning strikes you are always playing in the realm of probabilities and the best thing you can do is keep your body out of the potential path of any nearby lightning strikes. You could be injured by shards of glass from a shattering window but it is preferable to being part of a closed circuit between a thundercloud and the earth.
|
{
"pile_set_name": "StackExchange"
}
|
Farm Insurance
American farms are the original small business and part of the foundation of our economy. From small family-owned cattle farms to large crop producers, farming is quite literally living off the land. Regardless of what you grow, you need insurance that protects your livelihood and ensures the continuity of your business. There is nothing you can do to prevent disaster from striking, but you can take steps to protect your income and assets from the unknown.
Here at Mid Rivers Insurance, we help farmers throughout St. Louis, St. Peters, and the state of Missouri find insurance appropriate to their needs. We offer property and liability coverage that is fully customizable and affordable, with protection for building structures, equipment, livestock, crops, assets, and income. After all, it only takes one major catastrophe to wipe out years of hard work on a farm.
Liability Insurance for Farms
What would happen if one of your horses kicked an employee? What if you accidentally ran your all-terrain vehicle into the side of a customer’s truck? Liability is a major issue for any business. Unfortunately, if you own a lot of property, you also own a lot of risk. Our job is to help you minimize your financial exposure to issues that may arise.
We can provide a wide range of liability protection, with coverage designed to protect your business income and assets if you are responsible for another person’s injuries or property damage. Our insurance solutions can provide payments for medical bills, cover the cost of repairs, reimburse you for legal fees, and handle any judgments brought against you in a lawsuit. Examples of common farm insurance liability coverage types include:
General liability
Workers compensation
Pollution/Environmental liability
Commercial vehicle and ATV liability
Umbrella insurance
And more
Property Insurance for Farms
Most farms and ranches have one thing in common – plenty of property. Whether you own cattle, breed horses, grow crops, or run a nursery, chances are you have a home plus multiple buildings, equipment, machinery, and other assets to insure. Our farm property insurance solutions cover your business against a wide range of potential disasters, including:
Theft and vandalism
Adverse weather events
Fire, smoke, and explosions
The weight of ice and snow
Falling objects
Accidental water damage
And more
We can provide you with peace of mind knowing that all facets of your property – from your home and barn to your hay and special equipment – are covered against potential loss. Your personalized farm insurance can help you pay to repair or replace damaged structures and equipment, as well as compensate you if you experience an interruption to your farm’s business operations due to a covered event.
Farm Insurance in St. Peters and Kirksville
If you operate a farm in Missouri, we want to help you protect it. At Mid Rivers Insurance, we believe that farm insurance should be affordable, easy to access, and simple to understand. We shop and compare rates from multiple carriers until we find a combination of coverage types that fit your unique needs and budget. Our team enjoys getting to know our customers and working closely with you to better understand the risks you face on a daily basis. Call us today to speak directly with a helpful and friendly insurance representative. We look forward to serving you soon.
Earn a $10 Gift Card!
Do you have a friend, neighbor, or co-worker that might be interested in learning more about what Mid Rivers Insurance can do for them? Just pass along our name on our Refer-a-Friend Page and if they request a quote, we'll send you a $10 gift card to show you our appreciation!
Testimonials
I use them for my contractors and car insurance. Their prices are the best I’ve found and they have …
Ayn Riggs
These guys are fantastic! Awesome rates, awesome customer service! Switch now and see.
Jeffrey White
Jaren Hafen from Mid Rivers Group is one of the best agents I have ever dealt with! He goes beyond w…
Derrik Staheli
Friendly staff and great prices. Saved us a bunch of money over State Farm!
Suzie Mueller
Jaren is an outstanding agent! He cares about my family’s needs and has our best interests at hear…
Jessica Lane
Jaren Hafen is extremely knowledgeable and helpful. He’s also very nice and easy to talk to, with a …
Michelle D
Tim at Mid Rivers Insurance provided a much better value when I shopped for homeowners and auto cove…
Robert Wood
These guys do a great job finding you the best deal. No one else has been close!
Chris Guebert
Thank you Mike!!! It truly is a privilege to work with you!!! From workman’s comp/liability for bu…
Candice Anderson
I highly recommend giving Mid Rivers Insurance Group a chance to help you with your insurance. They …
|
{
"pile_set_name": "Pile-CC"
}
|
Game 31: 2014-15 NBA Season: Toronto Raptors (23-7, 9-4 away) @ Denver Nuggets (13-17, 9-6 home). Season series: 0-1, Raptors. December 28th, 2014, Pepsi Center - Denver, CO
7:00 PM MT, Altitude / 950 AM

Projected starters:
PG: Kyle Lowry (TOR) vs. Ty Lawson (DEN)
SG: Terrence Ross (TOR) vs. Arron Afflalo (DEN)
SF: James Johnson (TOR) vs. Wilson Chandler (DEN)
PF: Amir Johnson (TOR) vs. Kenneth Faried (DEN)
C: Jonas Valanciunas (TOR) vs. Timofey Mozgov (DEN)

Blogs: Raptors HQ (Toronto); you're here! (Denver)
Injuries: Raptors - DeMar DeRozan, out (groin); Landry Fields, out (acute gastritis). Nuggets - JaVale McGee, out (leg); Randy Foye, out (quad); Danilo Gallinari, out (knee); Darrell Arthur, day-to-day (leg)
Notes: The nickname finalists when the team originally formed included, along with Raptors: Beavers, Bobcats, Dragons, Grizzlies, Hogs, Scorpions, T-Rex, Tarantulas, and Terriers
Etc.: The Nuggets are 23-14 all time versus the Raptors, including 13-5 at Pepsi Center
Tonight the Toronto Raptors enter the fear inducing environment of the Pepsi Center, to face the Denver Nuggets. Toronto pulled out a win last night against the Los Angeles Clippers on the road. Canada's team is the toast of the Eastern Conference as they currently hold the top spot. Denver had the makings of a Western Conference sleeper, but now appear to be ready to sell spare parts in February. Tonight would be a signature win for the Nuggets, if they can out run the Raptors.
Toronto's Lou Williams torched the Nugs off the bench with 26 points as the Nuggets fell in OT on the road despite a valiant effort, the last time these two teams met.
Prediction
Denver keeps it rolling at home against the Raps with a 107-105 nail biter.
|
{
"pile_set_name": "OpenWebText2"
}
|
The Breakfast That Could Make You Gain Weight
Plus, the one you should go for instead
We've got some good news, and we've got some bad news. Let's start with the bad news: Having toast for breakfast may increase your odds of weight gain. But the good news? Eggs may help you stay svelte. At least, that's according to the latest what's-the-best-breakfast-for-weight-loss study from the journal Appetite, which found that high-carbohydrate and low-fat breakfasts could make you gain weight, whereas low-carbohydrate, high-fat ones can help you slim down.
For the study, University of Alabama researchers had 64 overweight adults, ages 21 to 50, eat one of two breakfasts: a high-carb, low-fat meal or a low-carb, high-fat meal. Study subjects ate their designated meals every day for four weeks. Then, after they'd had that period to get used to the meals, researchers had them eat the breakfasts again—but this time, they measured participants' insulin and glucose levels both pre- and post-meal. They also asked them to rate their hunger and fullness levels afterward.
The results? The low-carb, high-fat meal won out: Those who had been placed into that category reported feeling less hungry three and four hours after breakfast, whereas those who ate the carb-ier meal reported feeling hungrier. They also had a faster "rise and fall" of glucose levels. Researchers believe that's because the carbs caused their blood sugar levels to crash earlier, so they were hungrier as a result. And as we've said before, fat keeps you full for longer.
The takeaway? Go ahead and leave the yolks in your omelet—and don't feel like low-fat or even full-fat yogurt should be avoided at all costs.
|
{
"pile_set_name": "Pile-CC"
}
|
Top Rated 2 Year Old Toys
Our top rated toys for 2 year olds are rated by our customers. In addition, our team of child-experts review and test thousands of fun, educational toys, books and games. Then, they match every item to a 2 year old's abilities. The combination of top rated toys by our customers and our child-experts matching every toy to a child's development helps to guarantee a child's fun.
Ebeanstalk Picks
These are our FAVORITES. With so many to choose from, we wanted to give a hand to help you find the absolute coolest toys on the site. All Ebeanstalk Picks →
Perfect for the new BIG brother in your family!Warm, loving pictures accompany this upbeat look at how a family grows when a new baby comes home. A companion volume to I'm a Big Sister. Author Joanne ...
This is one of our most popular toys for a baby or toddler.- The Classic Walker Wagon has been updated to be STURDIER, HEAVIER and SLOWER MOVING than the original (this means, that as your child is le...
500 Words To Grow On is written by Random House and handsomely illustrated by Kristen Kest. Oh, and it's one of our top sellers for the past 3 years. Color Words, people Words, Words to wear, and Word...
EBEANSTALK TOP SELLER THE BUCKET: The Bubble Bucket has a patented no-spill design that eliminates bubble spills, and a wide base that adds stability. This allows three kids to play with the Bubble Bu...
Get a little Lime-Green with envy!Rody's are great Core Training (and super fun) for the little ones! The Rody Horse is soft and easy to ride. It is made of super-strong, latex-free vinyl, and inflate...
These Giant Stacking Cups are more games-in-one that you could ever imagine.Game 1: Simple stacking from big to small. These sturdy buckets have a ridge to allow for easy-fit stacking. Game 2: Knock-...
It's a plane! It's a firetruck! It's a...- Place a vehicle puzzle piece correctly in the puzzle board and listen to it toot, beep or rumble! - Eight great sounds and a full-color, matching picture b...
From the vast and colorful imagination of Mary Engelbreit springs a Mother Goose world bursting with warmth and humor. All the favorite time-honored characters are here -- Little Bo-Peep, Humpty Dumpt...
|
{
"pile_set_name": "Pile-CC"
}
|
1. Introduction {#s1}
===============
Corticobasal degeneration (CBD) is a slowly progressive neurodegenerative disorder characterized by tau pathology and distinctive clinical manifestations including asymmetric akinetic-rigid syndrome and higher cortical dysfunctions such as ideomotor apraxia, cortical sensory loss, and alien limb ([@bb001]; [@bb002]; [@bb003]).
In addition to characteristic clinical symptoms, previous magnetic resonance imaging (MRI) studies have reported distinct neuroimaging findings of clinically or pathologically diagnosed CBD (CBS/CBD), including asymmetric atrophy in the cerebral cortex and peduncle with dominance contralateral to the more clinically affected side, atrophy of the midbrain tegmentum and corpus callosum, and abnormal T2 prolongation in the subcortical white matter (WM) ([@bb007]; [@bb008]; [@bb009]). Nevertheless, the diagnostic accuracy of MRI abnormalities is suboptimal for clinically diagnosed PSP (sensitivity averaging approximately 70% across different studies) and poor for CBD ([@bb0010]; [@bb0011]; [@bb0012]; [@bb0013]; [@bb0014]). These disorders can also have similar structural abnormalities including atrophy of the midbrain tegmentum and asymmetric atrophy of the cerebral cortex ([@bb008]; [@bb009]; [@bb0015]).
Voxel-based morphometry (VBM), which can objectively assess the whole brain structure with voxel-by-voxel comparisons, has been developed to analyze tissue volumes between subject groups to distinguish degenerative diseases with Parkinsonism. Previous VBM studies comparing cerebral atrophy between CBS/CBD and PSP patients confirmed more asymmetric dorsal frontal and parietal gray matter (GM) atrophy in CBS/CBD, and more midbrain tegmental atrophy in PSP ([@bb0016]; [@bb0017]). In addition to these findings, subcortical frontal WM atrophy, which may reflect primary degeneration due to tauopathy, has been reported using mainly the SPM 2 and 5 ([@bb0016]; [@bb0017]). However, data on the utility of WM atrophy for differentiating between clinically diagnosed CBD and PSP using the SPM8 plus diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) (Wellcome Trust Centre for Neuroimaging, London, UK) method ([@bb0018]), which can improve registration and provide the precise location of structural damage in both GM and WM, are scant. The aim of this study was to compare the utility of structural WM atrophy evaluated using SPM8 plus DARTEL for differentiating between patients with a clinical diagnosis of CBD---reported here as CBS---and patients with the classic clinical phenotype of PSP---reported here as Richardson's syndrome (RS).
2. Materials and methods {#s2}
========================
2.1. Patients and control subjects {#s2.1}
----------------------------------
The aim of this study was to evaluate the characteristic WM atrophy of CBS and RS using data retrospectively collected at a single medical center. This study was approved by the Ethics Committee for Clinical Research of the Tokyo Metropolitan Medical Center of Gerontology, which waived the requirement for informed consent. The privacy of the patients was completely protected. In this retrospective study, the study group was selected following a search of the medical records filed at the Tokyo Metropolitan Medical Center of Gerontology between March 2007 and March 2013. Patient backgrounds were standardized by applying the following inclusion criteria: (1) a clinical diagnosis according to the published criteria for CBS and PSP ([@bb001]; [@bb0019]), and (2) acquisition of 3D T1-weighted SPGR images. The exclusion criterion was insufficient quality of the 3D T1-weighted SPGR images due to significant abnormal findings (e.g., large cerebral infarctions) or apparent artifacts that would disturb the VBM analyses. During this period, a total of 59 patients were suspected to have CBS or PSP. Of these, eight patients were excluded due to insufficient MRI quality. Eighteen CBS patients (mean age, 79 ± 5 years; 3 men and 15 women) and 33 RS patients (4 possible and 29 probable) (mean age, 78 ± 5 years; 20 men and 13 women) were finally enrolled in this study. Patient characteristics are summarized in [Table 1](#t0005){ref-type="table"}. Thirty-two age-matched people (mean age, 79 ± 3 years; 19 men and 13 women) without obvious neurological or MR abnormalities were selected from the normal database of volunteer subjects at our institution and were investigated as control subjects.
2.2. MRI protocol {#s2.2}
-----------------
All 83 subjects underwent MRI examinations on a 1.5-T imager (Signa Excite HD; GE Medical Systems, Milwaukee, WI, USA) with a multichannel head coil. 3D sections of a T1-weighted spoiled gradient recalled echo (SPGR) sequence were mainly obtained in the sagittal plane, with the following scanning parameters: repetition time, 21 ms; echo time, 6 ms; flip angle, 20°; field of view, 230 mm; matrix, 256 × 192 (i.e., in-plane resolution 0.90 × 1.20 mm); and 1.8-mm thick gapless sections. The 3D SPGR images of four CBS patients were obtained in the axial plane with the same in-plane resolution. All volumetric T1-weighted images were visually inspected for apparent artifacts due to patient motion or metallic dental prostheses.
2.3. Image analysis {#s2.3}
-------------------
Using the VSRAD software program, which is based on SPM8 plus DARTEL ([@bb0020]), the SPGR images of all subjects were classified into GM, WM, and cerebrospinal fluid images using a unified tissue-segmentation procedure after image-intensity nonuniformity correction, anatomically standardized to a customized WM template using DARTEL, and then smoothed using an 8-mm full width at half maximum isotropic Gaussian kernel. VSRAD provided statistical *Z* score images of WM atrophy for each patient relative to the "normal" WM database ([@bb0021]). The *Z* score was defined as: (\[control mean\] − \[individual value\]) / (control SD).
In order to confirm the diagnostic accuracy ([@bb0021]), we randomly divided the 18 CBS patients, 33 RS patients, and 32 controls into two groups: group A consisted of 9 CBS patients, 17 RS patients, and 16 controls, and group B consisted of 9 CBS patients, 16 RS patients, and 16 controls. The WM reduction patterns of the CBS and RS patients in group A, compared to the other subjects, were then assessed on the segmented WM images using an SPM8 full-factorial analysis. The statistical threshold was set at *p* \< 0.001, uncorrected for multiple comparisons, with an extent threshold of 300 voxels. Age and sex were included in the model as covariates.
The target volumes of interest (VOIs) specific for CBS and RS were then determined using the results of the group A analyses. We evaluated the usefulness of these target VOIs for diagnosing the remaining 9 CBS patients and 16 RS patients in group B. We obtained averaged positive *Z* scores in the target VOIs with MRIcron (<http://www.mccauslandcenter.sc.edu/mricro/mricron/>). Using these averaged positive *Z* scores in the target VOIs as the test variable, we used IBM SPSS Statistics 21 (IBM SPSS Inc, Chicago, IL, USA) to determine receiver operating characteristic (ROC) curves for discriminating CBS and RS patients.
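For readers who wish to reproduce this type of VOI analysis, the following is a minimal sketch of the averaged positive *Z*-score computation and ROC evaluation described above. The array names, the use of Python/scikit-learn (instead of the original VSRAD/SPSS pipeline), and the Youden-index cutoff are our illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def averaged_positive_z(z_maps, voi_mask):
    """z_maps: per-subject Z-score maps (n_subjects x n_voxels);
    voi_mask: boolean mask of the target VOI. Returns one score per subject."""
    voi = z_maps[:, voi_mask]
    pos = np.where(voi > 0, voi, 0.0)             # only positive Z scores contribute
    n_pos = np.maximum((voi > 0).sum(axis=1), 1)  # avoid division by zero
    return pos.sum(axis=1) / n_pos

def roc_with_cutoff(scores, labels):
    """labels: 1 for the patient group, 0 for the comparison group."""
    fpr, tpr, thr = roc_curve(labels, scores)
    j = np.argmax(tpr - fpr)                      # Youden index as one cutoff choice
    return roc_auc_score(labels, scores), thr[j], tpr[j], 1 - fpr[j]
```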
2.4. Statistical analysis {#s2.4}
-------------------------
Statistical analysis was carried out using IBM SPSS Statistics 21. A one-way ANOVA, the Kruskal--Wallis test, the unpaired *t* test, the Chi-square test, and the Mann--Whitney *U* test were used to assess differences in patient characteristics between the groups. The Pearson product-moment and Spearman's rank correlation coefficients were used to assess the correlation between the degree of WM atrophy and clinical parameters, including the disease duration and Hoehn-Yahr stage at the time of the MRI scan. Differences were considered significant when *p* \< 0.05.
3. Results {#s3}
==========
Patient characteristics are summarized in [Table 1](#t0005){ref-type="table"}. No significant difference was observed in age among the CBS, RS, and control groups, and no significant difference was identified in the Hoehn and Yahr scale between the CBS and RS groups. The number of women was markedly higher than that of men in the CBS group; this was not observed in the RS or control groups. In the 18 CBS patients, symptoms were right dominant in 12 and left dominant in six.
On the full-factorial analysis, widespread patterns of WM reduction were identified mainly in the bilateral frontal and limbic subcortical WM and the midbrain ([Fig. 1](#f0005){ref-type="fig"}). Patterns of WM reduction in each of the CBS and RS groups compared to the others are shown in [Table 2](#t0010){ref-type="table"}. The most significant areas of atrophy observed in CBS patients compared to the controls were in the bilateral frontal subcortical WM, including the left precentral gyrus ([Fig. 2A](#f0010){ref-type="fig"}, [Table 2](#t0010){ref-type="table"}). The most significant areas of atrophy observed in RS patients compared to the controls were in the midbrain ([Fig. 2B](#f0010){ref-type="fig"}, [Table 2](#t0010){ref-type="table"}). Additionally, atrophy of the corpus callosum in the CBS group and of the subcortical frontal WM in the RS group was observed at a more lenient threshold of *p* \< 0.05. More atrophic lesions were found in CBS patients than in RS patients, especially in the bilateral cingulate and right postcentral gyri ([Fig. 3A](#f0015){ref-type="fig"}, [Table 2](#t0010){ref-type="table"}). On the other hand, significant atrophy was identified in the bilateral midbrain in RS patients ([Fig. 3B](#f0015){ref-type="fig"}, [Table 2](#t0010){ref-type="table"}).
The target VOIs of CBS- and RS-specific atrophy were determined from the results of the VBM analyses ([Fig. 4](#f0020){ref-type="fig"}A, B). ROC analyses using the averaged positive *Z* scores of the CBS, RS, and control subjects were performed to evaluate the diagnostic accuracy of the disease-specific VOIs ([Fig. 5](#f0025){ref-type="fig"}A--D). The target VOI of CBS, including the bilateral frontal subcortical WM, exhibited an area under the curve (AUC) of 0.99, sensitivity of 89%, specificity of 100%, and accuracy of 96% with a cutoff *Z*-score of 1.30 ([Fig. 5A](#f0025){ref-type="fig"}). The target VOI of RS, including the midbrain, exhibited an AUC of 0.84, sensitivity of 81%, specificity of 81%, and accuracy of 81% with a cutoff *Z*-score of 0.97 ([Fig. 5B](#f0025){ref-type="fig"}). These results indicate adequate discrimination power of the disease-specific VOIs for differentiating CBS and RS patients from normal controls. On the other hand, a comparison of the averaged positive *Z* scores for differentiating CBS from RS patients revealed higher discrimination power for the CBS-specific VOI than for the RS-specific VOI (AUC of 0.75 for the CBS-specific VOI vs. AUC of 0.53 for the RS-specific VOI) ([Fig. 5C, D](#f0025){ref-type="fig"}). These results indicate that the CBS-specific VOI in the bilateral frontal WM could diagnose 89% of CBS patients and exclude 63% of RS patients (i.e., sensitivity 89%, specificity 63% with a cutoff *Z*-score of 1.37). Although the RS-specific VOI in the midbrain diagnosed 88% of RS patients, this VOI could not exclude 56% of CBS patients (i.e., sensitivity 88%, specificity 44% with a cutoff *Z*-score of 0.83).
CBS patients showed a moderately positive correlation between WM atrophy (i.e., the *Z*-score in the VOI) and Hoehn-Yahr stage (*r* = 0.5, *P* = 0.035). On the other hand, this correlation was weak and not significant in RS patients (*r* = 0.3, *P* = 0.07). There was no correlation between WM atrophy and disease duration at the time of the MRI scan in either group (*r* = 0.09, *P* = 0.73 in CBS and *r* = 0.02, *P* = 0.9 in RS patients).
4. Discussion {#s4}
=============
To the best of our knowledge, this is the first study to focus on the diagnostic value of WM volume reduction for discriminating between clinically diagnosed CBD (i.e., CBS) and PSP (i.e., RS) patients by VBM using SPM8 plus DARTEL. The present study demonstrated CBS-specific, left-side dominant asymmetric atrophy in the bilateral frontal subcortical WM around the precentral gyrus. This asymmetry is consistent with the asymmetric (predominantly right-dominant) symptoms of the CBS patients in this study. The WM abnormality in CBS was consistent with previously reported neuroradiological and pathological findings in CBS/CBD ([@bb008]; [@bb009]; [@bb0022]). Conventional MRI studies previously revealed asymmetric cerebral atrophy and subcortical WM T2 prolongation, especially around the central sulcus, with greater prominence contralateral to the more severely affected side ([@bb008]; [@bb009]; [@bb0022]). Additionally, advanced techniques including diffusion-weighted and diffusion tensor imaging have also revealed microstructural abnormalities of the cerebral WM, including the precentral gyrus, corpus callosum, and corticospinal tract ([@bb0023]; [@bb0024]; [@bb0025]). Pathological examinations of WM lesions correlated with the T2 prolongation on MRI have shown gliosis, demyelination, and tauopathy associated with CBD ([@bb008]; [@bb0022]). A semiquantitative analysis revealed similar pathological findings in the subcortical WM relative to the GM in CBD patients ([@bb0026]).
Previous VBM studies of CBS/CBD patients mainly evaluated GM atrophy and focused on frontal lobe atrophy, especially around the premotor cortex ([@bb0016]; [@bb0017]; [@bb0027]; [@bb0028]; [@bb0029]; [@bb0030]). The results of this study are consistent with the few VBM studies that revealed WM abnormalities in CBD patients, including asymmetric frontal subcortical atrophy, especially around the central sulcus, and less severe involvement of the brainstem ([@bb0017]). However, these studies did not evaluate the diagnostic value of WM atrophy for discriminating between CBS/CBD and RS/PSP. This study demonstrated that the discrimination power of bilateral frontal WM atrophy was higher than that of midbrain tegmental atrophy for differentiating CBS from RS. Considering the pathological data indicating a significantly greater burden of WM abnormalities in CBD than in PSP ([@bb0031]), it is reasonable to evaluate subcortical WM abnormalities when diagnosing CBD and PSP. Abnormal findings in the cingulate gyrus and corpus callosum have also been reported in CBS patients ([@bb007]; [@bb0016]).
If the methodology is only required to discriminate between CBS and RS, it is unnecessary to involve control subjects in the procedure for identifying diagnostic VOIs. Indeed, their use may result in a final test with inferior ROC characteristics. However, considering the difficulty of diagnosing parkinsonian syndromes, especially atypical PSP, CBD, and multiple system atrophy, it is not always possible for clinicians to narrow the differential diagnosis down to "PSP" and/or "CBD" on neurological examination. Thus, we think it is important to evaluate the diagnostic value of the disease-specific VOIs between patients and normal controls, which may support the imaging diagnosis of parkinsonian syndromes.
Midbrain tegmental atrophy is one of the well-known imaging findings of RS/PSP. Not only conventional MRI studies but also VBM studies have reported the utility of this finding in diagnosing RS/PSP ([@bb0016]; [@bb0017]; [@bb0032]; [@bb0033]; [@bb0034]). At first glance, the result of the present study revealing the poorer utility of this finding appears inconsistent with previous studies. However, some CBS/CBD patients, as well as RS/PSP patients, can have severe midbrain tegmental atrophy ([@bb008]; [@bb009]). Clinical symptoms, rather than the underlying pathology, have been shown to have more impact on midbrain tegmental atrophy ([@bb0035]). Therefore, it is not surprising that midbrain tegmental atrophy showed lower discrimination power.
Furthermore, 3D gradient echo imaging enables not only VBM but also other quantitative evaluations, including volume and area measurements, which are useful for diagnosing neurodegenerative diseases ([@bb0032]; [@bb0036]; [@bb0037]). Its higher spatial resolution is necessary for the detailed evaluation of various anatomical structures, including the midbrain tegmentum, cerebral peduncle, and superior cerebellar peduncle. However, the clinical utility of the VOI analyses in this study has not been adequately established, in at least two respects: the group is heterogeneous because the scope of the classification (i.e., severity of symptoms and staging) is not defined, and no alternative diagnostic tests were considered. Of note, patient samples of unequal size can introduce bias into VBM analyses and affect diagnostic measures such as accuracy and the area under the ROC curve ([@bb0038]). Furthermore, our study may also have been limited by the absence of pathological diagnoses in all cases. In this study, CBS patients were diagnosed according to formal diagnostic criteria for research purposes ([@bb002]). Clinicopathological studies have reported low sensitivity in the ante mortem diagnosis of CBD ([@bb0039]; [@bb0040]), and pathological studies have suggested that CBD can present with a broad clinical spectrum including not only CBS but also non-motor symptoms such as disorders of behavior, executive control, and language ([@bb0029]; [@bb0040]). It is also evident that CBS can be caused by various neurodegenerative disorders including PSP ([@bb0040]). Despite the very small number of patients, pathologically proven CBD cases have revealed different patterns of WM atrophy according to their clinical symptoms ([@bb0017]). Considering the difficulties associated with an ante mortem diagnosis of CBD due to the heterogeneity of clinical symptoms and imaging findings, more pathologically proven cases of CBD are required to reinforce the diagnostic value of WM volume reduction on VBM analysis.
5. Conclusions {#s5}
==============
Our VBM analysis using SPM8 plus DARTEL demonstrated the diagnostic value of significant atrophy in the bilateral frontal subcortical WM for diagnosing CBS. Thus, the VBM approach can be useful for discriminating between CBS and RS. However, considering the broad clinical spectrum of CBD, more pathologically proven cases of CBD are required to establish the diagnostic value of WM volume reduction on VBM analysis.
This study was supported in part by a Grant-in-Aid for Scientific Research (Kakenhi C) (24591785; K.S.).
{#f0005}
{#f0010}
{#f0015}
{#f0020}
{#f0025}
######
Patient and control characteristics.
CBS (n = 18) RS (n = 33) Control (n = 32) *p* value
------------------------------------------------------ --------------------------------------------- ------------------ ------------------ -------------------------------------------------
Age at the time of MRI (y) 79 ± 5 78 ± 6 79 ± 3 0.67 [⁎](#nstbl1.1){ref-type="table-fn"}
Age at symptom onset (y) 74 ± 5 74 ± 5 NA 0.43 [⁎⁎](#nstbl1.2){ref-type="table-fn"}
Male/Female 3/15 20/13 19/13 0.005 [⁎⁎⁎](#nstbl1.3){ref-type="table-fn"}
Disease duration at time of MRI (y) 4.6 ± 2.3 (1-9) 4.8 ± 2.6 (1-10) NA 0.79 [⁎⁎](#nstbl1.2){ref-type="table-fn"}
Neurological examination findings at the time of MRI
Asymmetry 18 (100%) 4 (12%) NA \< 0.001 [⁎⁎⁎⁎](#nstbl1.4){ref-type="table-fn"}
Tremor 6 (33%) 8 (24%) NA 0.49 [⁎⁎⁎⁎](#nstbl1.4){ref-type="table-fn"}
Rigidity 18 (100%) 32 (97%) NA 0.46 [⁎⁎⁎⁎](#nstbl1.4){ref-type="table-fn"}
Limb apraxia 14 (78%) 1 (3%) NA \< 0.001 [⁎⁎⁎⁎](#nstbl1.4){ref-type="table-fn"}
Apraxia of speech 12 (67%) 2 (6%) NA \< 0.001 [⁎⁎⁎⁎](#nstbl1.4){ref-type="table-fn"}
Alien limb 2 (13%) [a](#nstbl1.6){ref-type="table-fn"} 1 (3%) NA 0.19 [⁎⁎⁎⁎](#nstbl1.4){ref-type="table-fn"}
Myoclonus 4 (25%) [a](#nstbl1.6){ref-type="table-fn"} 0 (0%) NA 0.003 [⁎⁎⁎⁎](#nstbl1.4){ref-type="table-fn"}
  Vertical gaze limitation                               5 (28%)                                       25 (76%)         NA                 \< 0.001 [⁎⁎⁎⁎](#nstbl1.4){ref-type="table-fn"}
Falls 9 (64%) [b](#nstbl1.7){ref-type="table-fn"} 33 (100%) NA \< 0.001 [⁎⁎⁎⁎](#nstbl1.4){ref-type="table-fn"}
L-dopa benefit (subjective) 0 (0%) 7 (21%) NA 0.04 [⁎⁎⁎⁎](#nstbl1.4){ref-type="table-fn"}
Yahr stage 3.8 ± 1.0 3.9 ± 0.9 NA 0.68 [⁎⁎⁎⁎⁎](#nstbl1.5){ref-type="table-fn"}
1 0 1 NA
2 0 2 NA
2.5 3 2 NA
3 2 6 NA
4 8 14 NA
5 5 8 NA
Data are shown as absolute numbers or the mean ± standard deviation.
Note − CBS = corticobasal syndrome, NA = not applicable, RS = Richardson's syndrome, y = years
⁎ One-way ANOVA

⁎⁎ The unpaired *t* test

⁎⁎⁎ Kruskal--Wallis test

⁎⁎⁎⁎ Chi-square test

⁎⁎⁎⁎⁎ Mann--Whitney *U* test

a There were no relevant data in the medical records of two CBS patients

b There were no relevant data in the medical records of four CBS patients
######
Comparisons of CBS, RS and NC groups showing the locations in which WM reductions were greater in one group than in the other.
  Comparison   Cluster size   *t*-value   Talairach coordinates (x, y, z)   Location of local maxima
  ----------- -------------- ----------- --------------------------------- ----------------------------
NC \> CBS 973 5.76 −20, −15, 46 left precentral gyrus
1210 5.69 18, −19, 44 right cingulate gyrus
4.81 33, −18, 56 right precentral gyrus
4.57 10, 3, 54 right middle frontal gyrus
NC \> RS 2132 6.91 −13, −17, −6 left midbrain
5.21 15, −15, −6 right midbrain
RS \> CBS 939 5.27 −18, −18, 42 left cingulate gyrus
4.36 −14, −16, 56 left medial frontal gyrus
911 5.05 29, −36, 51 right postcentral gyrus
4.85 22, −23, 44 right cingulate gyrus
4.59 27, −8, 55 right middle frontal gyrus
CBS \> RS 932 4.87 6, −33, −4 right midbrain
4.82 −9, −31, −6 left midbrain
Clusters of WM SPM analysis uncorrected at *p* \< 0.001 with an extent threshold of 300 voxels are shown. The coordinates refer to the Talairach reference space.
Note − CBS = corticobasal syndrome, NC = normal controls, RS = Richardson's syndrome
|
{
"pile_set_name": "PubMed Central"
}
|
Minister of Energy and Mines Francisco Ísmodes took part in the inauguration in the district of Marcona, Ica region.

The Minister of Energy and Mines, Francisco Ísmodes, attended the inauguration of the Wayra I wind farm in the district of Marcona, in the Ica region. The wind farm, built by the company Enel Green Power Perú, is capable of generating energy equivalent to the consumption of 482,000 families.

The Wayra I wind farm was built by Enel Green Power Perú under a 20-year supply contract signed with the Ministry of Energy and Mines. In total, US$ 165 million was invested in the wind farm, which has a capacity of 132 megawatts.
The farm comprises 42 wind turbines that produce close to 600 gigawatt-hours per year, which the company will supply to the National Interconnected Electric System (SEIN).

Minister Ísmodes welcomed Enel's investment and added that Wayra I will produce wind energy while avoiding the emission of approximately 285,000 tons of carbon dioxide into the atmosphere each year.

He stressed that generating and using this clean energy will also help protect the planet, reduce greenhouse gases, and curb global warming.
|
{
"pile_set_name": "OpenWebText2"
}
|
I would like to tell you about my student Phi....
Phi is a fun-loving, kind-hearted third grader who loves school more than anything. He spent his 2nd grade year as a hospital homebound student as he underwent chemotherapy treatments for stage 4 neuroblastoma. He was ready to enter his third grade year and came into school on the first day with a big smile. He said, "When do the kids come? I'm ready to meet some friends." With his cheerful personality and kind heart, Phi quickly became a friend to all. One of his classmates described him as "a good friend to everyone." Not only is Phi a good friend, but he's also a stellar student in the classroom and has earned straight A's each trimester. His academic performance and love of learning make him a standout student.
In April of 2015, Phi's family found out that his cancer had returned, this time to his lungs, spine, and brain. Phi is no longer able to attend school, as he once again has to undergo chemotherapy treatments. His mom will not be able to work so she can care for her son and get him to and from treatments, which will be hard on the family.
Phi and his family need your help. They should not have to go through this alone, as they have already been through so much. Please help us support this family with any donation you can make, which will go toward covering their living expenses while they take time off work to care for Phi.
|
{
"pile_set_name": "OpenWebText2"
}
|
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2"/>
</startup>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
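      <!-- Bind any referenced Moq assembly version to 4.7.99.0 -->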
<dependentAssembly>
<assemblyIdentity name="Moq" publicKeyToken="69f491c39445e920" culture="neutral"/>
<bindingRedirect oldVersion="0.0.0.0-65535.65535.65535.65535" newVersion="4.7.99.0"/>
</dependentAssembly>
</assemblyBinding>
</runtime>
</configuration>
|
{
"pile_set_name": "Github"
}
|
Control mechanisms of the mechanical type (using springs), of the pneumatic type and of the hydraulic type are known for operating circuit breakers, in particular for performing an OFO cycle (open, rapid close, open again).
A mechanical drive mechanism is described, for example, in U.S. Pat. No. 4,240,300.
At present, the drive mechanisms available deliver energy up to a maximum of about 3000 Joules. Recent circuit breakers require drive mechanisms that deliver much larger quantities of energy, about ten times the maximum energies currently available.
Research, development, and manufacture of a new drive mechanism of such high power would require considerable investment.
An object of the invention is to use a presently-available low power drive mechanism of very low cost and to adapt it to driving a high power circuit breaker.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
The Go Timer is perfect for interval training, which combines intense timed work periods with short, precise periods of rest. This method of training is clinically proven to increase your metabolic rate and torch fat. Use the Go Timer for all types of cross-training and interval workouts. Maximize results in minimal time while improving anaerobic and cardio endurance.
|
{
"pile_set_name": "Pile-CC"
}
|
---
abstract: 'Polar codes are recursive general concatenated codes. This property motivates a recursive formalization of the known decoding algorithms: Successive Cancellation, Successive Cancellation with Lists and Belief Propagation. This description allows an easy development of the first two algorithms for arbitrary polarizing kernels. Hardware architectures for these decoding algorithms are also described in a recursive way, both for Arikan’s standard polar codes and for arbitrary polarizing kernels.'
author:
- 'Noam Presman and Simon Litsyn[^1]'
bibliography:
- 'IEEEabrv.bib'
- 'bibTexPolar.bib'
title: Recursive Descriptions of Decoding Algorithms and Hardware Architectures for Polar Codes
---
Introduction
============
Polar codes were introduced by Arikan [@Arikan] and provided a scheme for achieving the symmetric capacity of binary memoryless channels (B-MC) with polynomial encoding and decoding complexities. Arikan used the so-called $(u+v,v)$ construction, which is based on the following linear kernel $$G_2 = \left(
\begin{array}{cc}
1 & 0 \\
1 & 1 \\
\end{array}
\right).$$ In this scheme, a $2^n\times2^n$ matrix, $G_2^{\bigotimes n}$, is generated by taking the Kronecker power of $G_2$. An input vector $\bf u$ of length $N=2^n$ is transformed to an $N$ length vector $\bf x$ by multiplying a certain permutation of the vector $\bf u$ by $G_2^{\bigotimes n}$. The vector $\bf x$ is transmitted through $N$ independent copies of the memoryless channel, $W$. This results in $N$ new (dependent) channels between the individual components of $\bf u$ and the outputs of the channels. Arikan showed that these channels exhibit the phenomenon of polarization under Successive Cancellation (SC) decoding. This means that as $n$ grows, a proportion $I(W)$ (the symmetric channel capacity) of the channels become clean channels (i.e. their capacity approaches $1$) and the rest of the channels become completely noisy (i.e. their capacity approaches $0$). Arikan showed that the SC decoding algorithm has time and space complexity of $O(N\cdot \log(N))$ (the same complexity also holds for the encoding algorithm). Furthermore, it was shown [@Arikan2] that asymptotically in the block length $N$, the block error probability of this scheme decays to zero like $O(2^{-\sqrt{N}})$.
Generalizations of Arikan’s code structures were soon to follow. Korada *et al.* considered binary and linear kernels [@Korada]. They showed that a binary linear kernel is polarizing if and only if its corresponding generating matrix is upper-triangular, and analyzed their rate of polarization, by introducing the notion of kernel exponent. Mori and Tanaka considered the general case of a mapping $g(\cdot)$, which is not necessarily linear and binary, as a basis for channel polarization constructions [@Mori2010]. They gave sufficient conditions for polarization and generalized the exponent for these cases. They further showed examples of linear and non-binary Reed-Solomon codes and Algebraic Geometry with exponents that are far better than the exponents of the known binary kernels [@MoriandTanka3]. The authors of this correspondence gave examples of binary but non-linear kernels having the optimal exponent per their kernel dimensions [@PrShLi2]. All of these structures were having homogenous kernels, meaning that the alphabet of their inputs and their outputs were the same. The authors of this correspondence considered the case that some of the inputs of a kernel may have different alphabet than the rest of the inputs [@Presman2011]. This results in the so-called mixed kernel structure, that have demonstrated good performance for finite length codes in many cases. A further generalization of the polar code structure was suggested by Trifonov [@Trifonov2011], in which the outer polar codes were replaced by suitable codes along with their appropriate decoding algorithms. We note here, that the representation of polar codes as instances of general concatenated codes (GCC) is fundamental to this correspondence, and we elaborate on it in the sequel.
Generalizations and alternatives to SC as the decoding algorithm were also studied. Tal and Vardy introduced the Successive Cancellation List (SCL) decoder [@Tal11; @Tal2012]. In this algorithm, the decoder considers up to $M$ concurrent decoding paths at each one of its stages, where $M$ is the size of the list. At the final stage of the algorithm, the most likely result is selected from the list. The asymptotic time and space complexity of this decoder are the same as those of the standard SC algorithm, multiplied by $M$. Furthermore, the introduction of a cyclic redundancy check code (CRC) as an outer code results in a scheme with an excellent error-correcting performance, which is sometimes comparable with state-of-the-art schemes (see e.g. [@Tal2012 Section V]). Bonik *et al.* suggested using a separate CRC and a different list size for each outer code in the GCC structure of the polar code. This approach seems to give better results compared to the standard list approach with the same average list size. Finally, Li *et al.* [@Li2012] suggested an iterative SCL-with-CRC algorithm in which the decoder doubles the list size and restarts the algorithm if, at the end of the SCL algorithm, no result satisfies the CRC. Here again, excellent performance is achieved with a limited average list size, and the scheme outperforms Tal and Vardy's original approach. Note, however, that here the average time and space complexity (rather than the worst case complexity) is the basis for comparison between the approaches.
Belief-Propagation is an alternative to the SC decoding algorithm. This is a message passing iterative decoding algorithm that operates on the normal factor graph representation of the code. It is known to outperform SC over the Binary Erasure Channel (BEC) [@Hussami2009], and seems to have good performance on other channels as well [@Hussami2009; @Arikan3].
Leroux *et al.* considered efficient hardware implementations of the SC decoder for the $(u+v,v)$ polar code [@Leroux10; @Leroux2012]. They gave an explicit design of a "line decoder" with $N/2$ processing elements and $O(N)$ memory elements. Their work contains an efficient approximate min-sum decoder and a discussion of a fixed-point implementation. Their design was verified by ASIC synthesis. Pamuk considered a hardware design of a BP decoder tailored for an FPGA implementation [@Pamuk2011].
The goal of this paper is to emphasize the formalization of polar codes as recursive GCCs and the implication of this structure on the decoding algorithms. The main contributions of this correspondence are as follows: 1) Formalizing Tal and Vardy's SCL as a recursive algorithm, and thereby generalizing it to arbitrary kernels. 2) Formalizing Leroux *et al.*'s SC line decoder and generalizing it to arbitrary kernels. 3) Defining a BP decoder with a GCC schedule, and suggesting a BP line architecture for it.
The paper is organized as follows. In Section \[sec:Prelim\], we describe polar code kernels as the generating building blocks of polar codes. We then elaborate on the fact that polar codes are examples of recursive GCC structures. This fundamental notion is the motivation for formalizing the decoding algorithms in a recursive fashion in Section \[sec:RecDescOfDecAlgor\]. Specifically, we do this for the standard SC, the SCL (both for Arikan's kernels and arbitrary ones) and BP (for Arikan's kernel using the GCC schedule). These formalizations lay the ground for the hardware architectures of the decoding algorithms in Section \[sec:HrdwreArikConstr\]. Specifically, we restate Leroux *et al.*'s SC pipeline and SC line decoders, and introduce a line decoder for the GCC schedule of the BP algorithm. Finally, in Section \[sec:HardArchiForOthKer\], we consider generalizations of these architectures to arbitrary kernels.
Preliminaries {#sec:Prelim}
=============
Throughout, we use the following notation. Vectors are denoted by bold letters, for example ${\bf u}$. For $i\geq j$, let ${\bf
u}^i_j=(u_j,...,u_i)$ be the sub-vector of a vector ${\bf u}$, of length $i-j+1$ (if $i<j$, we say that ${\bf
u}^i_j=()$, the empty vector, of length $0$).
In this paper we consider kernels that are based on bijective transformations over a field $F$. A channel polarization kernel of dimension ${\ell}$, denoted by $g(\cdot)$, is a mapping $$g:F^{{\ell}}\rightarrow F^{{\ell}}.$$ This means that $g({\bf u})={\bf x}, \,\,\,\, {\bf u}, {\bf x}
\in F^{{\ell}}$. Denote the output components of the transformation by $$g_i({\bf u})=x_i \,\,\,\ 0 \leq i \leq \ell-1,$$
We note that this type of kernel is referred to as a *homogeneous kernel*, because its $\ell$ input coordinates and $\ell$ output coordinates are over the same alphabet $F$.
A homogeneous kernel $g(\cdot)$ may generate a polar code of length $\ell^m$ by inducing a larger mapping from it, in the following way [@Mori2010; @Presman2011].
\[def:constructG2\] Given a transformation $g(\cdot)$ of dimension ${\ell}$, we construct a mapping $g^{(m)}(\cdot)$ of dimension ${\ell}^m$ (i.e. $g^{(m)}(\cdot):F^{{\ell}^m}\rightarrow F^{{\ell}^m}$) in the following recursive fashion. $$g^{(1)}({\bf u}_0^{\ell-1})=g({\bf u}_0^{\ell-1})\,\,\,;$$

$$g^{(m)}=\Big[ g^{(1)}\left(\gamma_{0,0}, \gamma_{1,0}, \gamma_{2,0}, \ldots, \gamma_{\ell-1,0}\right),$$ $$\,\,\,\,\,\,\,g^{(1)}\left(\gamma_{0,1}, \gamma_{1,1}, \gamma_{2,1}, \ldots, \gamma_{\ell-1,1}\right),\ldots,$$ $$\,\,\,\,\,\,\,g^{(1)}\left(\gamma_{0, {\ell}^{m-1}-1}, \gamma_{1, {\ell}^{m-1}-1}, \ldots, \gamma_{\ell-1,{\ell}^{m-1}-1}\right) \Big],$$ where $$\gamma_{i,j}=g_j^{(m-1)}\left({\bf u}_{ i \cdot {\ell}^{m-1}}^{(i+1)\cdot {\ell}^{m-1}-1}\right),
\,\,\,\,\, 0\leq i\leq {\ell}-1, \,\,\,\,\,\, 0 \leq j \leq
{\ell}^{m-1}-1.$$
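To make the recursion concrete, here is a minimal Python sketch of Definition \[def:constructG2\] (the function names and the representation of $g(\cdot)$ as a function on sequences are our own illustrative choices):

```python
def gcc_encode(g, u, l, m):
    """Apply g^(m) from Definition [def:constructG2].
    g: the dimension-l kernel, mapping a length-l sequence to a length-l sequence
    u: the input vector, of length l**m."""
    if m == 1:
        return list(g(u))
    step = l ** (m - 1)
    # encode the l outer codewords (the rows of the GCC matrix)
    rows = [gcc_encode(g, u[i * step:(i + 1) * step], l, m - 1) for i in range(l)]
    out = []
    for j in range(step):  # apply the inner kernel g^(1) to column j
        out += list(g([rows[i][j] for i in range(l)]))
    return out
```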
General Concatenated Codes (GCC)[^2] are error correcting codes that are generated by a construction technique introduced by Blokh and Zyabolov [@Blokh1974] and Zinoviev [@Zinoviev1976]. In this construction, we have $\ell$ outer codes $\left\{\mathcal{C}_r\right\}_{r=0}^{\ell-1}$, where $\mathcal{C}_r$ is an $N_{out}$ length code of size $M_r$ over alphabet $F_r$. We also have an inner code of length $N_{in}$ and size $\prod_{r=0}^{\ell-1}|F_r|$ over alphabet $F$, with a nested encoding function $\phi : F_0\times F_1 \times ... \times F_{\ell-1} \rightarrow F^{N_{in}}$. The GCC generated by these components is a code of length $N_{out}\cdot N_{in}$ and of size $\prod_{r=0}^{\ell-1}M_r$. It is created by taking an $\ell\times N_{out}$ matrix, in which the $r^{th}$ row is a codeword of $\mathcal{C}_r$, and applying the inner mapping $\phi$ on each of the $N_{out}$ columns of the matrix. As Dumer describes in his survey [@DumerConcatCodes], GCCs can give good code parameters at short lengths due to a good combination of outer codes and a nested inner code. In fact, some of them give the best parameters known. Moreover, the decoding algorithms associated with them commonly utilize their structure by performing local decoding steps on the (short) outer codes and exchanging decisions via the inner code decoding.
As Arikan already noted, polar codes are examples of recursive GCCs [@Arikan Section I.D]. This observation is useful, as it allows us to formalize the construction of a large length polar code as a concatenation of several smaller length polar codes (outer codes) by using a kernel mapping (an inner code). Therefore, applying this notion to Definition \[def:constructG2\], we see that a polar code of length $\ell^m$ may be regarded as a collection of $\ell$ outer polar codes of length $\ell^{m-1}$. These codes are then joined together by applying an inner code (defined by the mapping $g^{(1)}(\cdot)$) on the outputs of the outer encoders. This idea is illustrated in Figure \[fig: def2GCC\]. In this figure, we see the $\ell$ outer codewords of length $\ell^{m-1}$ organized as the $\ell$ rows of a matrix. The inner codeword mapping is depicted as the vertical rectangle located on top of them. This is appropriate, as this mapping operates on the columns of the matrix whose rows are the outer codewords. Note that for brevity we only drew one instance of the inner mapping, but there should be $\ell^{m-1}$ instances of it, one for each column of this matrix. In the homogeneous case, the outer codes themselves are constructed in the same manner. Although the outer codes have the same structure, they are different in the general case, because they may have different sets of frozen bits.
\[Figure \[fig: def2GCC\]: the GCC structure of Definition \[def:constructG2\], with the $\ell$ outer codewords as rows and the inner kernel applied to each column.\]
Let ${\bf u}$ be an $N=2^m$ length binary vector. The vector $\bf u$ is transformed to an $N$ length vector $\bf x$ by using a bijective mapping $g(\cdot):\{0,1\}^{N}\rightarrow \{0,1\}^{N}$. The transformation is defined recursively as $$\text{for } m=1\,\,\,\,\, g^{(1)}({\bf u})=\left[u_0+u_1,u_1\right]$$ $$\label{eq:Constr}
\text{for } m>1\,\,\,\,\, g^{(m)}({\bf u})=\left[v_0,w_0,v_1,w_1,...,v_{N/2-1},w_{N/2-1} \right]={\bf x}\,\,\,\,,$$ where ${\bf v}_{0}^{N/2-1}=g^{(m-1)}\left({{\bf u}_{0}^{N/2-1}}\right)+g^{(m-1)}\left({{\bf u}_{N/2}^{N-1}}\right)$ and ${\bf w}_{0}^{N/2-1}=g^{(m-1)}\left({{\bf u}_{N/2}^{N-1}}\right)$. See also Figure \[fig:uvExample\].
\[Figure \[fig:uvExample\]: the recursive $(u+v,v)$ construction of (\[eq:Constr\]).\]
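As a quick sanity check of (\[eq:Constr\]) using the sketch of Definition \[def:constructG2\] above, one may instantiate the kernel as Arikan's $(u+v,v)$ map (again, purely illustrative):

```python
arikan = lambda u: ((u[0] + u[1]) % 2, u[1])
x = gcc_encode(arikan, [1, 0, 1, 1], l=2, m=2)
# v = g^{(1)}(1,0) + g^{(1)}(1,1) = (1,0) + (0,1) = (1,1); w = g^{(1)}(1,1) = (0,1)
# interleaved as [v_0, w_0, v_1, w_1], so x == [1, 0, 1, 1]
```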
In mixed kernel constructions the outer codes are not necessarily from the same family of polar codes. For example, if we take the first kernel $g_1(u_0,u_{1,2},u_3)={\bf x}_0^3\in\{0,1\}^4$ and define the RS kernel as $g_2(u_{0,1},u_{2,3},u_{4,5},u_{6,7})={\bf x}_0^3\in\left(\{0,1\}^2\right)^4$ [@Presman2011], then the general concatenated construction is given in Figure \[fig: mixed kernel\].
\[Figure \[fig: mixed kernel\]: the mixed kernel GCC construction combining $g_1(\cdot)$ and $g_2(\cdot)$.\]
Now, note that using the $g_2^{(m)}$ mapping over a binary channel is like using a concatenated scheme in which the inner code is the standard binary full space mapping. It can be observed that the mapping in Figure \[fig: mixed kernel\] has more potential for transforming between the alphabets that are used. This concept may be further generalized by replacing some of the outer polar codes with other types of codes (see e.g. Trifonov's proposal [@Trifonov2011]).
The recursive GCC structure of polar codes calls for a recursive formalization of the algorithms associated with them. These algorithms enjoy a simple and clear description, which may lead to an elegant analysis. Furthermore, in some cases this allows reuse of resources and indicates which operations may be done in parallel. The recursive encoding algorithm has already been described in Definition \[def:constructG2\]. The recursive decoding algorithms are described in the next section.
Recursive Descriptions for Decoding Algorithms of Polar Codes {#sec:RecDescOfDecAlgor}
=============================================================
In this section, we describe decoding algorithms for polar codes in a recursive framework that is induced by their recursive GCC structure. Roughly speaking, all the algorithms we consider here have a similar format. Consider the GCC structure of Definition \[def:constructG2\]. This means that we have a length $N$ code that is composed of $\ell$ layers of outer codes, denoted by $\left\{\mathcal{C}_r\right\}_{r=0}^{\ell-1}$, each one of length $N/\ell$. The decoding algorithms we consider here are composed of $\ell$ pairs of steps. Pair number $r$ is dedicated to decoding $\mathcal{C}_{r-1}$, in the following way.
STEP $2\cdot r -1$
: \
Using the previous steps, prepare the inputs to the decoder of code $\mathcal{C}_{r-1}$.
STEP $2\cdot r$
: \
Call the decoder of code $\mathcal{C}_{r-1}$ on the input you have prepared.
Process the output of this decoder together with the outputs of the previous steps.
Typically, the codes $\left\{\mathcal{C}_r\right\}_{r=0}^{\ell-1}$ are polar codes of length $N/\ell$, thereby creating the recursive structure of the decoding algorithm.
It should be noted that the above decoding format is quite common for decoding algorithms of GCCs; see, for example, the decoding algorithms in Dumer's survey on GCCs [@DumerConcatCodes]. In addition, recursive decoding algorithms for Reed-Muller (RM) codes, utilizing their Plotkin $(u+v,v)$ recursive GCC structure, were extensively studied by Dumer [@Dumer2006; @Dumer2006b] and are closely related to the algorithms we present here. Indeed, Dumer's simplified decoding algorithm for RM codes [@Dumer2006b Section IV] is the SC decoding for Arikan's structure that we describe in Subsection \[sec:recSCDec\].
The algorithms we describe in a recursive fashion are the SC (Subsection \[sec:recSCDec\]), Tal and Vardy's SCL (Subsection \[sec:SCListDecoding\]) and BP (Subsection \[sec:BP\]). For all of these algorithms, we first consider Arikan's $(u+v,v)$ code. For the first two algorithms we also provide generalizations to other kernels, both homogeneous and mixed. We note that, when possible, we prefer that the inputs to the algorithm and the internal computations are interpreted as log likelihood ratios (llrs). Thus, the SC algorithms and the BP are described in this manner, but in SCL decoding we use likelihoods instead of llrs. Furthermore, in our discussion we do not consider how to efficiently compute these quantities. In some cases, especially with large kernels or large alphabet sizes, these computations pose a computational challenge. Approaches to address this challenge include efficient decoding algorithms (such as variants of the Viterbi algorithm) and approximations of the computations (for example, the min-sum approximation that Leroux *et al.* used [@Leroux2012], or the near Maximum Likelihood (ML) decoding algorithms that were used by Trifonov [@Trifonov2011]).
A Recursive Description of the SC Algorithm {#sec:recSCDec}
-------------------------------------------
We begin by considering the SC decoder for Arikan's $(u+v,v)$ construction, and then generalize it to arbitrary kernels. First, let us describe the decoding algorithm for a length $N=2$ code, i.e. for the basic kernel $g^{(1)}(u,v)=(u+v,v)\equiv(a,b)$. We get as input $[\lambda_a,\lambda_b]$, the log likelihood ratios (llrs) of the channel outputs ($\lambda_{a}$ corresponds to the first output of the channel and $\lambda_{b}$ to the second). The algorithm has four steps.
STEP I
: \
Compute the llr of $u$, $L_u = 2\tanh^{-1}\left(\tanh(\lambda_a/2)\tanh(\lambda_b/2)\right)$.
STEP II
: \
Decide on $u$ (denote it by $\hat{u}$).
STEP III
: \
Compute the llr of $v$ (given the estimate $\hat{u}$): $L_v = (-1)^{\hat{u}}\cdot\lambda_a+\lambda_b$.
STEP IV
: \
Decide on $v$ (denote it by $\hat{v}$).
It should be noted that steps II and IV may be carried out based on the llrs computed in steps I and III (i.e. by their sign), or by using additional side information (for example, if $u$ is frozen, then the decision is based on its known value).
Now, to describe an SC decoder for length $N=2^{n}$, let us assume that we have already developed an SC decoder for a length $N/2$ polar code. We assume that the $N$ length decoder gets as input $N$ channel output llrs, $\{\lambda_{i}\}_{i=0}^{N-1}$, and the indices of the frozen bits. The decoder outputs the estimation of the information (unfrozen) bits and the estimation of the codeword that was sent over the channel. For convenience, we assume that the estimation of the information word is an $N$ length vector (denoted by ${\bf u}$) which also includes the values of the frozen bits. A decoder for a length $N$ polar code contains the following steps.
STEP I
: \
Partition the llr vector into pairs of consecutive llr values $\left\{(\lambda_{2i},\lambda_{2i+1}) \right\}_{i=0}^{N/2-1}$. Compute the llr input vector, ${\bf L}_0^{N/2-1}$, for the first outer code such that $$L_i= 2\tanh^{-1}\left(\tanh(\lambda_{2i}/2)\tanh(\lambda_{2i+1}/2)\right)\,\,\,\,, 0 \leq i\leq N/2 -1.$$
STEP II
: \
Give the vector ${\bf L}_0^{N/2-1}$ as an input to the polar code decoder of length $N/2$. Also provide the decoding algorithm with the indices of the frozen bits from the first half of the codeword (corresponding to the first outer code).
Assume that the decoder outputs ${\bf u}^{(0)} $ as the estimation of the information word, and ${\bf x}^{(0)}$ as the estimation of the first outer polar codeword of length $N/2$. Both of them are vectors of length $N/2$. Then, we can output ${\bf u}^{(0)}$ as ${\bf{ u}}_{0}^{N/2-1}$ (the first half of the estimated information word).
STEP III
: \
Using the input llr pairs again, together with ${\bf x}^{(0)}$ as the estimation of the first outer polar codeword, prepare the llr input vector for the second outer code, ${\bf L}_0^{N/2-1}$, such that $$L_i = (-1)^{x^{(0)}_i}\cdot \lambda_{2i}+\lambda_{2i+1}\,\,\,\,, 0 \leq i\leq N/2 -1.$$
STEP IV
: \
Give the vector ${\bf L}_0^{N/2-1}$ as an input to the polar code decoder of length $N/2$ and the indices of the frozen bits from the second half of the codeword (corresponding to the second outer code). Assume that the decoder outputs ${\bf u}^{(1)}$ as the estimation of the information word, and ${\bf x}^{(1)}$ as the estimation of the second outer polar codeword of length $N/2$. Then, we can output ${\bf u}^{(1)}$ as ${\bf u}_{N/2}^{N-1}$ (the second half of the estimated information word).
Construct the estimation of the codeword as follows ${\bf x}=\left [\left\{x^{(0)}_i+x^{(1)}_i,x^{(1)}_i\right\}_{i=0}^{N/2-1}\right]$.
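The four steps above translate almost line-by-line into code. Below is a minimal recursive sketch for the $(u+v,v)$ kernel (all names are our own illustrative choices), assuming floating-point llrs with the convention $\lambda=\ln(\Pr(y|0)/\Pr(y|1))$ and a global set of frozen indices whose values are fixed to $0$. For simplicity, the recursion bottoms out at $N=1$, where the hard decisions of steps II/IV are made.

```python
import numpy as np

def f(a, b):   # STEP I combine: llr of u from the llr pair (a, b)
    return 2 * np.arctanh(np.tanh(a / 2) * np.tanh(b / 2))

def g(a, b, u_hat):   # STEP III combine: llr of v given the estimate of u
    return ((-1) ** u_hat) * a + b

def sc_decode(llr, frozen, offset=0):
    """Recursive SC decoding of Arikan's (u+v,v) code.
    llr: channel llrs (numpy array, length a power of 2); frozen: set of frozen
    input indices (assumed frozen to 0); offset: global index of the first input."""
    N = len(llr)
    if N == 1:  # hard decision: frozen value, or the sign of the llr
        u = 0 if (offset in frozen or llr[0] >= 0) else 1
        return np.array([u]), np.array([u])
    la, lb = llr[0::2], llr[1::2]                                # per (eq:Constr) interleaving
    u0, x0 = sc_decode(f(la, lb), frozen, offset)                # STEPs I-II
    u1, x1 = sc_decode(g(la, lb, x0), frozen, offset + N // 2)   # STEPs III-IV
    x = np.empty(N, dtype=int)
    x[0::2], x[1::2] = (x0 + x1) % 2, x1
    return np.concatenate([u0, u1]), x
```

In fixed-point or hardware settings, the $\tanh$ form of STEP I is typically replaced by the min-sum approximation $\operatorname{sign}(a)\operatorname{sign}(b)\cdot\min(|a|,|b|)$, as in the implementation of Leroux *et al.* mentioned above.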
Let us now generalize this decoding algorithm to the GCC scheme with a general kernel. In this case, for a length $N$ code, we have an $\ell$ length mapping $g({\bf u})={\bf x}$ over an alphabet $F$, i.e. $g(\cdot):F^{\ell}\rightarrow F^{\ell}$. We also have at most $\ell$ outer codes $\left\{\mathcal{C}_r\right\}$, each one of length $N/\ell$. We may have fewer than $\ell$ outer codes, in case some of the inputs are glued (which results in a mixed kernel case). In this case, the outer code corresponding to the glued inputs is considered to be over an input alphabet of larger size. We assume that each outer code has a decoding algorithm associated with it. This decoding algorithm is assumed to get as input the "channel" observations on the outer code symbols (usually manifested as probability matrices, or llr vectors). If the outer code is a polar code, then this algorithm should also get the indices of the frozen bits of the outer code. We require that the algorithm outputs its estimation of the information vector and its corresponding outer code codeword. Assuming that we already know the input symbols ${\bf u}_{0}^{k}$, computing the llr vector $L(\cdot)$ corresponding to input number $k+1$ of the transformation is done according to the following rule. $$\label{eq:genSCRule}
L(t)=\ln\left(\frac{\sum_{{\bf u}_{k+2}^{\ell-1}\in F^{\ell-k-2}} R_g\left({\bf u}_{0}^{k},0,{\bf u}_{k+2}^{\ell-1}\right)}{\sum_{{\bf u}_{k+2}^{\ell-1}\in F^{\ell-k-2}} R_g\left({\bf u}_{0}^{k},t,{\bf u}_{k+2}^{\ell-1}\right)}\right),$$ where $$R_g({\bf u}_{0}^{\ell-1}) =\exp\left(-\sum_{i=0}^{\ell-1} \lambda_i\left(g_i({\bf u})\right) \right),$$ which is proportional to the likelihood of the input ${\bf u}$ to the kernel $g(\cdot)$, and $\lambda_i(\cdot)$ is the llr function associated with the $i^{th}$ output of the kernel, ${x}_i$. Because $F$ may be non-binary, $\lambda_i(\cdot)$ and $L(\cdot)$ are assumed to be functions of llrs, that is $\lambda_i(t)=\ln\left(\frac{\Pr({ y}_i|{ x}_i = 0)}{\Pr({ y}_i|{ x}_i = t)}\right)$, for $t\in F$, where ${y}_i$ is the observation corresponding to the $i^{th}$ output of the kernel.
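To make (\[eq:genSCRule\]) concrete, the following is a minimal brute-force sketch (our own illustrative names): the kernel is passed as a function on tuples, and the per-output llrs as functions $\lambda_i(\cdot)$. The cost is $O(|F|^{\ell-k-2})$ kernel evaluations per candidate value, which is why the efficient Viterbi-type evaluations mentioned earlier matter for large kernels.

```python
import itertools, math

def kernel_input_llr(g, llrs, known, q=2):
    """Brute-force evaluation of (eq:genSCRule) for a kernel g over F = {0,...,q-1}.
    llrs[i](t) = ln(Pr(y_i | x_i = 0) / Pr(y_i | x_i = t)); known = decided u_0..u_k.
    Returns {t: L(t)} for t = 1..q-1, the llr of input u_{k+1}."""
    l = len(llrs)
    k1 = len(known)                              # position of the input being decoded
    def R(u):                                    # likelihood up to a constant factor
        x = g(u)
        return math.exp(-sum(llrs[i](x[i]) for i in range(l)))
    def marg(t):                                 # marginalize over the undecided tail
        return sum(R(tuple(known) + (t,) + tail)
                   for tail in itertools.product(range(q), repeat=l - k1 - 1))
    m0 = marg(0)
    return {t: math.log(m0 / marg(t)) for t in range(1, q)}
```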
We now describe the SC decoding algorithm. As already mentioned, because of the structure of the code, the decoding algorithm is composed of pairs of steps, such that pair $r$ deals with outer code $r-1$, where $1\leq r \leq \ell$. As a preparation step, we partition the decoder's $N$ length input llr vector ${\bf \lambda}(\cdot)$ into $N/\ell$ vectors, each of length $\ell$, denoted by ${\bf \lambda}^{(m)}(\cdot)$, such that $${ \lambda}^{(m)}_i(\cdot) = \lambda_{m\cdot\ell+i}(\cdot) \,\,\,\,\, 0 \leq m \leq N/{\ell}-1,\,\,\,\,\,0\leq i\leq \ell-1.$$ The $\ell$ length vector ${\bf \lambda}^{(m)}(\cdot)$ is associated with the output symbols corresponding to the $m^{th}$ symbol of the outer codes (transformed by the kernel $g(\cdot)$). We denote the information word given by the decoder of the $m^{th}$ outer code by ${\bf u}^{(m)}$ and its corresponding codeword by ${\bf x}^{(m)}$; both are of length $N/{\ell}$. For $1 \leq r \leq \ell$, the algorithm consists of the following pair of steps.
STEP $2\cdot r-1$
: \
Using the results on the outer-codewords of the previous steps i.e. ${\bf x}^{(m)}$, for $0\leq m\leq r-2$, prepare the $N/{\ell}$ length llr input vector ${\bf L}(\cdot)$ for outer code number $r-1$. To do that, for $0 \leq j \leq N/{\ell}-1$, compute $L_{j}(\cdot)$ using (\[eq:genSCRule\]) with $\left\{{ x}^{(m)}_j\right\}_{0\leq m\leq r-2}$ as the estimated inputs to the transformation.
STEP $2\cdot r$
: \
Give the llr vector ${\bf L}(\cdot)$ as an input to the decoder of outer code number $r-1$. If this is a polar code decoder of length $N/\ell$, then also supply the indices of the frozen symbols in the range $\left[(r-1)\cdot N/\ell,r\cdot N/\ell-1\right]$. The decoder outputs ${{\bf u}}^{(r-1)}$ as the estimation of the information word and ${\bf x}^{(r-1)}$ as the estimation of the outer codeword. Both of these vectors are of length $N/{\ell}$ symbols.
After step $2\cdot \ell$, the decoder outputs its estimation of the information word by concatenating the information parts generated by all the outer code decoders, i.e. ${\bf u} = \left[ {\bf u}^{(0)},{\bf u}^{(1)},...,{\bf u}^{(\ell -1)}\right]$. The estimation of the codeword, ${\bf x}$, is obtained by applying the transformation $g(\cdot)$ on the columns of the matrix whose rows are $\left\{{\bf x}^{(m)}\right\}_{m=0}^{\ell-1}$, that is $${\bf x}_{\ell\cdot i}^{\ell\cdot(i+1)-1} = g\left({ x}^{(0)}_i, { x}^{(1)}_i,...,{ x}^{(\ell-1)}_i \right),\,\,\,\,\,0 \leq i\leq N/{\ell}-1.$$
The base case of the recursion, i.e. the decoder for a length $N=\ell$ polar code, is a simple generalization of Arikan's SC decoder for the length $N=2$ code. The idea is to successively estimate the input symbols of the transformation $g(\cdot)$ using (\[eq:genSCRule\]). We decide on the symbol ${ u}_i$ using the llr generated by (\[eq:genSCRule\]), in which our previous decisions are taken as known values. If $u_i$ is frozen, we skip the calculation of (\[eq:genSCRule\]) and decide on its known value.
In case we have a mixed kernel construction, the generalization is straightforward. Let us assume that we have glued the symbols ${ u}_1$ and ${ u}_2$ into a new symbol ${ u}_{1,2} \in F^{2}$. In this case, we treat these two symbols as one entity, and consider the outer code associated with them, denoted ${\mathcal{C}}_{1,2}$, as an $N/{\ell}$ length code over the alphabet $F^{2}$. The only change in the decoding algorithm is in the pair of steps corresponding to this "glued" outer code. For the first step in the pair, we need to compute the $N/{\ell}$ length llr vector ${\bf L}(\cdot,\cdot)$ that serves as input to the decoder of ${\mathcal{C}}_{1,2}$. Here, each llr function in the vector must be a function of both ${ u}_1$ and ${ u}_2$. Equation (\[eq:genSCRule\]) is updated accordingly. $$\label{eq:genSCRule2}
L(t_1,t_2)=\ln\left(\frac{\sum_{{\bf u}_{3}^{\ell-1}\in F^{\ell-3}} R_g\left({ u}_{0},0,0,{\bf u}_{3}^{\ell-1}\right)}{\sum_{{\bf u}_{3}^{\ell-1}\in F^{\ell-3}} R_g\left({ u}_{0},t_1,t_2,{\bf u}_{3}^{\ell-1}\right)}\right).$$ The second step of the pair remains unchanged.
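The same brute-force evaluation extends to glued symbols; a sketch of (\[eq:genSCRule2\]), under the same illustrative conventions as above, is:

```python
import itertools, math

def glued_input_llr(g, llrs, u0, q=2):
    """Brute-force evaluation of (eq:genSCRule2): llrs of the glued pair (u_1, u_2),
    given the decided u_0. Same conventions as kernel_input_llr above."""
    l = len(llrs)
    def R(u):
        x = g(u)
        return math.exp(-sum(llrs[i](x[i]) for i in range(l)))
    def marg(t1, t2):
        return sum(R((u0, t1, t2) + tail)
                   for tail in itertools.product(range(q), repeat=l - 3))
    m0 = marg(0, 0)
    return {(t1, t2): math.log(m0 / marg(t1, t2))
            for t1 in range(q) for t2 in range(q) if (t1, t2) != (0, 0)}
```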
A Recursive Description of the SCL Algorithm {#sec:SCListDecoding}
--------------------------------------------
Tal and Vardy introduced an efficient SCL decoder [@Tal2012]. We give here a recursive description of this algorithm. The algorithm requires comparing the likelihoods of different decoding possibilities; therefore, we need to assume that the inputs to the algorithm, as well as its internal computations, are interpreted as likelihoods instead of llrs. Note that if the decoding list is of size $1$, then the formulation we give below reduces to the SC decoder we described in the previous subsection (with the only difference that we use likelihoods instead of llrs to describe the computations).
We also note that here we only describe the algorithm that generates the list. At the end of the algorithm, the most likely element of the list should be given as output. If there is an outer CRC, only outputs that agree with the CRC should be considered. The notion of likelihood normalization that was considered by Tal and Vardy [@Tal2012 Algorithm 14] to avoid floating-point underflow is also applicable here. These two issues and their generalizations are not further discussed in this paper.
The SCL decoding algorithm for a length $N$ polar code with a list of size $M$ gets as input the following structures.
- Two likelihood matrices ${\bf \Pi^{(0)}}$ and ${\bf \Pi^{(1)}}$ of $M\times N$ dimension, which represent $M$ arrays of conditional probability values (each array of length $N$) - each one corresponds to an input option that the decoder should consider. We refer to these input options as *models*. The plurality of models exists because, at any given point in the list decoding algorithm, we allow $M$ options for past decisions on the symbols of the information word (these options form the list). Each one of these options induces a different statistical model, in which we assume that the information sub-vector associated with it is the one that was transmitted. We have ${ \Pi}^{(b)}_{i,j} = \Pr({ Y}_j^{(i)}={ y}_j^{(i)}|{ V }_j=b)$, where ${ Y}_j^{(i)}$ is the measurement of the $j^{th}$ channel ${ V }_j \rightarrow { Y}_j $ of the $i^{th}$ option in the list and $b\in \{0,1\}$.
- A marker $\rho_{in}$ indicating how many rows in ${\bf \Pi^{(0)}}$ and ${\bf \Pi^{(1)}}$ are occupied. The algorithm supports decoding of $\rho_{in} \in [1,M]$ input models.
- The vector of the indices of the frozen bits.
The algorithm outputs the following structures.
- A matrix $ {\bf U} $ of $M\times N$ dimension, which represents $M$ arrays of information values (each array of length $N$) - this is the list of the possible information words that the decoder estimated.
- A matrix $ {\bf X} $ of $M\times N$ dimension, which represents $M$ arrays of codewords (each array of length $N$) - this is the list of codewords that correspond to the information words in ${\bf U}$.
- An indicator vector ${\bf s}_{0}^{M-1}$, that indicates for each row in ${ \bf U}$ and ${\bf X}$ from which row in the input ${\bf \Pi^{(0)}}$ and ${\bf \Pi^{(1)}}$ it originated (i.e. it refers to the statistical model that was assumed when estimating this row).
- A marker $\rho_{out}$ indicating how many rows in $ {\bf U}$ or $ {\bf X} $ are occupied.
For the basic length $N=2$ case, the algorithm operates as follows.
STEP I
: \
For each of the $\rho_{in}$ occupied rows of ${ \Pi}^{(0)}$ and ${ \Pi}^{(1)}$ compute ${ P}^{(0)}_{i}=\frac{1}{2}\left({ \Pi}^{(0)}_{i,0}\cdot{ \Pi}^{(0)}_{i,1}+{ \Pi}^{(1)}_{i,0}\cdot{ \Pi}^{(1)}_{i,1}\right)$ and ${ P}^{(1)}_{i}=\frac{1}{2}\left({ \Pi}^{(0)}_{i,0}\cdot{ \Pi}^{(1)}_{i,1}+{ \Pi}^{(1)}_{i,0}\cdot{ \Pi}^{(0)}_{i,1}\right)$, for $0\leq i \leq \rho_{in}-1$.
STEP II
: \
Concatenate the two vectors into one $2\cdot \rho_{in}$ length vector, ${\bf P} = [{\bf P}^{(0)} , {\bf P}^{(1)}]$.
Let $\tilde {\bf P}$ be a vector that contains the $\rho = \min\{2\cdot \rho_{in}, M \}$ largest values of $\bf P$. Let ${\bf s}^{(0)}, {\bf u}^{(0)}$ be $\rho$ length column vectors corresponding to $\tilde{\bf P}$, such that the $i^{th}$ element of $\tilde{\bf P}$ is element number $ { s}^{(0)}_i$ in the vector ${\bf P}^{({ u}^{(0)}_{i})}$. This element originated from model number $ { s}^{(0)}_i$, which means that it was computed assuming that row number $ { s}^{(0)}_i$ of ${\bf \Pi}^{(0)}$ and ${\bf \Pi}^{(1)}$ was the statistical model.
If $u$ is frozen (without loss of generality assume that it is set to the 0 value), then steps I and II should be skipped and ${\bf s}^{(0)} = [0,1,...,\rho_{in}-1]$, ${\bf u}^{(0)} = {\bf 0}$, and $\rho = \rho_{in}$.
STEP III
: \
Generate two $\rho$ length vectors, ${ \bf P}^{(0)}$ and ${ \bf P}^{(1)}$. For each of the $\rho$ occupied rows of ${\bf s}^{(0)}$ and ${\bf u}^{(0)}$ compute ($ i\in [0,\rho-1]$) $${P}_{i}^{(0)} = \frac{1}{2}\cdot\left\{
\begin{array}{ll}
{ \Pi}^{(0)}_{{s}^{(0)}_i,0}\cdot{ \Pi}^{(0)}_{{ s}^{(0)}_i,1}, & \hbox{ ${ u}^{(0)}_{i}=0$;} \\
{ \Pi}^{(1)}_{{ s}^{(0)}_i,0}\cdot{ \Pi}^{(0)}_{{ s}^{(0)}_i,1}, & \hbox{ ${ u}^{(0)}_{i}=1$.}
\end{array}
\right.$$

$${ P}_{i}^{(1)} = \frac{1}{2}\cdot\left\{
\begin{array}{ll}
{ \Pi}_{{ s}^{(0)}_i,0}^{(1)}\cdot{ \Pi}_{{ s}^{(0)}_i,1}^{(1)}, & \hbox{ ${ u}^{(0)}_{i}=0$;} \\
{ \Pi}_{{ s}^{(0)}_i,0}^{(0)}\cdot{ \Pi}^{(1)}_{{ s}^{(0)}_i,1}, & \hbox{ ${ u}^{(0)}_{i}=1$.}
\end{array}
\right.$$
STEP IV
: \
Concatenate the two vectors into one $2\cdot \rho $ length vector, ${\bf P} = [{\bf P}^{(0)} , {\bf P}^{(1)}]$.
Let $\tilde {\bf P}$ be a vector that contains the $\rho_{out} = \min\{2\cdot \rho, M \}$ largest values of $\bf P$. Let ${\bf s}^{(1)}, {\bf u}^{(1)}$, be $\rho_{out}$ length column vectors corresponding to $\tilde{\bf P}$, such that the $i^{th}$ element of $\tilde {\bf P}$ is element number ${ s}^{(1)}_i$ of the vector ${ \bf P}^{({ u}^{(1)}_{i})}$.
If the second bit is frozen (without loss of generality assume that it is set to the value $0$), then steps III and IV should be skipped and ${\bf s}^{(1)} = [0,1,\ldots,\rho-1],\,\,{\bf u}^{(1)}= {\bf 0}, \rho_{out} = \rho$.
Output:
- $\rho_{out}$
- ${ s}_i = { s}^{(0)}_{\sigma(i)}$, where $\sigma(i) = { s}^{(1)}_i$, for $i\in [0,\rho_{out}-1]$
- row $i$ of ${{\bf U}}$ is $[{ u}^{(0)}_{\sigma(i)} , { u}^{(1)}_{i}]$
- row $i$ of ${{\bf X}}$ is $[{ u}^{(0)}_{\sigma(i)}+{ u}^{(1)}_{i} , { u}^{(1)}_{i}]$ (the first-bit decisions are re-ordered by $\sigma$, so that each output row is internally consistent)
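To make the four steps above concrete, here is a minimal Python sketch of the $N=2$ base case (the function name `scl_decode_n2` and the array conventions are ours, and likelihood normalization is omitted for brevity):

```python
import numpy as np

def scl_decode_n2(Pi0, Pi1, rho_in, M, frozen=(False, False)):
    """Sketch of the N = 2 SCL base case for the (u+v, v) kernel.

    Pi0[i, j], Pi1[i, j]: Pr(y_j | v_j = 0) and Pr(y_j | v_j = 1) under model i.
    Returns (U, X, s, rho_out) as described in the text.
    """
    Pi0, Pi1 = np.asarray(Pi0)[:rho_in], np.asarray(Pi1)[:rho_in]
    # STEP I: likelihoods of the first information bit u, per model.
    P0 = 0.5 * (Pi0[:, 0] * Pi0[:, 1] + Pi1[:, 0] * Pi1[:, 1])
    P1 = 0.5 * (Pi0[:, 0] * Pi1[:, 1] + Pi1[:, 0] * Pi0[:, 1])
    # STEP II: keep the rho best of the 2 * rho_in candidates (u = 0 or 1).
    if frozen[0]:
        rho, s0, u0 = rho_in, np.arange(rho_in), np.zeros(rho_in, dtype=int)
    else:
        P = np.concatenate([P0, P1])
        rho = min(2 * rho_in, M)
        best = np.argsort(P)[::-1][:rho]
        s0, u0 = best % rho_in, best // rho_in  # source model, value of u
    # STEP III: likelihoods of the second bit v, per surviving path.
    Q0 = 0.5 * np.where(u0 == 0, Pi0[s0, 0], Pi1[s0, 0]) * Pi0[s0, 1]
    Q1 = 0.5 * np.where(u0 == 0, Pi1[s0, 0], Pi0[s0, 0]) * Pi1[s0, 1]
    # STEP IV: prune again to at most M paths.
    if frozen[1]:
        rho_out, s1, u1 = rho, np.arange(rho), np.zeros(rho, dtype=int)
    else:
        Q = np.concatenate([Q0, Q1])
        rho_out = min(2 * rho, M)
        best = np.argsort(Q)[::-1][:rho_out]
        s1, u1 = best % rho, best // rho
    s = s0[s1]                                     # original model per output path
    U = np.stack([u0[s1], u1], axis=1)             # information words [u, v]
    X = np.stack([(u0[s1] + u1) % 2, u1], axis=1)  # codewords [u + v, v]
    return U, X, s, rho_out
```

Sorting is used here only for clarity; a linear-time selection of the $\rho$ largest values suffices, which is what the $O(M)$ base-case cost quoted below assumes.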
Now, to describe an SCL decoder for a length $N=2^{n}$ polar code, let us assume that we have already developed an SCL decoder for the length $N/2$ polar code. A decoder for the length $N$ polar code then contains the following steps.
STEP I
: \
Prepare the probability transition matrices for the first polar outer code decoder. Specifically, generate two matrices ${\bf P}^{(b)}$ of dimension $M\times N/2$, $b\in \{0,1\}$, such that for $0 \leq i \leq \rho_{in}-1\,\,\,0\leq j\leq N/2-1$ $${ P}^{(0)}_{i,j}=\frac{1}{2}\left({ \Pi}^{(0)}_{i,2\cdot j}\cdot{ \Pi}^{(0)}_{i,2\cdot j+1}+{ \Pi}^{(1)}_{i,2\cdot j}\cdot{ \Pi}^{(1)}_{i,2\cdot j+1}\right)$$ and $${ P}^{(1)}_{i,j}=\frac{1}{2}\left({ \Pi}^{(0)}_{i,2\cdot j}\cdot{ \Pi}^{(1)}_{i,2\cdot j+1}+{ \Pi}^{(1)}_{i,2\cdot j}\cdot{ \Pi}^{(0)}_{i,2\cdot j+1}\right)$$
STEP II
: \
Give the $M\times \frac{N}{2}$ matrices ${\bf P}^{(0)}$ and ${ \bf P}^{(1)}$, the frozen bits from the first half of the codeword and $\rho_{in}$ as the number of elements in the list as inputs to the polar code decoder of length $N/2$. Assume that the decoder outputs ${\bf U}^{(0)}$ and ${\bf X}^{(0)}$ as the list of estimations of the information word and the outer polar codeword of length $N/2$, respectively. Both of these structures are matrices of dimension $M\times N/2$. The decoder also outputs ${\bf s}^{(0)}$ as the source indicator vector (of length $M$), and $\rho$ as the size of the list.
STEP III
: \
Prepare the input matrices for the decoder of the second outer polar code of length $N/2$. Specifically, generate two matrices ${\bf P}^{(b)}$ of dimension $M\times N/2$, $b\in \{0,1\}$, such that for $0 \leq i \leq \rho-1,\,\,\,0\leq j\leq N/2-1$ $${ P}^{(0)}_{i,j}=\frac{1}{2}\cdot\left\{
\begin{array}{ll}
{ \Pi}^{(0)}_{{ s}^{(0)}_i,2\cdot j}\cdot{ \Pi}^{(0)}_{{ s}^{(0)}_i,2\cdot j+1}, & \hbox{ ${ X}_{{ s}^{(0)}_i,j}^{(0)}=0$;} \\
{ \Pi}^{(1)}_{{ s}^{(0)}_i,2\cdot j}\cdot{ \Pi}^{(0)}_{{ s}^{(0)}_i,2\cdot j+1}, & \hbox{${ X}_{{ s}^{(0)}_i,j}^{(0)}=1$,}
\end{array}
\right.$$ and $${P}^{(1)}_{i,j}=\frac{1}{2}\cdot\left\{
\begin{array}{ll}
{ \Pi}^{(1)}_{ {s}^{(0)}_i,2\cdot j}\cdot{ \Pi}^{(1)}_{{ s}^{(0)}_i,2\cdot j+1}, & \hbox{ ${ X}_{{ s}^{(0)}_i,j}^{(0)}=0$;} \\
{ \Pi}^{(0)}_{{ s}^{(0)}_i,2\cdot j}\cdot{ \Pi}^{(1)}_{{ s}^{(0)}_i,2\cdot j+1}, & \hbox{${ X}_{{ s}^{(0)}_i,j}^{(0)}=1$.}
\end{array}
\right.$$
STEP IV
: \
Give these matrices ${\bf P}^{(0)}$ and ${\bf P}^{(1)}$, the vector of indices of the frozen bits from the second half of the codeword and $\rho$ (as the number of elements in the list) as inputs to the decoder of the second outer polar code of length $N/2$.
Assume that the decoder outputs ${{\bf U}}^{(1)}$ and ${{\bf X}}^{(1)}$ as the list of estimations of the information words and their corresponding outer polar codeword of length $N/2$. Both of these structures are matrices of dimension $M\times N/2$. The decoder also outputs $\bf s^{(1)}$ as the source indicator vector (of length $M$) and $\rho_{out}$ as the size of the output list.
Now, generate the outputs of the decoder ($i\in [0,\rho_{out}-1]$):
- ${ s}_{i} = { s}^{(0)}_{\sigma(i)}$, where $\sigma(i) = { s}_i^{(1)}$.
- ${\bf U}_{\rightarrow i}=[ {\bf U}^{(0)}_{\rightarrow \sigma(i)}, {\bf U}^{(1)}_{\rightarrow i}]$.
- ${\bf X}_{i,\text{even}}={\bf X}^{(0)}_{\rightarrow \sigma(i)}+{\bf X}^{(1)}_{\rightarrow i}$
- ${\bf X}_{i,\text{odd}}={\bf X}^{(1)}_{\rightarrow i}$,
where ${\bf X}_{i,\text{even}}$ (${\bf X}_{i,\text{odd}}$) are the vectors of the even (odd) indices columns of row number $i$ in the matrix ${\bf X}$, and for a matrix $\bf A$, the $i^{th}$ row is denoted by ${\bf A}_{\rightarrow i}$.
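Putting the steps together, a compact recursive sketch may look as follows (ours; it reuses `scl_decode_n2` from the base-case sketch above and, like it, omits the likelihood normalization that a practical implementation needs to avoid numerical underflow):

```python
import numpy as np

def scl_decode(Pi0, Pi1, rho_in, M, frozen):
    """Recursive SCL sketch for the (u+v, v) kernel, in the likelihood domain.

    Pi0, Pi1: likelihood arrays with rho_in occupied rows and N columns;
    frozen: length-N boolean mask, split as [first outer code | second].
    """
    Pi0, Pi1 = np.asarray(Pi0)[:rho_in], np.asarray(Pi1)[:rho_in]
    N = Pi0.shape[1]
    if N == 2:
        return scl_decode_n2(Pi0, Pi1, rho_in, M, (frozen[0], frozen[1]))
    # STEP I: combine channel pairs for the first outer code.
    P0 = 0.5 * (Pi0[:, ::2] * Pi0[:, 1::2] + Pi1[:, ::2] * Pi1[:, 1::2])
    P1 = 0.5 * (Pi0[:, ::2] * Pi1[:, 1::2] + Pi1[:, ::2] * Pi0[:, 1::2])
    # STEP II: decode the first outer code.
    U0, X0, s0, rho = scl_decode(P0, P1, rho_in, M, frozen[:N // 2])
    # STEP III: combine again, conditioned on the re-encoded codeword X0.
    Q0 = 0.5 * np.where(X0 == 0, Pi0[s0][:, ::2], Pi1[s0][:, ::2]) * Pi0[s0][:, 1::2]
    Q1 = 0.5 * np.where(X0 == 0, Pi1[s0][:, ::2], Pi0[s0][:, ::2]) * Pi1[s0][:, 1::2]
    # STEP IV: decode the second outer code.
    U1, X1, s1, rho_out = scl_decode(Q0, Q1, rho, M, frozen[N // 2:])
    # Output assembly: re-order the earlier decisions by sigma = s1.
    s = s0[s1]
    U = np.concatenate([U0[s1], U1], axis=1)
    X = np.empty((rho_out, N), dtype=int)
    X[:, ::2] = (X0[s1] + X1) % 2
    X[:, 1::2] = X1
    return U, X, s, rho_out
```

Path scores are not returned in this sketch; selecting the most likely of the $\rho_{out}$ output rows would require tracking the pruned likelihoods as well.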
Let $T(n)$ be the decoding time complexity, for length $N=2^n$ polar code. Then $T(n)= 2\cdot T(n-1) + O(M\cdot N)$, and $T(1)=O(M)$, which results in $T(n)=O(M\cdot N\cdot\log_2N)$. Similarly, the space complexity of the algorithm can be shown to be $O(M\cdot N)$.
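For completeness, unrolling the recursion makes the bound explicit: writing the combining and pruning cost at length $2^{n}$ as $c\cdot M\cdot 2^{n}$, each recursion level contributes $c\cdot M\cdot N$ operations, so $$T(n) = \sum_{k=0}^{n-2}2^{k}\cdot c\cdot M\cdot 2^{n-k}+2^{n-1}\cdot T(1)= c\cdot M\cdot N\cdot (n-1)+O(M\cdot N)=O(M\cdot N\cdot \log_2 N).$$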
The generalization of the decoding algorithm for a homogeneous kernel of dimension $\ell$ with alphabet $F$ is quite simple. Here we emphasize the principal changes from the $(u+v,v)$ algorithm. First, the only change in the input is that we should have $|F|$ input channel matrices, ${\bf \Pi}^{(b)}$, one for each $b\in F$. In the decoding algorithm, we have $\ell$ pairs of steps, such that each one is dedicated to a different outer code. Before step $2\cdot r -1$, we have decoded outer codes $\mathcal{C}_{i}$ where $0\leq i \leq r-2$. We assume that we have temporary lists ${\bf X}^{(i)}$ and ${\bf U}^{(i)}$ of the estimated codewords and their corresponding information words, which are represented by matrices of size $M\times N/{\ell}$. The $i^{th}$ matrix corresponds to the decoding of $\mathcal{C}_{i}$, $0\leq i \leq r-2$. We maintain a temporary indicator vector ${\bf s}^{(0)}$ of length $M$, such that ${\bf X}^{(i)}_{\rightarrow j}$ and ${\bf U}^{(i)}_{\rightarrow j}$ were estimated assuming model ${ s}^{(0)}_j$. We also have $\rho$ as the number of occupied elements in the list so far (at initialization, $\rho={\rho}_{in}$).
STEP $2\cdot r-1$
: \
Using the decoding results of the outer-codewords from the previous steps, i.e. ${\bf X}^{(m)}$ for $0 \leq m\leq r-2$, prepare the $N/{\ell}$ length likelihood lists $\left\{{\bf P}^{(b)}\right\}_{b\in F}$. Each list is an $M\times N/{\ell}$ matrix, and all of them will serve as input to the decoder of the $N/{\ell}$ length outer code number $r-1$. For the computation of row $i$ of ${\bf P}^{(b)}$, use the input statistical model $s^{(0)}_i$, that is, the likelihoods in rows $\left\{{\bf \Pi}^{(b)}_{\rightarrow s^{(0)}_i}\right\}_{b\in F}$. Also, as the estimated codewords of the previous outer codes, we need to use rows $\left\{{\bf X}^{(m)}_{\rightarrow i}\right\}_{0 \leq m\leq r-2}$. To prepare $\left\{{\bf P}^{(b)}\right\}_{b\in F}$ we perform computations on likelihoods (instead of llrs), which are the equivalents of step $2\cdot r -1$ in the description of the general SC decoding (Subsection \[sec:recSCDec\]).
STEP $2\cdot r$
: \
Give the matrices $\left\{{\bf P}^{(b)}\right\}_{b\in F}$, the vector of indices of the frozen symbols of outer code number $r-1$ and $\rho$ (as the number of elements in the list) as inputs to the decoder of outer polar code number $r-1$.
Assume that the decoder outputs ${{\bf U}}^{(r-1)}$ and ${{\bf X}}^{(r-1)}$ as the list of estimations of the information word and their corresponding estimations of the transmitted codeword of the outer code number $r-1$, respectively. Both of these structures are matrices of dimension $M\times N/{\ell}$. The decoder also outputs $\bf s^{(1)}$ as the model indicator vector (of length $M$) and $\rho$ as the number of occupied elements in the list.
Allocate ${ \bf s}$, a temporary vector of size $M$, and temporary matrices $\tilde{{\bf X}}^{(i)},\tilde{{\bf U}}^{(i)}$ of size $M\times N/{\ell}$, where $0\leq i \leq r-2$.
- ${ s}_{i} = { s}^{(0)}_{\sigma(i)}$, where $\sigma(i) = { s}_i^{(1)}$ and $0\leq i \leq \rho-1$.
- $\tilde{{\bf X}}^{(i)}_{\rightarrow j }={{\bf X}}^{(i)}_{ \rightarrow \sigma(j) }\,\,\,\,0 \leq i\leq r-2,\,\,\,0\leq j\leq \rho-1$
- $\tilde{{\bf U}}^{(i)}_{\rightarrow j }={{\bf U}}^{(i)}_{\rightarrow \sigma(j) }\,\,\,\,0 \leq i\leq r-2,\,\,\,0\leq j\leq \rho-1$
Copy these matrices to the internal data structures.
- ${\bf s}^{(0)} ={\bf s} $.
- ${{\bf X}}^{(i)}=\tilde{{\bf X}}^{(i)}\,\,\,\,0 \leq i\leq r-2$
- ${{\bf U}}^{(i)}=\tilde{{\bf U}}^{(i)}\,\,\,\,0 \leq i\leq r-2$
If this is step $2\cdot \ell$ (the last step), then prepare the output.
- $\rho_{out}=\rho$.
- ${ \bf s}$.
- ${\bf U} =[ {\bf U}^{(0)}; {\bf U}^{(1)};...{\bf U}^{(\ell-1)} ]$.
- ${\bf X}_{i,\ell\cdot m:{\ell\cdot(m+1)-1}}=g\left({\bf X}^{(0)}_{i,m}, {\bf X}^{(1)}_{i,m},...,{\bf X}^{(\ell-1)}_{i,m} \right),\,\,\,\,\, 0\leq m\leq N/{\ell}-1,\,\,\,0\leq i\leq \rho_{out}-1.$
Where for a matrix $\bf A$, the subvector that is composed of the columns $n_1$ to $n_2$ of the $i^{th}$ row is denoted by ${\bf A}_{i,n_1:n_2}$.
The decoder for the basic $N={\ell}$ length code also contains $\ell$ pairs of steps. The decoding is similar to the above, with the exception that instead of delivering the likelihood matrices $\left\{{\bf P}^{(b)}\right\}_{b\in F}$ (here these matrices are actually column vectors) to a decoder, we concatenate them into a vector $\tilde{{\bf P}}$ and choose the $\rho = \min\left\{ M,|F|\cdot \rho\right\}$ maximum elements from it, and generate the indicator vector ${\bf s}^{(1)}$ and the information symbols list ${\bf u}^{(r-1)}$, similarly to the case of the $N=2$ length decoder of the $(u+v,v)$ construction.
In case the kernel is mixed, the generalization is also quite easy. Let us consider the mixed example from the end of Subsection \[sec:recSCDec\]. The only changes we have in the decoding algorithm are for the pair of steps associated with the glued outer code $\mathcal{C}_{1,2}$. In step $3$ (the preparation step for this outer code), we prepare $|F|^{2}$ input matrices ${\bf P}^{(b_1,b_2)}$, for $(b_1,b_2)\in F^2$. For this, we use the equivalent of equation (\[eq:genSCRule2\]) for likelihoods (instead of llrs). The decoder of $\mathcal{C}_{1,2}$ is supposed to return a list of estimations of the information words, their corresponding codewords and the model indicator vectors. These outputs and the temporary structures are re-organized, as is done in step $2\cdot r$ of the decoding algorithm for the homogeneous kernel polar code. Note, however, that at the end of step $4$, there are three information word lists ${\bf U}^{(0)}$, ${\bf U}^{(1)}$ and ${\bf U}^{(2)}$ along with their corresponding three outer codeword lists. This is because we have decoded the glued outer code $\mathcal{C}_{1,2}$ simultaneously, which contributed ${\bf U}^{(1)}$, ${\bf U}^{(2)}$, ${\bf X}^{(1)}$ and ${\bf X}^{(2)}$ in the same decoding step.
A Recursive Description of the BP Algorithm {#sec:BP}
-------------------------------------------
BP is an alternative to SC decoding [@Arikan]. It is an iterative message-passing algorithm, whose messages are defined using Forney’s normal factor graph [@Forney01]. For general channels there is no evidence as to which algorithm is better, except for the BEC, in which BP is shown to outperform SC [@Hussami2009]. However, simulations indicate that BP outperforms SC in many cases. The order of sending the messages on the graph is called the *schedule* of the algorithm. Hussami *et al.* suggested to use a “$Z$ shape schedule” for transferring the messages [@Hussami2009 Section II.A]. Here we prefer to present a serial schedule which is induced by the GCC structure of the code.
\
We begin by describing the type of messages that are computed during the algorithm. Figure \[fig:uvNormFactGraph\] depicts the normal factor graph representation of Arikan’s kernel. We have $4$ symbol half edges denoted by $u,v,x_0$ and $x_1$. These symbols have the following functional dependencies among them: $x_0 = u+v$ and $x_1=v$. The messages and the inputs that may be sent on the graph are assumed to be llrs, and their values are taken from $\mathbb{R}\bigcup\{\pm\infty\}$. The $\infty$ and $-\infty$ are special types of llrs that indicate known values of $0$ and $1$, respectively. They are used to support the existence of the frozen bits of the polar code.
For the symbol half edges, we assume that we have $4$ input llr messages. These messages may be generated by the output of the channel, by known values associated with frozen bits, or by computations that were done in this iteration or previous ones. We denote these messages by $\mu^{(in)}_{u}$, $\mu^{(in)}_{v}$, $\mu^{(in)}_{x_0}$ and $\mu^{(in)}_{x_1}$. The algorithm computes (in due time) $4$ output llr messages, $\mu^{(out)}_{u}$, $\mu^{(out)}_{v}$, $\mu^{(out)}_{x_0}$ and $ \mu^{(out)}_{x_1}$, indicating the estimations of $u,v,x_0$ and $x_1$, respectively, by the decoding algorithm. The messages are computed using the extrinsic information principle, i.e. each message that is sent from a node on an adjacent edge is a function of all the messages that were previously sent to the node, except the message that was received over the particular edge. The nodes of the graphs are denoted by $a_0$ (the adder functional) and $e_1$ (the equality functional). Using the ideas mentioned above we have the following computation rules. $$\label{eq:BPUV1}
\mu_{e_1 \rightarrow a_0 }=f_{(=)}(\mu^{(in)}_{v},\mu^{(in)}_{x_1}),$$ $$\label{eq:BPUV2}
\mu_{a_0 \rightarrow e_1}=f_{(+)}(\mu^{(in)}_{u},\mu^{(in)}_{x_0}),$$ $$\label{eq:BPUV3}
\mu^{(out)}_{u}=f_{(+)}(\mu^{(in)}_{x_0},\mu_{e_1 \rightarrow a_0 }),$$ $$\label{eq:BPUV4}
\mu^{(out)}_{v}=f_{(=)}(\mu_{a_0 \rightarrow e_1},\mu^{(in)}_{x_1}),$$ $$\label{eq:BPUV5}
\mu^{(out)}_{x_0}=f_{(+)}(\mu_{e_1 \rightarrow a_0 },\mu^{(in)}_{u}),$$ $$\label{eq:BPUV6}
\mu^{(out)}_{x_1}=f_{(=)}(\mu_{a_0 \rightarrow e_1 },\mu^{(in)}_{v}),$$ where $f_{(=)}(z_0,z_1) = z_0+z_1$ and $f_{(+)}(z_0,z_1)=2\tanh^{-1}\left(\tanh(z_0/2)\cdot\tanh(z_1/2)\right)$. Note that $\mu_{\alpha\rightarrow\beta}$, where $\alpha,\beta\in\{e_1,a_0\}$, is the message sent from node $\alpha$ to node $\beta$. $\mu^{(out)}_{u}$ and $\mu^{(out)}_{x_0}$ are sent from $a_0$ over the half edges corresponding to symbols $u$ and $x_0$, respectively. $\mu^{(out)}_{v}$ and $\mu^{(out)}_{x_1}$ are sent from $e_1$ over the half edges corresponding to symbols $v$ and $x_1$, respectively.
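As a quick numerical illustration (a minimal sketch; the function names `f_eq` and `f_plus` are ours):

```python
import math

def f_eq(z0, z1):
    """Equality-node update: llrs simply add."""
    return z0 + z1

def f_plus(z0, z1):
    """Adder-node update: the tanh (check-node) rule."""
    return 2.0 * math.atanh(math.tanh(z0 / 2.0) * math.tanh(z1 / 2.0))

mu_v, mu_x1 = 0.8, -1.3
print(f_eq(mu_v, mu_x1))    # -0.5
print(f_plus(mu_v, mu_x1))  # about -0.441
```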
We now turn to give a recursive description of an iteration of the algorithm. The factor graph of the length $N$ code has $\log_2N$ layers. In each layer, there exist $N/2$ copies of the normal factor graph that we depicted in Figure \[fig:uvNormFactGraph\]. Their organization can be inferred from the recursive description in Figure \[fig:uvExample\]. Therefore, for each layer, we have $N/2$ realizations of the input messages, output messages and inner messages (each one corresponding to a different set of symbols and interconnections). To denote the $i^{th}$ realization of these messages, we use the notation $\mu_{\alpha\rightarrow\beta,i}$, $\mu_{\gamma,i}^{(in)}$ and $\mu_{\gamma,i}^{(out)}$, where $\alpha,\beta \in \{a_0,e_1\}$ and $\gamma\in \{x_0,x_1,u,v\}$. As before, we denote the channel llrs by the length $N$ vector $\{\lambda_{i}\}_{i=0}^{N-1}$.
STEP I
: \
Partition the llr vector into pairs of consecutive llr values $ \left\{\left(\mu_{x_0,i}^{(in)},\mu_{x_1,i}^{(in)}\right)=\left(\lambda_{2i},\lambda_{2i+1}\right) \right\}_{i=0}^{N/2-1}$.
Compute the messages $\left\{\mu_{e_1 \rightarrow a_0,i}\right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV1\]). Compute the messages $\left\{\mu_{u,i}^{(out)} \right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV3\]) (note that the two computations in this step can be combined into one).
STEP II
: \
Give the vector $\left\{\mu_{u,i}^{(out)} \right\}_{i=0}^{N/2-1}$ as an input to the polar code BP iterative decoder of length $N/2$. Also provide the indices of the frozen bits from the first half of the codeword. Assume that the decoder outputs $\left\{\mu_{u,i}^{(in)}\right\}_{i=0}^{N/2-1}$ and the estimation of the information word.
STEP III
: \
Compute the messages $\left\{\mu_{a_0 \rightarrow e_1,i}\right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV2\]). Compute the messages $\left\{\mu_{v,i}^{(out)} \right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV4\]) (note that the two computations in this step can be combined into one).
STEP IV
: \
Give the vector $\left\{\mu_{v,i}^{(out)} \right\}_{i=0}^{N/2-1}$ as an input to the polar code decoder of length $N/2$. Also provide to this decoder the indices of the frozen bits from the second half of the codeword. Assume that the decoder outputs $\left\{\mu_{v,i}^{(in)} \right\}_{i=0}^{N/2-1}$ and the estimation of the information word of the second outer polar codeword of length $N/2$.
The information part may be concatenated to the information part of step II, to generate the decision on the information word after this iteration.
Compute the messages $\left\{\mu_{e_1 \rightarrow a_0,i}\right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV1\]).
Compute the messages $\left\{\mu_{x_0,i}^{(out)}\right\}_{i=0}^{N/2-1}$ and $\left\{\mu_{x_1,i}^{(out)}\right\}_{i=0}^{N/2-1}$ using (\[eq:BPUV5\]) and (\[eq:BPUV6\]), respectively.
Any input message or inner message, unless given (by the channel output or by prior knowledge of the frozen bits), is set to $0$ before the first iteration. It is assumed that the inner messages are preserved between the iterations (see a further discussion in the sequel).
To complete the recursive description of the algorithm, we need to consider the case of the length $N=2$ code. Assume that we get $\mu_{x_0}^{(in)},\mu_{x_1}^{(in)}$ as the input values. Also, before the first iteration, initialize for $w\in \{u,v\}$ $$\mu_{w}^{(in)}=\left\{
\begin{array}{ll}
0, & \hbox{$w$ is not frozen;} \\
(-1)^b\cdot \infty, & \hbox{$w$ is frozen and equals $b$.}
\end{array}
\right.$$
STEP I
: \
Compute $\mu_{e_1 \rightarrow a_0 }$ according to (\[eq:BPUV1\]).
STEP II
: \
If $u$ is not frozen, compute $\mu_{u}^{(out)}$ according to (\[eq:BPUV3\]), and make a hard decision on this bit, based on its sign.
STEP III
: \
Compute $\mu_{a_0 \rightarrow e_1 }$ according to (\[eq:BPUV2\]).
STEP IV
: \
If $v$ is not frozen, compute $\mu_{v}^{(out)}$ according to (\[eq:BPUV4\]), and make a hard decision on it, based on its sign.
Compute $\mu_{x_0}^{(out)},\mu_{x_1}^{(out)}$ according to (\[eq:BPUV5\]), (\[eq:BPUV6\]).
We should note that $$f_{(=)}(\pm \infty,z_1)=f_{(=)}(z_0,\pm \infty)=\pm \infty$$ $$f_{(+)}(\pm \infty,z_1)=\pm z_1,\,\,\,\,f_{(+)}(z_0,\pm \infty)=\pm z_0.$$ We further note that for the $N=2$ length code, steps I and II can be combined into one operation, and similarly steps III and IV can be combined into one operation. Both of these combined steps are independent, so they may be performed in any order, or in parallel.
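The $N=2$ BP decoder can be traced end to end in a few lines (our sketch; `INF` plays the role of the $\pm\infty$ llrs of frozen bits, and the identities above are implemented explicitly in `f_plus`):

```python
import math

INF = float("inf")

def f_eq(z0, z1):
    return z0 + z1  # inf + finite = inf, as required

def f_plus(z0, z1):
    # Identities for infinite llrs: f_plus(+-inf, z) = +-z.
    if math.isinf(z0):
        return z1 if z0 > 0 else -z1
    if math.isinf(z1):
        return z0 if z1 > 0 else -z0
    return 2.0 * math.atanh(math.tanh(z0 / 2.0) * math.tanh(z1 / 2.0))

def bp_n2_iteration(mu_x0_in, mu_x1_in, mu_u_in, mu_v_in, u_frozen, v_frozen):
    """One iteration of the N = 2 BP decoder (steps I-IV of the text)."""
    u_hat, v_hat = None, None
    mu_e1_a0 = f_eq(mu_v_in, mu_x1_in)                 # STEP I
    if not u_frozen:                                   # STEP II
        u_hat = 0 if f_plus(mu_x0_in, mu_e1_a0) >= 0 else 1
    mu_a0_e1 = f_plus(mu_u_in, mu_x0_in)               # STEP III
    if not v_frozen:                                   # STEP IV
        v_hat = 0 if f_eq(mu_a0_e1, mu_x1_in) >= 0 else 1
    mu_x0_out = f_plus(mu_e1_a0, mu_u_in)
    mu_x1_out = f_eq(mu_a0_e1, mu_v_in)
    return u_hat, v_hat, mu_x0_out, mu_x1_out

# u frozen to 0 (llr +inf), v free (llr 0 before the first iteration):
print(bp_n2_iteration(mu_x0_in=1.1, mu_x1_in=-0.4, mu_u_in=INF, mu_v_in=0.0,
                      u_frozen=True, v_frozen=False))
```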
In this implementation, we assumed that there is a memory for storing messages of type $\mu_{u}^{(in)}$, $\mu_{v}^{(in)}$, $\mu_{x_0}^{(in)}$, $\mu_{x_1}^{(in)}$ and $\mu_{a_0\rightarrow e_1}$, that were previously computed. This memory is dedicated for each realization of such messages, specifically, for each layer of the graph and for each $(u+v,v)$ normal subgraph, as in Figure \[fig:uvNormFactGraph\]. Actually, for this particular schedule, excluding $\mu_{v}^{(in)}$, we do not need to save any message beyond the iteration boundary (this observation reduces the required memory consumption, as we will see in the hardware implementation). The memory consumption of the algorithm is $\Theta\left(N\cdot \log (N)\right)$. The running time is also $\Theta\left(N\cdot\log(N)\right)$, assuming no parallelism is allowed.
In each iteration, we send one instance of each of the possible messages for each $(u+v,v)$ block realization in the code, except for $\mu_{e_1\rightarrow a_0}$, for which we send two messages (for all the layers besides the last one). The full implementation may contain several iterations. The number of iterations may be fixed or set adaptively, which means that the algorithm continues until some consistency criteria are fulfilled. An example of such a criterion is that the signs of the llr estimations for all the frozen bits agree with their known values (i.e. if all the frozen bits are set to zero, then $\text{sign}\left(\mu^{(out)}_{\gamma}\right)>0$ for all the frozen bits $\gamma$). In this case, one can stop an iteration in the middle by holding a counter, in a similar way to the method that is usually used in BP decoding of LDPC codes under check-node based serial schedules (see e.g. [@Sharon07]). We note, however, that in the LDPC case, the consistency is manifested in the fact that all the parity check equations are satisfied.
In the next section we describe hardware architectures for the decoding algorithms we covered so far.
Recursive Descriptions of Hardware Architectures of Decoders for Arikan’s Construction {#sec:HrdwreArikConstr}
======================================================================================
We now turn to study hardware architectures that are inspired by the recursive decoding algorithms we presented in Section \[sec:RecDescOfDecAlgor\]. This section covers hardware architectures for Arikan’s $(u+v,v)$ construction. A generalization of this discussion to other kernels is presented in Section \[sec:HardArchiForOthKer\]. We begin with the simple SC pipeline decoder (Subsection \[sec:SCPipeUV\]), and then progress to the more efficient SC line decoder (Subsection \[sec:UVLineDecoder\]). Both of these designs were presented by Leroux *et al.* [@Leroux10; @Leroux2012] in a non-recursive fashion. We finish by considering a BP line decoder (Subsection \[sec:UVLineDecoderBP\]).
It is important to note that throughout the hardware discussion, our presentation is relatively abstract, emphasizing the important concepts and features of the recursive designs without dwelling on all the details. As such, the figures representing the block diagrams should not be considered as fully detailed specifications of the implementation, but rather as illustrations that aim to aid the reader in the task of designing the decoder.
We usually prefer to use the same notation for signals arrays and registers arrays. Let $u(0:N-1)$ be an $N$ length signals array; then its $i^{th}$ value is denoted by $u(i)$. If $v$ is a two dimensional array of $M$ rows and $N$ columns, we denote it by $v(0:M-1,0:N-1)$. Naturally, the $i^{th}$ row of this array is denoted by $v(i,0:N-1)$, and it is a one dimensional array of $N$ elements, of which the $j^{th}$ element is denoted by $v(i,j)$.
The SC Pipeline Decoder {#sec:SCPipeUV}
-----------------------
\
A block diagram of the SC pipeline decoder for Arikan’s construction is depicted in Figure \[fig: pipArikan\]. The main ingredients of the diagram are listed below.
1. Processing Element (PE) - This is the basic computation unit of the decoder. It gets as input two channel llrs, an estimate of the $u$ input for the $(u+v,v)$ mapping and a control signal, $c_u$, indicating whether to compute the llr of $u$ ($c_u=0$) or $v$ ($c_u=1$). Note that the estimate of $u$ is only needed in the latter case.
2. ${\bf \lambda}(0:N-1)$ - An array of $N$ registers holding the llrs from the channels.
3. SC decoding unit of polar code of length $N/2$ - This unit has the following inputs: $N/2$ length signals array of input llrs and a binary signals array containing the indices of the frozen bits of the code. Its outputs are $\tilde{ u}(0:N/2-1)$, which is the estimation of the transmitted information word (including the frozen bits), and $\tilde{ x}(0:N/2-1)$, which is the estimation of the transmitted codeword.
4. A register for the estimated information word ${ u}(0:N-1)$.
5. Encoding unit for generating the estimated codeword; it includes a register for the codeword ${x} (0:{N-1})$ and $N/2$ bitwise xor circuits for generating the codeword based on the output of the $N/2$ length decoder.
We note that a basic $N=2$ length decoder has only one PE, and operates according to the algorithm described in Section \[sec:Prelim\]. The algorithm for $N>2$ is based on the notion of the recursion, as we describe below.
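Behaviorally, the PE may be modeled as follows (a sketch under our naming; a hardware PE would use a fixed-point approximation of the $\tanh$ rule, e.g. the well-known min-sum simplification, rather than floating point):

```python
import math

def pe(lam0, lam1, u_est=0, c_u=0):
    """Behavioral model of the SC processing element.

    c_u = 0: compute the llr of u from the (u+v, v) pair;
    c_u = 1: compute the llr of v, given the estimate u_est of u.
    """
    if c_u == 0:
        return 2.0 * math.atanh(math.tanh(lam0 / 2.0) * math.tanh(lam1 / 2.0))
    return (-1.0) ** u_est * lam0 + lam1

# Step I of the pipeline decoder: all N/2 PEs run with c_u = 0 on llr pairs.
lam = [0.9, -0.2, 1.4, 0.3]
L = [pe(lam[2 * k], lam[2 * k + 1]) for k in range(len(lam) // 2)]
```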
STEP I
: \
Using the processing elements $PE_0,PE_1,...,PE_{N/2-1}$ with $c_u=0$, prepare the llr input for the decoder of the first $N/2$ length outer code and output it on the signals array $L(0:N/2-1)$, such that $$L(k)= 2\tanh^{-1}\left(\tanh(\lambda({2k})/2)\tanh(\lambda({2k+1})/2)\right),\,\,\,\,\,\, 0\leq k\leq N/2-1.$$
STEP II
: \
Give the signals array $L(0:N/2-1)$ and the list of indices corresponding to the first half of the codeword (i.e. the first outer code) as inputs to the polar code decoder of length $N/2$.
Call the decoder of length $N/2$ polar code on these inputs (decoding the first outer polar code).
Store ${u}(0:{N/2-1})=\tilde{ u}(0:{N/2-1})$, ${x}_{even}(0:N/2-1)=\tilde{ x}({0}:{N/2-1})$.
STEP III
: \
Using the signals array $\tilde{ x}(0:{N/2-1})$ as the vector of estimations of $u$ from the $(u+v,v)$ pair, operate the processing elements $PE_0,PE_1,...,PE_{N/2-1}$ with $c_u=1$. This will prepare the llr input for the second outer code, and output it on signals array ${ L}(0:{N/2-1})$, such that $$L(k) = (-1)^{\tilde{x}(k)}\lambda({2k})+\lambda({2k+1}),\,\,\,\,\,\,\, 0\leq k\leq N/2-1.$$
STEP IV
: \
Give the signals array $L(0:N/2-1)$ as an input to the polar code decoder of length $N/2$. Also provide the indices of the frozen bits corresponding to the second half of the codeword (i.e. the second outer code).
Call the decoder of the length $N/2$ polar code on these inputs (which means that we decode the second outer polar code).
Store ${ u}({N/2}:{N-1})=\tilde{ u}(0:{N/2-1})$, ${x}_{even}(0:{N/2-1})={ x}_{even}(0:{N/2-1})+\tilde{ x}({0}:{N/2-1})$, ${x}_{odd}(0:{N/2-1})=\tilde{ x}(0:{N/2-1})$.
Here, for an array ${ x}$, we denote by ${ x}_{even}$ and ${ x}_{odd}$ the $2$-decimated arrays containing ${ x}$’s even-indexed samples and odd-indexed samples, respectively. Note that to avoid any delays due to sampling by a register, it is important that the codeword estimation (which is one of the outputs of the decoder) be the output of the encoding layer and not the register following it. This issue and further timing concerns are considered in the next subsection.
Let us consider the complexity of this circuit. We assume that a call to a PE finishes in one clock cycle. Denote by $T(n)$ the time (in terms of the number of clock cycles) that is required to complete the decoding of $N=2^n$ length polar code. Then, $T(n)=2+2\cdot T(n-1) \,\,\,\,\,\, n> 1$ and $T(1) = 2$. This recursion yields $T(n) = 2N-2$. Denote by $P(n)$ the number of PEs for a decoder of length $N=2^n$ polar code, we have $P(n) = 2^{n-1} + P(n-1) \,\,\,\,\, n > 1$ and $P(1) = 1$, so $P(n) = 2^n - 1 = N-1$. The cost of the encoding unit is of $2\cdot \sum_{i=1}^n 2^i = 4\cdot(N-1)$ bits registers, and $\sum_{i=0}^{n-1}2^i=N-1$ xor circuits. We should have $R(n)$ registers for holding llr values, so $R(n) = 2^n +R(n-1) \,\,\,\,\, n>1$ and $R(1) = 2$, so $R(n) = 2\cdot P(n) = 2N-2$. Note, that in this design, we assume that the re-encoding unit is a combinatorial circuit.
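The three recursions can be checked mechanically (a throwaway script, not part of the design):

```python
def T(n): return 2 if n == 1 else 2 + 2 * T(n - 1)        # clock cycles
def P(n): return 1 if n == 1 else 2 ** (n - 1) + P(n - 1) # PEs
def R(n): return 2 if n == 1 else 2 ** n + R(n - 1)       # llr registers

for n in range(1, 13):
    N = 2 ** n
    assert T(n) == 2 * N - 2 and P(n) == N - 1 and R(n) == 2 * N - 2
```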
The SC Line Decoder {#sec:UVLineDecoder}
--------------------
In the pipeline design of the decoder of length $N$, the $N/2$ processing elements $\left\{ PE_k \right\}_{k=0}^{N/2-1}$ are only used during steps I and III of the algorithm. During the other steps (which ideally consume $2\cdot T(n-1) = 2N-4$ clock cycles of the total $2N-2$), these processors are idle, and this results in an inefficient design. To improve this, we observe that the maximum number of operations that can be done in parallel by the PEs in the SC decoding algorithm is $N/2$. So, in order to allow this maximum level of parallelism, a design must have at least $N/2$ processors. The line decoder[^3], which we describe in this subsection, achieves this lower bound. In order to support this, we need to redefine the decoder of the length $N$ polar code.
First, the line decoder has two operation modes.
Standard Mode (S-Mode)
: \
In this mode, the decoder gets as inputs llrs and the indices of the frozen bits, and outputs the hard decision on the information word and its corresponding codeword (this is the operation mode we assumed so far).
PE-Array Mode (P-Mode)
: \
In this mode, the decoder gets as input a signals array of llrs ${ \lambda}({0}:{N -1})$, a control signal $c_u$, and a binary array of length $N/2$, ${ z}({0}:{N/2-1})$. The output is a signals array ${ L}(0:N/2-1)$ of llrs, where for $0\leq k\leq N/2-1$ $${L}(k)=\left\{
\begin{array}{ll}
2\cdot \tanh^{-1}\left(\tanh\left(\lambda({2k})/2 \right)\cdot\tanh\left(\lambda({2k+1})/2 \right)\right), & \hbox{$c_u=0$;} \\
(-1)^{z(k)}\cdot \lambda({2k})+\lambda({2k+1}), & \hbox{$c_u=1$.}
\end{array}
\right.$$
In Figure \[fig: linArikan\], we give a block diagram of this decoder. Note that in order to maintain the maximum level of parallelism, the length $N$ polar code decoder has $N/2$ processors. Thus, in order to build the length $N$ polar code decoder using an embedded $N/2$ length polar code decoder (already having $N/4$ processors), we use an additional array of $N/4$ PEs, which is referred to as the *auxiliary array*. The input signal *modeIn* indicates whether the decoder is used in *S-Mode* or in *P-Mode*. The *mode* signal is an internal signal that controls whether the $N/2$ length embedded decoder is in *P-Mode*.
\
The algorithm for the S-Mode is described below.
STEP I
: \
Simultaneously,
- At the multiplexers array (MUX array), at the input of the embedded decoder of length $N/2$ polar code, set the control signal $c_m=0$, which means that the array ${\lambda}({0}:{N/2-1})$ is selected as an input to this unit. Set $c_u=0$ and use the decoder of length $N/2$ polar code in P-Mode, which causes this unit to output the signals array ($0\leq k \leq N/4-1$) $${L}(k) = 2\cdot \tanh^{-1}\left(\tanh\left(\lambda({2k})/2 \right)\cdot\tanh\left(\lambda({2k+1})/2 \right)\right).$$ Store this array in the registers array ${R}(0:{N/4-1})$.
- Use the auxiliary array of processors and compute for $N/4 \leq k \leq N/2-1$ $${L}(k) = 2\cdot \tanh^{-1}\left(\tanh\left(\lambda({2k})/2 \right)\cdot\tanh\left(\lambda({2k+1})/2 \right)\right)$$ and store them in the registers array ${ R}({N/4}:{N/2-1})$.
STEP II
: - At the MUX array, at the input of the decoder of the length $N/2$ polar code, set the control signal $c_m=1$, which means that the content of the registers array ${ R}({0}:{N/2-1})$ is selected as an input to this unit.
- Provide the vector of indices, corresponding to the frozen bits from the first half of the codeword to the $N/2$ length decoder.
Call the decoder of the length $N/2$ polar code in S-Mode on these inputs (decoding of the first outer polar code).
Store ${u}(0:{N/2-1})=\tilde{ u}(0:{N/2-1})$, ${ x}_{even}(0:{N/2-1})=\tilde{ x}({0}:{N/2-1})$.
STEP III
: \
Simultaneously,
- At the MUX array, at the input of the embedded decoder of length $N/2$ polar code, set the control signal $c_m=0$, which means that the array ${\lambda}({0}:{N/2-1})$ is selected as an input to this unit.
Set $c_u=1$ and use the decoder of length $N/2$ polar code in P-Mode, which causes this unit to output the signals array ($0\leq k \leq N/4-1$) $${L}(k) = (-1)^{\tilde{x}(k)}\cdot\lambda({2k})+\lambda({2k+1}).$$ Store this signals array in the registers array ${R}(0:{N/4-1})$. Note, that we use $\tilde { x}(0:{N/4-1})$, the estimation of the first half of the codeword, that the embedded decoder gave as output in step II, as an input to this unit.
- Use the auxiliary array and compute for $N/4 \leq k \leq N/2-1$ $${L}(k) = (-1)^{\tilde{x}(k)}\cdot\lambda({2k})+\lambda({2k+1})$$ and store them in the registers array ${ R}({N/4}:{N/2-1})$.
STEP IV
: \
At the MUX array, at the input of the decoder of the length $N/2$ polar code, set the control signal $c_m=1$, which means that the array of registers ${ R}({0}:{N/2-1})$ is selected as an input to this unit.
Provide to the $N/2$ length decoder, the vector of indices, corresponding to the frozen bits of the second half of the codeword.
Call the decoder of the length $N/2$ polar code in S-Mode on these inputs (decoding of the second outer polar code).
Store ${u}({N/2}:{N-1})=\tilde{ u}(0:{N/2-1})$, ${x}_{even}(0:{N/2-1})={ x}_{even}(0:{N/2-1})+\tilde{ x}(0:{N/2-1})$, ${x}_{odd}(0:{N/2-1})=\tilde{ x}(0:{N/2-1})$.
The P-Mode operation of the decoder is quite simple. Use $c_m = 0$, which means that the channel llrs are given as an input to the line decoder of the $N/2$ length polar code. Also provide as input the vector ${x}_{in}(0:N/2-1)$, that will serve here as estimations of the $u$ bits from the $(u+v,v)$ pairs. Set $c_u=c_{u,in}$, and operate simultaneously the auxiliary array of processors and the line-decoder of length $N/2$ in P-Mode. Return the llr output of the line-decoder of length $N/2$ and the auxiliary array, i.e. the signals array $L(0:N/2-1)$.
We now analyze the complexity of the decoder. Let $P(n)$ be the number of processors of the $N=2^n$ decoder. Then, $P(n)=2^{n-2}+P(n-1)$, $P(1)=1$, so $P(n)=2^{n-1}=N/2$. The number of registers we use in the design for the llrs (not including the input registers and the encoding registers) is $R(n) = 2^{n-1} +R(n-1)$, $R(1) = 1$, so we have $R(n) = 2^n-1 = N-1$. The number of multiplexers for the llrs is $M(n) = 2^{n-1}+M(n-1)$, $M(1)=0$, so $M(n) = N-2$.
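The same kind of mechanical check confirms these counts (with `Mx` standing for the multiplexer count $M(n)$, renamed to avoid clashing with the list size $M$):

```python
def P(n): return 1 if n == 1 else 2 ** (n - 2) + P(n - 1)    # PEs
def R(n): return 1 if n == 1 else 2 ** (n - 1) + R(n - 1)    # llr registers
def Mx(n): return 0 if n == 1 else 2 ** (n - 1) + Mx(n - 1)  # llr multiplexers

for n in range(1, 13):
    N = 2 ** n
    assert P(n) == N // 2 and R(n) == N - 1 and Mx(n) == N - 2
```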
We want to make a remark about the efficiency of the design we propose here. The recursive design has the potential advantage of being a clearer reflection of the underlying algorithm. It also has the potential advantage of emphasizing the parts of the system that may be reused. However, it may have a disadvantage when considering the routing of signals in the circuit. Because we want to use the decoder of the $N/2$ length polar code as a closed box, we route all the signals from it and to it using its interface. This may result in some signals traversing a long path before reaching their target processor. These paths may be too long for the circuit to have a good clock frequency, thereby resulting in degradation of the achievable throughput. It is therefore advised to optimize the circuit by “opening” the recursive units and making the paths shorter, after completing the design of the circuit in a recursive manner. It is also a good idea that, when building a decoder for a $2N$ length code, the designer use this “optimized” design of the $N$ length decoder in the $2N$ length design, enjoying the benefits of the recursion. We give here two examples of these long path hazards that we believe are likely to pose a problem, along with their possible solutions.
1. The multiplexers layer at the input of the embedded line decoder of the length $N/2$ code is required because of the introduction of the P-Mode. A closer look at our design reveals that some of the signals have long paths before reaching their target PE. For example, the inputs $\lambda_0$ and $\lambda_1$ need to traverse $\log_2(N)-1$ multiplexer layers before reaching their processor. Since the P-Mode needs to be accomplished in one time-unit, this long path may be prohibitive. By “opening” the $N/2$ length decoder box, the designer is able to control the lengths of the paths by a proper routing.
2. The “Encoding Layer” also suffers from long routing. We assumed, in our analysis, that the encoding procedure is combinatorial, and therefore can be done within the clock cycle. This may be a problem when several encoding circuits are operated one after the other. This is, for example, the case of step IV of the decoder of the length $N/2^{i}$ code, which occurs within step IV of the decoder of the length $N/2^{i-1}$ code for $1 \leq i \leq \log_2N-2$. In this case, $O(\log N)$ operations need to occur in a sequential manner in one clock cycle. For large $N$ and a high clock frequency circuit, this may not be feasible. The idea of Leroux *et al.* [@Leroux2012] was to use flip-flops for saving the partial encoding of each code bit in the different layers of the decoding circuit. Each such flip-flop is connected using a xor circuit to the signal line of the estimated information bit. As such, whenever the SC decoder decides on an information bit, the flip-flops corresponding to the code bits that are dependent on this information bit are updated accordingly. These flip-flops need to be reset whenever we start decoding their corresponding outer-code. For example, when we start using the embedded $N/2$ length decoder (on step II and step IV), its flip-flops of partial encoding need to be erased (as they correspond to a new outer code).
It should be noted that this idea may also be described recursively, by changing the specification of the length $N$ polar code decoder in S-mode and requiring it to output the estimated information bits as soon as they are ready. The decoder should also have an $N$ length binary indicator vector that indicates which code bits are dependent on the currently estimated information bit. It is easy to see that using the indicator vector of the length $N/2$ decoder, it is possible to calculate the $N$ length indicator vector by using the $(u+v,v)$ mapping. This, however, again generates a computation path of length $\Theta(\log N)$. This problem can be addressed by having a fixed indicator circuit for each partially-encoded-bit flip-flop. This circuit indicates whether its bit should accumulate the currently estimated information bit, depending on the ordinal number of that bit. For example, for the decoder of the code of length $N$, we should have an array of $N/2$ flip-flops, each one corresponding to a bit of the codeword of the $N/2$ length first outer code. Each one of these flip-flops should have an indicator circuit that gets as input a value of a counter signaling the ordinal number of the information bit that has been estimated, and returns $1$ iff its corresponding codeword bit is influenced by this information bit. For example, the indicator circuit corresponding to the first code bit is a constant $1$, because ${ x}_0 =\sum_{i=0}^{N/2-1}{ u}_i$, i.e. it is dependent on all the information bits. On the other hand, the last bit’s indicator (i.e. of $x_{N/2-1}$) returns $1$ iff its input equals $N/2-1$, because $x_{N/2-1}=u_{N/2-1}$. Using the global counter (that is advanced whenever an information bit is estimated) and the indicator circuits, each code bit that is influenced by this information bit will sum it up into its flip-flop.
Using the Kronecker power form of the generating matrix of the $(u+v,v)$ polar code, it can be seen that each of such indicator circuits can be designed by using no more than $O(\log n) = O(\log\log N)$ AND and NOT circuits, therefore the total cost of these circuits will be of $O(N\log \log N)$ in terms of space complexity.
In summary, the recursive architecture may be developed and modified to achieve the timing requirements of the circuit. This may be done by “opening the box” of the embedded decoders, and even altering them to support more efficient designs.
A careful examination of the line-decoder reveals that the *auxiliary array* is only used on steps I and III, and is idle on the other steps. This might motivate us to consider two variations on this design. The first adds hardware and uses these arrays to increase the throughput, while the second decreases the throughput and thereby reduces the required hardware.
### Parallel Decoding of Multiple Codewords {#sec:ParDecLine}
There are cases in which it is required to increase the throughput of the decoder by allowing parallel decoding of multiple codewords. A simple solution is to introduce $\mathrm{p}$ decoders when there is a need for decoding $\mathrm{p}$ codewords simultaneously. Because the *auxiliary array* of processors is idle most of the time, it seems like a good idea to “share” this array among several decoders. By appropriately scheduling the commands to the processors, it is possible to have an implementation of a decoder for $\mathrm{p}$ parallel codewords which is less expensive than just duplicating the decoders (the naive solution).
Since the array is idle during steps II and IV, in which the decoder of the length $N/2$ code is active, it is possible to have $\mathrm{p}\leq T(n-1)+1=N-1$ decoders sharing the same *auxiliary array*. The decoding of each one of them is issued with a delay of one clock cycle from the previous one. Assuming that $\mathrm{p}=N-1$, we have a decoding time of $T(n)+N-2=3N-4$ for $N-1$ codewords, while having $\mathrm{p}\cdot P(n-1)+N/4 = (N-1)\cdot N/4+N/4=N^2/4$ processors, which is about half of the number of processors in the naive solution.
This notion can be developed further. For the decoder of the length $N/2$ code that is embedded in the $N$ length decoder, there is an auxiliary array of $N/8$ processors. This auxiliary array is used on steps I and III of the decoders of length $N$ and length $N/2$. Therefore, it is idle most of the time, and we can share it among the $\mathrm{p}$ decoders of length $N/2$. Assuming that $\mathrm{p} = N-1$, we may allocate $3$ auxiliary arrays that will be shared among the decoders, each one dedicated to one of these different steps: one array for step I (and III) of the $N$ length decoder, one array for step I of the $N/2$ length decoder and one array for step III of the $N/2$ length decoder. For each of the decoded codewords the number of clock cycles between these steps is at least $\mathrm{p}$, therefore there will be no contention on these resources and the throughput will not suffer because of this hardware reduction.
In general, for $\mathrm{p} = N-1$, the *auxiliary array* within the embedded decoder of the length $\frac{N}{2^i}$ polar code ($i \in [1,\log_2(N)-2]$) can be shared among the $\mathrm{p}$ decoders, provided that we allocate an instance of the array for each of the decoding steps it is used in during the first half of the decoding algorithm for the length $N$ code (i.e. during the time of steps I and II). Thus, for this specific array, we have $1$ call in step I of the $N$ length decoder, $1$ call for step I and $1$ call for step III of the $\frac{N}{2}$ length decoder, $2$ calls for step I and $2$ calls for step III of the $\frac{N}{2^{2}}$ length algorithm, ..., $2^{i-1}$ calls for step I and $2^{i-1}$ calls for step III for the length $\frac{N}{2^i}$ decoder. In summary, we need $\sum_{t=0}^{i}2^t = 2^{i+1}-1$ *auxiliary arrays* of processors, each one containing $\frac{N}{2^{i+2}}$ PEs. In particular, we need $N-1$ PEs for the $2$ length decoder (each PE is allocated to a specific decoder), and $\frac{N}{2}\cdot \sum_{i=0}^{\log_2(N)-2}\frac{2^{i+1}-1}{2^{i+1}}\approx \frac{N}{2}\left(\log_2(N)-1\right)$ PEs for the other decoder lengths. This adds up to approximately $\frac{N}{2}\left(1+\log_2(N)\right)$ PEs. We conclude that this solution allows an increase of the throughput by a multiplicative factor of $N$, while the PE hardware is only increased by an approximately $\log_2(N)$ factor. Note that the number of registers should increase by a multiplicative factor of $\mathrm{p}$.
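The count can be reproduced exactly (our sketch; for $N=1024$ it yields $5120$ PEs, against $(N-1)\cdot N/2=523{,}776$ for naive duplication):

```python
import math

def shared_pe_count(N):
    n = int(math.log2(N))
    # 2^(i+1) - 1 shared auxiliary arrays of N / 2^(i+2) PEs each,
    # for i = 0 .. log2(N) - 2, plus N - 1 PEs for the length-2 decoders.
    aux = sum((2 ** (i + 1) - 1) * (N // 2 ** (i + 2)) for i in range(n - 1))
    return aux + (N - 1)

N = 1024
print(shared_pe_count(N))                # 5120, the exact count
print(N // 2 * (1 + int(math.log2(N))))  # 5632, the approximation in the text
```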
A closer look at the above hardware design reveals that we actually allocated a different array of processors for each sub-step of steps I and II of the $N$ length decoder. The decoding operations of the $\mathrm{p}$ codewords go through these units in a sequential order. However, each decoder should have its own set of registers saving the state of the decoding algorithm. Another observation is that when we finish decoding the first codeword (i.e. the one we started decoding at time $0$), we can start decoding codeword number $N$ in the next time slot (and then codeword number $N+1$, etc.), in a pipelined fashion. It should be noted that Leroux *et al.* considered a similar idea, and referred to it as the *vector-overlapping* structure [@Leroux10].
### Limited Parallelism Decoding {#sec:LimitedParDecLine}
Another approach for addressing the problem of low utilization of the *auxiliary arrays* is to limit the number of processing elements that may operate simultaneously. This is a very practical consideration, as typically a system designer has a parallelism limitation due to power consumption and silicon area. The limited parallelism inevitably results in an increase of the decoding time, and thereby a decrease of the throughput. The line decoder of the length $N$ code has a PE parallelism of $N/2$, because it may simultaneously compute at most $N/2$ llrs using the $N/2$ PEs.
We consider a line decoder of length $N$ code with limited parallelism of $N/{2^i}$, where $i\in [1,\log_2N]$. This means, that the decoder has exactly $\frac{N}{2^i}$ PEs. If $i=1$ then the decoder is actually, the standard line-decoder. If $i>1$ then the decoder’s block diagram will be similar to the one shown in Figure \[fig: linArikan\], with the following changes.
- There will be no *auxiliary PEs array*.
- The embedded line decoder of the $N/2$ length code will be replaced by a limited parallelism line decoder, with parallelism factor of $N/{2^i}$.
- The signals array $L(0:N/4-1)$ at the output of the embedded line decoder will also be connected to the registers array $R({N/4}:{N/2-1})$ .
- The multiplexers array at the input of the $N/2$ length line decoder will change to also include the input array ${ \lambda}({N/2}:{N-1})$. This means that we should have an array of $3\rightarrow 1$ multiplexers (instead of $2\rightarrow 1$), in which the $k^{th}$ multiplexer selects between inputs $\lambda(k), \lambda(k+N/2)$ and $R(k)$.
- There will be an additional array of multiplexers at the input of the line-decoder for selecting between ${x}({0}:{N/4-1})$ and ${x}({N/4}:{N/2-1})$, to support the use of both parts of the decided codeword. Similarly, for the P-Mode, we should have an array of multiplexers to select between the two parts of the $x_{in}(0:N/2-1)$ array.
The S-mode decoding algorithm will have $4$ steps as before, however steps I and III are modified as follows.
STEP I
: \
Sequentially,
- **STEP I-a**: At the MUX array, at the input of the (limited parallelism) decoder of the length $N/2$ polar code, set the control signal $c_m=0$, which means that ${ \lambda}({0}:{N/2-1})$ is selected as an input to this unit.
Set $c_u=0$ and use the $N/2$ length polar code decoder in P-Mode. Store the output array of signals $L(0:N/4-1)$, corresponding to $${L}(k) = 2\cdot \tanh^{-1}\left(\tanh\left(\lambda({2k})/2 \right)\cdot \tanh\left(\lambda({2k+1})/2 \right)\right)\,\,\,\,\,,0\leq k\leq N/4-1$$ in the registers array ${ R}(0:{N/4-1})$.
- **STEP I-b**: Set the control signal of the MUX array to $c_m=1$, which means that ${\lambda}({N/2}:{N-1})$ is selected as an input to this unit. Set $c_u=0$ and use the decoder of the polar code of length $N/2$ in P-Mode. Store the output signals array $${L}(k-N/4) = 2\cdot \tanh^{-1}\left(\tanh\left(\lambda({2k})/2 \right)\tanh\left(\lambda({2k+1})/2 \right)\right), \,\,\,\,\,\,{N/4}\leq k \leq {N/2-1}$$ in registers array ${ R}({N/4}:{N/2-1})$.
<!-- -->
STEP III
: \
Sequentially,
- **STEP III-a**: At the MUX array, at the input of the (limited parallelism) decoder of length $N/2$ polar code, set the control signal $c_m=0$, which means that ${ \lambda}({0}:{N/2-1})$ is selected as an input to this unit.
Set $c_u=1$ and use the $N/2$ length polar code decoder in P-Mode. Store the output array of signals $L(0:N/4-1)$, corresponding to $${L}(k) = (-1)^{\tilde{x}(k)}\cdot \lambda({2k})+\lambda({2k+1})\,\,\,\,\,,0\leq k\leq N/4-1$$ in the registers array ${ R}(0:{N/4-1})$. Note that we use $\tilde { x}(0:{N/4-1})$, the first half of the codeword estimated in step II, as an input to the $N/2$ length decoder.
- **STEP III-b**: Set the control signal of the MUX array to $c_m=1$, which means that ${\lambda}({N/2}:{N-1})$ is selected as an input to this unit. Set $c_u=1$ and use the decoder of the polar code of length $N/2$ in P-Mode. Store the output signals array $${L}(k-N/4) = (-1)^{\tilde{x}(k)}\cdot \lambda({2k})+\lambda({2k+1}), \,\,\,\,\,\,{N/4}\leq k \leq {N/2-1}$$ in registers array ${ R}({N/4}:{N/2-1})$. Note that we now use $\tilde { x}({N/4}:{N/2-1})$, the second half of the codeword estimated in step II, as an input to the $N/2$ length decoder.
The P-Mode operation of the decoder is also changed, and now contains two steps.
STEP I
: \
At the MUX array, at the input of the (limited parallelism) polar code decoder of length $N/2$ set the control signal $c_m=0$, which means that ${\lambda}({0}:{N/2-1})$ is selected as an input to this unit. Set $c_u = c_{u,in}$ and use the $N/2$ length polar code decoder in P-Mode. Store the output signals array $L(0:N/4-1)$ in the registers array ${ R}({0}:{N/4-1})$. If $c_{u,in}=1$, use the first half of the input signals array $x_{in}$ (i.e. $x_{in}(0:N/4-1)$) as an input to the $N/2$ length decoder (otherwise this input is ignored).
<!-- -->
STEP II
: \
Set the control signal of the MUX array to $c_m=1$, which means that ${ \lambda}({N/2}:{N-1})$ is selected as an input to this unit. Set $c_u = c_{u,in}$ and use the $N/2$ length polar code decoder in P-Mode. Store the output signals array $L(0:N/4-1)$ in the registers array ${ R}({N/4}:{N/2-1})$. If $c_{u,in}=1$, use the second half of the input signals array $x_{in}$ (i.e. $x_{in}(N/4:N/2-1)$) as an input to the $N/2$ length decoder (otherwise this input is ignored).
The output of the decoder is the array of signals corresponding to the array of registers ${R}(0:N/2-1)$.
Let us analyze the time complexity of this algorithm. We denote the S-Mode running time (in terms of clock cycles) for a length $N=2^n$ polar code with limited parallelism of $N/{2^i}=2^{n-i}$ by $T(n,n-i)$. We note that $T(n,n-1) = T(n)$, where $T(n)=2N-2$ is the running time of the standard line decoder. The recursion formula is $$T(n,n-i)=2\cdot T(n-1,n-i)+ 4\cdot T_p(n-1,n-i),$$ where $T_p(n,m)$ is the running time of the $N=2^{n}$ length decoder with $2^{m}$ limited parallelism in P-Mode. $$T_p(n,m) = \left\{
\begin{array}{ll}
1, & \hbox{$n-m\leq 1$;} \\
2\cdot T_p(n-1,m), & \hbox{otherwise.}
\end{array}
\right.$$ Therefore, $$T_p(n,m)=\left\{
\begin{array}{ll}
1, & \hbox{$n-m\leq 1$;} \\
2^{n-m-1}, & \hbox{otherwise.}
\end{array}
\right.$$ It can be shown that $$\label{eq:timeTradeoffLimPara}
T(n,n-i)= 2\cdot N +(i-2)\cdot 2^i\,\,\,\,\,\,\,\,, i\geq 1.$$ Equation (\[eq:timeTradeoffLimPara\]) reveals the tradeoff between the number of PEs and the running time of the algorithm. For example, decreasing the number of processors by a multiplicative factor of $8$, compared to the standard case (i.e. $i=4$), results in an increase of only $34$ clock cycles in the decoding time. We note however, that to build such a decoder, additional control hardware (e.g. multiplexers layers) should be designed.
It seems that for a limited list size, the Successive Cancellation List decoder may also be implemented by a line decoder. This requires duplicating the hardware by the size of the list, $M$, and introducing the appropriate logic (i.e. comparators and multiplexer layers). It is possible to provide an implementation with $O(f(M)\cdot N)$ time complexity, where $f(\cdot)$ is a polynomially bounded function that depends on the efficiency of the algorithms for selecting the $M$ most likely paths (in the $N=2$ decoder). Furthermore, the normalization of the likelihoods should be considered carefully, as it also has an impact on the precise (i.e. non-asymptotic) time complexity.
The BP Line Decoder {#sec:UVLineDecoderBP}
--------------------
\
As we saw in Subsection \[sec:BP\], BP is an iterative algorithm, in which messages are sent on the normal factor graph representing the code. In this subsection, we consider an implementation of the BP decoder with the GCC serial schedule. The proposed decoder structure is a variation on the recursive structure of the SC line decoder. Figure \[fig: BPLineDecodeUV\] depicts a block diagram of this design. The main changes, with respect to the SC decoder, are in the memory resources and the processor structure.
The memory plays a fundamental role in the design, as it keeps computed messages within the iteration boundary and beyond it. The basic requirement is that each “butterfly” realization of the $(u+v,v)$ factor graph should have memory resources to store its messages. To allow messages to be kept within the iteration boundary, it is only required to have one registers array for each length of outer code and for each message type. However, the need for keeping a message beyond the iteration boundary requires a dedicated memory array for each instance of the outer code.
In the case of the $(u+v,v)$ code and the GCC schedule, only messages of type $\mu_{v}^{(in)}$ need to be kept beyond the iteration boundary. We suggest addressing this requirement in the following way. For the decoder of length $N$, we associate a registers matrix $\mu_{v}^{(in)}(0:\#_r(N)-1,0:N/2-1)$. Here, $\#_r(N)$ is the *number of realizations* of factor graphs corresponding to outer codes of size $N$ that exist in our code. For the code of length $N$, there is only one factor graph of this size (i.e. the entire graph), and therefore for this decoder $\#_r(N)=1$.
Consider now the $N/2$ length decoder that is embedded within the $N$ length decoder. We see in Figure \[fig: BPLineDecodeUV\] that this decoder has $2\cdot \#_r(N)$ realizations, i.e. for the $N$ length decoder we have $\#_r(N/2)=2$. This is because we have two outer codes of length $N/2$ in the $N$ length code. Therefore, the memory matrix associated with it has two rows and $N/4$ columns. The first row is dedicated to the first realization of the outer code and the second row is dedicated to the second realization. Within this $N/2$ length decoder, there is an embedded $N/4$ length decoder with $2\cdot\#_r(N/2)$ realizations, so in this case $\#_r(N/4)= 4$. As a result, it has a registers matrix with $4$ rows and $N/8$ columns (each row is dedicated to one of the $4$ outer codes of length $N/4$ in this GCC scheme). This development continues until we reach the embedded decoder of length $2$, which, by induction, has $\#_r(2)=N/2$ realizations for the $N$ length decoder, so it requires a registers matrix with $N/2$ rows and one column.
For a correct operation of the decoder, it is required to inform the embedded decoders which realization of the outer code’s factor graph they are currently working on. This is the role of the signals $realizationID_{N/2}$, $realizationID_{N}$ and $OuterCodeID$, which indicate the specific realization as follows. The signal $realizationID_{N}$ notifies the decoder of length $N$ which realization of the factor graph of the code of length $N$ it is working on. Note that because we describe here a decoder for a code of length $N$, we have only one realization of this graph, and therefore this signal is fixed to $0$. However, if this were an embedded decoding unit within a larger length code decoder, then this signal should indicate the ordinal number of the outer code we are decoding, ranging from $0$ to $\#_r(N)-1$. The signal $realizationID_{N/2}$ gives the identification of the realization of the outer polar code of length $N/2$. It is computed as $2\cdot realizationID_N+OuterCodeID$, where $OuterCodeID$ equals $0$ on step II, and equals $1$ on step IV of the iteration for the $N$ length decoder.
We also need to have registers arrays for the messages of type $\mu_{e_1\rightarrow a_0},\mu_{a_0 \rightarrow e_1},\mu_{u}^{(in)},\mu_{u}^{(out)}$ and $\mu_{v}^{(out)}$, each one of them of length $N/2$. We denote them by $\mu_{e_1\rightarrow a_0}(0:N/2-1), \mu_{a_0\rightarrow e_1}(0:N/2-1), \mu_{u}^{(in)}(0:N/2-1), \mu_{u}^{(out)}(0:N/2-1)$ and $\mu_{v}^{(out)}(0:N/2-1)$. Note, that as opposed to the memory structure for the $\mu_{v}^{(in)}$ messages, these arrays do not need to be available beyond the iteration boundary, therefore it suffices to have them as arrays and not matrices. Furthermore, the arrays for messages $\mu_{e_1\rightarrow a_0}$, $\mu_{u}^{(out)}$ and $\mu_{v}^{(out)}$, can be replaced by one temporary array. However, in the description of the hardware structure, we chose not to do this, in order to keep the discussion more comprehensible.
\
Figure \[fig: BPPEPRocessor\] depicts the processing element $BP\_PE$ that is considered here. This unit has two inputs for message llrs, and depending on the control signal $c_{BPPE}$ it performs either the $f_{(+)}(\cdot,\cdot)$ function or the $f_{(=)}(\cdot,\cdot)$ function. Because it has to implement the functionalities of equations (\[eq:BPUV1\])-(\[eq:BPUV6\]), we introduce routing layers for the inputs (OP-MUX) and the outputs (OP-De-MUX) that ensure that the proper inputs will be given to the processor and that its output is stored in the appropriate array, depending on the computation schedule of the iteration.
Besides the messages that serve as inputs or outputs to the processor, we allocate two additional message inputs, denoted by $ext_{a}$ and $ext_{b}$, and one additional output message, denoted by $ext_{out}$. These inputs and output are used during the P-Mode of the decoder. We note that in Figure \[fig: BPLineDecodeUV\] we preferred, for brevity, not to specify these routing units for each processor, but rather to group them into routing arrays. The inputs and outputs to these routing arrays are arrays of inputs and outputs corresponding to the types of inputs and outputs that appear in Figure \[fig: BPPEPRocessor\]. The convention is that in these routing arrays, the $i^{th}$ output corresponds to the $i^{th}$ input from each signals array (the signals array is selected by the control signal of the routing array). Moreover, the $i^{th}$ output of the OP-MUX array corresponds to the $i^{th}$ consecutive processor from the array of processors it serves. Similarly, the $i^{th}$ input of the OP-De-MUX array corresponds to the $i^{th}$ consecutive processor from the array of processors it serves.
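A behavioral model of the processing element may help here. The following Python sketch (ours; a hardware implementation might use the min-sum approximation of the check-node rule instead of the exact boxplus shown) captures the two llr kernels that $c_{BPPE}$ selects between:

    import math

    def f_plus(a, b):
        # check-node combination of two llrs (the exact "boxplus"); a
        # min-sum variant would return sign(a)*sign(b)*min(abs(a), abs(b))
        return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

    def f_equal(a, b):
        # variable-node combination of two llrs: plain addition
        return a + b

    def bp_pe(a, b, c_bppe):
        """One BP_PE evaluation; the encoding of c_bppe (0 selects f_plus,
        1 selects f_equal) is our own convention for this sketch."""
        return f_plus(a, b) if c_bppe == 0 else f_equal(a, b)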
As in the SC case, the BP line decoder has two operation modes.
- S-Mode - The decoder gets as input $\mu^{(in)}$ type messages referring to its inputs. It outputs $\mu^{(out)}$ type messages corresponding to its inputs (i.e. messages that are sent from the subgraph realization the decoder is operating on to its neighbors) and an estimation of the information word vector (denoted by *infoEst*).
- P-Mode - The decoder serves as an array of $N/2$ processors and performs simultaneously the computation of the type of message indicated by $C_{BPPE,external}$, using the signals $ext_a$ and $ext_b$ as the inputs and $ext_{out}$ as the output.
The S-Mode decoding algorithm operates as follows.
STEP I
: \
Simultaneously,
- At the MUX array, at the input of the decoder of the code of length $N/2$, set the control signal $c_m=0$, which means that the OP-MUX array is selected as the input to the decoder. Set $c_{opMUX}$ such that $\mu^{(in)}_{v}(0:N/4-1)$ and $\mu_{x_1}^{(in)}(0:N/4-1)$ will be selected as the first input and the second input, respectively of this unit. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV1\]) and use the $N/2$ length polar code decoder in P-Mode. Set the OP-De-MUX array to direct the output to $\mu_{e_1 \rightarrow a_0}(0:N/4-1)$.
- Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{v}(N/4:N/2-1)$ and $\mu_{x_1}^{(in)}(N/4:N/2-1)$ and have the output directed to $\mu_{e_1\rightarrow a_0}^{}(N/4:N/2-1)$.
Simultaneously,
- Keep $c_m=0$. Set $c_{opMUX}$ such that $\mu_{x_0}^{(in)}(0:N/4-1)$ and $\mu_{e_1\rightarrow a_0}(0:N/4-1)$ will be selected as the first input and the second input, respectively to the $N/2$ length decoder. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV3\]) and use the decoder of length $N/2$ in P-Mode. Set the OP-De-MUX array to direct the output to $\mu_{u}^{(out)}(0:N/4-1)$.
- Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu_{x_0}^{(in)}(N/4:N/2-1)$ and $\mu_{e_1\rightarrow a_0} (N/4:N/2-1)$ and have the output directed to $\mu_{u}^{(out)}(N/4:N/2-1)$.
STEP II
: - At the MUX array, at the input of the BP decoder of the $N/2$ length polar code, set the control signal $c_m=1$, which means that the input from the second multiplexer is selected as input to this unit. Specifically, since $OuterCodeID=0$ it means that $\mu_{u}^{(out)}(0:N/2-1)$ is the input to this decoder.
- Provide the indices of the frozen bits from the first half of the codeword to the $N/2$ length decoder, and operate it in S-Mode. Store the estimation of the information word (output signals array *infoEst*) in the bits array $u(0:N/2-1)$. Direct the output messages to be saved in $\mu_{u}^{(in)}(0:N/2-1)$, using the de-mux that is connected to the *outMessages* signals array, at the output of the $N/2$ length decoder.
STEP III
: \
Simultaneously,
- At the MUX array, at the input of the BP decoder of the $N/2$ length polar code, set the control signal $c_m=0$, which means that the OP-MUX array is selected as the input to the decoder. Set $c_{opMUX}$ such that $\mu^{(in)}_{x_0}(0:N/4-1)$ and $\mu_{u}^{(in)}(0:N/4-1)$ will be selected as the first input and the second input, respectively, of this unit. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV2\]), and use the $N/2$ length decoder in P-Mode. Set the OP-De-MUX array to direct the output to the array $\mu_{a_0\rightarrow e_1}(0:N/4-1)$.
- Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{x_0}(N/4:N/2-1)$ and $\mu_{u}^{(in)}(N/4:N/2-1)$ and have the output directed to $\mu_{a_0\rightarrow e_1}^{}(N/4:N/2-1)$.
Simultaneously,
- Keep $c_m=0$ and change $c_{opMUX}$ such that $\mu^{(in)}_{x_1}(0:N/4-1)$ and $\mu_{a_0\rightarrow e_1}(0:N/4-1)$ will be the first input and the second input, respectively to the $N/2$ length decoder. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV4\]) and use the $N/2$ length decoder in P-Mode. Set the OP-De-MUX array to direct its output to the array $\mu_{v}^{(out)}(0:N/4-1)$.
- Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{x_1}(N/4:N/2-1)$ and $\mu_{a_0\rightarrow e_1}(N/4:N/2-1)$ and have the output directed to $\mu_{v}^{(out)}(N/4:N/2-1)$.
STEP IV
: \
- At the MUX array, at the input of the decoder of the code of length $N/2$, set the control signal $c_m=1$, which means that the input from the second multiplexer is selected as an input to this unit. Also set $OuterCodeID=1$, which means that $\mu_{v}^{(out)}(0:N/2-1)$ is the input to this decoder.
- Provide the indices of the frozen bits from the second half of the codeword to the $N/2$ length decoder, and operate it in S-Mode. Perform the decoding of the second outer polar code of length $N/2$. Save the estimation of the information word (output signals array *infoEst*) in the bits array $u(N/2:N-1)$. Direct the output messages to be stored in $\mu_{v}^{(in)}(0:N/2-1)$, using the de-mux that is connected to the *outMessages* signals array, at the output of the $N/2$ length decoder.
Simultaneously,
- At the MUX array, at the input of the decoder of the code of length $N/2$, set the control signal $c_m=0$. Set $c_{opMUX}$ such that $\mu^{(in)}_{v}(0:N/4-1)$ and $\mu_{x_1}^{(in)}(0:N/4-1)$ will be selected as the first input and the second input, respectively of this unit. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV1\]) and use the $N/2$ length polar code decoder in P-Mode. Set the OP-De-MUX array to direct the output to $\mu_{e_1 \rightarrow a_0}(0:N/4-1)$.
- Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{v}(N/4:N/2-1)$ and $\mu_{x_1}^{(in)}(N/4:N/2-1)$ and have the output directed to $\mu_{e_1\rightarrow a_0}^{}(N/4:N/2-1)$.
Simultaneously,
- Keep $c_m=0$ and change $c_{opMUX}$ such that $\mu^{(in)}_{u}(0:N/4-1)$ and $\mu_{e_1\rightarrow a_0}(0:N/4-1)$ will be the first input and the second input, respectively to the $N/2$ length decoder. Use the polar code decoder of length $N/2$ in P-Mode, and set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV5\]). Set the OP-De-MUX array to direct the output to $\mu_{x_0}^{(out)}(0:N/4-1)$.
- Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{u}(N/4:N/2-1)$ and $\mu_{e_1\rightarrow a_0}(N/4:N/2-1)$ and have the output directed to $\mu_{x_0}^{(out)}(N/4:N/2-1)$.
Simultaneously,
- Keep $c_m=0$ and change $c_{opMUX}$ such that $\mu^{(in)}_{v}(0:N/4-1)$ and $\mu_{a_0 \rightarrow e_1 }(0:N/4-1)$ will be the first input and the second input, respectively to the $N/2$ length decoder. Set $c_{BPPE,internal}$ to correspond to the computation of (\[eq:BPUV6\]), and use the $N/2$ length polar code decoder in P-Mode. Set the OP-De-MUX array to direct the output to $\mu_{x_1}^{(out)}(0:N/4-1)$.
- Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $\mu^{(in)}_{v}(N/4:N/2-1)$ and $\mu_{a_0 \rightarrow e_1}(N/4:N/2-1)$ and have the output directed to $\mu_{x_1}^{(out)}(N/4:N/2-1)$.
The output of the decoder, in S-Mode, will be the array $u(0:N-1)$ and the two arrays $\mu_{x_0}(0:N/2-1)$ and $\mu_{x_1}(0:N/2-1)$ interleaved. This means that the output message vector is an array in which the entries with even indices are from $\mu_{x_0}(0:N/2-1)$ and the entries with odd indices are from $\mu_{x_1}(0:N/2-1)$.
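The four steps above can be summarized by the following behavioral Python sketch (our paraphrase of the schedule; op1 through op6 stand for the computations (\[eq:BPUV1\])-(\[eq:BPUV6\]) and are left abstract, and decode_outer models the embedded $N/2$ length decoder in S-Mode, returning the decided bits and the refreshed $\mu^{(in)}$ messages):

    # One S-Mode iteration of the N length BP decoder (steps I-IV). In
    # hardware each list comprehension is split between the embedded N/2
    # decoder in P-Mode (first N/4 entries) and the auxiliary processors.
    def s_mode_iteration(mu_x0_in, mu_x1_in, mu_v_in, decode_outer, ops):
        op1, op2, op3, op4, op5, op6 = ops
        h = len(mu_x0_in)                                   # h = N/2
        # STEP I
        mu_e1_a0 = [op1(mu_v_in[i], mu_x1_in[i]) for i in range(h)]
        mu_u_out = [op3(mu_x0_in[i], mu_e1_a0[i]) for i in range(h)]
        # STEP II: decode the first outer code (OuterCodeID = 0)
        u_low, mu_u_in = decode_outer(0, mu_u_out)
        # STEP III
        mu_a0_e1 = [op2(mu_x0_in[i], mu_u_in[i]) for i in range(h)]
        mu_v_out = [op4(mu_x1_in[i], mu_a0_e1[i]) for i in range(h)]
        # STEP IV: decode the second outer code (OuterCodeID = 1), refresh
        # mu_{e1->a0} with the new mu_v^{(in)}, then emit the input-side messages
        u_high, mu_v_in = decode_outer(1, mu_v_out)
        mu_e1_a0 = [op1(mu_v_in[i], mu_x1_in[i]) for i in range(h)]
        mu_x0_out = [op5(mu_u_in[i], mu_e1_a0[i]) for i in range(h)]
        mu_x1_out = [op6(mu_v_in[i], mu_a0_e1[i]) for i in range(h)]
        out = [None] * (2 * h)                              # interleave the outputs
        out[0::2], out[1::2] = mu_x0_out, mu_x1_out
        return u_low + u_high, out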
In the P-Mode, the decoder serves as an array of $N/2$ processors that operate in parallel. The control signal $C_{BPPE,external}$ indicates which operation is performed by all the processors. The inputs to the processors are denoted by the signals arrays $ext_{a}(0:N/2-1)$ (the first input) and $ext_{b}(0:N/2-1)$ (the second input). The output is directed to the signals array $ext_{out}(0:N/2-1)$. The P-Mode decoding algorithm operates as follows.
Simultaneously,
- At the MUX array, at the input of the BP-decoder of the polar code of length $N/2$, set the control signal $c_m=0$, which means that the OP-MUX array is the input of the decoder. Set $c_{opMUX}$ such that $ext_a(0:N/4-1)$ and $ext_b(0:N/4-1)$ will be the first input and the second input, respectively. Use the polar code decoder of length $N/2$ in P-Mode, and set $c_{BPPE,internal}$ to be equal to $C_{BPPE,external}$. Set the OP-De-MUX array to direct the output to $ext_{out}(0:N/4-1)$.
- Having the same values for $c_{opMUX}$ and $c_{BPPE,internal}$, use the auxiliary array of processors to operate on the inputs $ext_a(N/4:N/2-1)$ and $ext_b(N/4:N/2-1)$ and have the output directed to $ext_{out}(N/4:N/2-1)$.
Let us now consider the time complexity (in terms of the number of clock cycles consumed by an iteration) of this design. As before, let $T(n)$ be the time complexity of the decoder of the polar code of length $N=2^n$. We assume that each call to a PE requires one clock cycle. In our design, we therefore have $$\label{eq:recBPUV}
T(n) = 2\cdot T(n-1)+7,$$ and $T(1)=4$, so $T(n)=5.5\cdot N-7=\Theta(N)$. The memory consumption, however, is $\Theta(N\cdot \log N)$, because of the memory matrices for the $\mu_{v}^{(in)}$ type of messages. The number of processing elements in this design is $N/2$. It should be noted that the suggested processor can be further improved to support some operations occurring in parallel. For example, if the PE could run one operation of $f_{(+)}(\cdot,\cdot)$ and one operation of $f_{(=)}(\cdot,\cdot)$ in parallel, we could have the last two operations in step IV done in one clock cycle, thereby reducing the free addend in (\[eq:recBPUV\]) to $6$. A further reduction would be achieved if one could perform $f_{(+)}(\cdot,\cdot)$ and direct its output to $f_{(=)}(\cdot,\cdot)$ in one clock cycle. This would result in joining the two operations in step III into one operation. Allowing the computation of $f_{(=)}(\cdot,\cdot)$ and directing its output to $f_{(+)}(\cdot,\cdot)$ in the same clock cycle would result in a consolidation of the two operations of step I into one operation (actually, the latter change may also allow consolidating the second and third computations in step IV, making the first change redundant). These changes result in $4$ as the free addend in (\[eq:recBPUV\]) and $T(1)=2$, so $T(n)=3\cdot N-4$. Naturally, these changes require the appropriate amendments in the routing units that we described before.
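As a quick sanity check on the two closed forms (our verification script, not part of the design):

    # Verify T(n) = 2*T(n-1) + 7, T(1) = 4 against 5.5*N - 7, and the
    # improved-PE variant T(n) = 2*T(n-1) + 4, T(1) = 2 against 3*N - 4.
    def latency(n, addend, base):
        return base if n == 1 else 2 * latency(n - 1, addend, base) + addend

    assert all(latency(n, 7, 4) == 5.5 * 2 ** n - 7 for n in range(1, 14))
    assert all(latency(n, 4, 2) == 3 * 2 ** n - 4 for n in range(1, 14))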
We want to note here that the remarks which we made on the SC line decoder at the end of Subsection \[sec:UVLineDecoder\] also apply here. Specifically, the long-path hazard, which calls for more efficient designs obtained by opening the recursive boxes, is also relevant for the BP decoder, specifically for the routing layers in P-Mode. Furthermore, the issue of idle clock cycles for the PEs is also a problem of this design, and the solutions of Subsections \[sec:ParDecLine\] and \[sec:LimitedParDecLine\] may be adapted to this decoder too. However, while in the SC decoder the existence of inactive PEs is due to the properties of the SC algorithm, which dictates the scheduling of the message computation, in the BP case it is due to the scheduling we chose, and not a mandatory property of the algorithm. Other types of scheduling do exist, and currently there is no evidence as to which scheduling is better (for example, in terms of the achieved error rate or in terms of the average number of iterations required for convergence). Hussami *et al.* [@Hussami2009] proposed to use the Z-shape schedule, whose description suggests a constant level of parallelism of $N$ PEs (of the type we considered here) operating all the time. This seems to give the Z-shape schedule an advantage over the GCC schedule if the number of processors is not limited (unless the technique of Subsection \[sec:ParDecLine\] is applied). It is an interesting question to find which schedule is better when the number of processors is limited. This is a matter for further research.
Hardware Architectures for General Kernels {#sec:HardArchiForOthKer}
==========================================
So far, we described algorithms for decoding of polar codes in a recursive way. This notion has enabled us to restate the hardware implementations of SC decoding for Arikan’s construction that were proposed by Leroux *et al.* [@Leroux2012]. In addition, we suggested an implementation of BP decoding for the GCC schedule. In this section, we would like to generalize these constructions to other types of kernels. Because we already covered the implementation for Arikan’s codes in some detail, we will be more brief in this section, mainly emphasizing the principal differences from the designs in Section \[sec:HrdwreArikConstr\].
Recursive Description for the SC Line Decoder for General Kernels {#sec:SCLineForGeneralKernel}
-----------------------------------------------------------------
Figure \[fig: LineDecoderGeneralHmGen\] depicts a block diagram of an SC line decoder for a general linear kernel of dimension $\ell$, over alphabet $F$. This kernel has an $\ell\times\ell$ generating matrix $G$ associated with it. We assume that this decoder has the same requirements for the inputs and outputs that were given for the $(u+v,v)$ line decoder in Subsection \[sec:UVLineDecoder\].
The basic processing element of this design (denoted by PE) gets $\ell$ llr functions (each function consists of $|F|-1$ values) and the coset vector that reflects the decisions of the previous stages. The control signal $c_u$ indicates which type of llr function the processor should output. There are $\ell$ types of computations that the processor should support, according to the different stages of decoding, as (\[eq:genSCRule\]) implies. Since we consider here a linear kernel, when decoding outer code number $k$, the assumption on ${\bf u}_0^{k-1}$ (the information sub-vector input to the kernel) is manifested by the coset vector which this sub-vector induces. This coset vector is generated by ${\bf u}_0^{k-1}\cdot G_{\rightarrow(0:k-1)}$, where $G_{\rightarrow(0:k-1)}$ is a matrix containing only the first $k$ rows of $G$. This coset vector is gradually computed and maintained in the registers array $x(0:N-1)$, as we explain in the sequel. We note that if the kernel is not linear, then each processor should get the previously decided bits associated with it, i.e. the estimated sub-vector ${\bf u}_0^{k-1}$, in order to perform (\[eq:genSCRule\]).
The way the llr computations of (\[eq:genSCRule\]) are done is an important question that we do not elaborate on here. For example, it may be beneficial to consider a trellis implementation of the decoding stages, or even to consider approximations of them, such as the *min-sum* rule [@Leroux2012], or near-ML decoding variants, such as *order statistics* or *box and match* [@Trifonov2012].
Since the outer codes in this design are of length $N/{\ell}$, the processors in the preparatory steps of the SC algorithm (i.e. steps $2\cdot r -1$, as defined in Section \[sec:RecDescOfDecAlgor\]) should generate $N/\ell$ llr functions, serving as inputs to the decoder of the outer code. Therefore, to have the maximum level of parallelism, we use $N/{\ell}$ PEs in the decoder. The embedded $N/{\ell}$ length recursive decoder is able to contribute only $N/ \ell^2$ processors, so the auxiliary array of processors needs to supply the rest, i.e. it should have $N/{\ell}-N/{\ell^2}$ additional processors.
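For instance, a two-line helper (ours) makes the processor budget explicit:

    # PE budget for a length-N decoder over an l-dimensional kernel: the
    # embedded N/l decoder supplies N/l**2 PEs; the auxiliary array the rest.
    def pe_split(N, l):
        return N // l, N // l**2, N // l - N // l**2  # (total, embedded, auxiliary)

    # pe_split(64, 4) -> (16, 4, 12)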
The *encoding unit* gets the decisions on the codewords of the outer codes from the $N/{\ell}$ length decoder. Using these decisions, it computes the estimated coset vectors of the inner codes. To support this, we use the signal *outerCodeID*, which identifies the outer code that is currently decoded. At the end of step $2\cdot r$, we have $outerCodeID = r-1$, because we have just finished decoding outer code number $r-1$ by the $N/{\ell}$ length decoder. This decoder outputs the estimation of the codeword using the signals vector $\tilde{x}(0:N/{\ell}-1)$. Now, the encoding layer performs the following operation, for $0 \leq i \leq N/\ell-1$, $$\label{eq:GeneralEnc}
x\left(\ell \cdot i:\ell\cdot(i+1)-1\right)=x\left(\ell \cdot i:\ell\cdot(i+1)-1\right)+\tilde{x}(i)\cdot G_{\rightarrow r-1}.$$ This means that we add row number $r-1$ of $G$, multiplied by the symbols of the recently estimated outer codeword, to the previously estimated coset vectors (note that we have $N/{\ell}$ coset vectors, such that $x\left(\ell \cdot i:\ell\cdot(i+1)-1\right)$ corresponds to the $i^{th}$ inner code, $0 \leq i \leq N/\ell-1$). At the end of step $2\cdot\ell$, the output of the encoding layer is the estimation of the codeword.
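A minimal sketch of this update, assuming for simplicity a binary kernel (over a general field $F$ the XOR becomes field addition and the row is scaled by the estimated symbol):

    # Encoding-layer update (eq. GeneralEnc) after decoding outer code r-1.
    # x: flat array of N symbols holding N/l coset vectors of length l;
    # x_tilde: estimated outer codeword of length N/l; G: list of l rows.
    def update_cosets(x, x_tilde, G, r, l):
        row = G[r - 1]
        for i, sym in enumerate(x_tilde):   # i-th inner code
            if sym:                         # GF(2): add the row iff sym == 1
                for j in range(l):
                    x[l * i + j] ^= row[j]
        return x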
As in the $(u+v,v)$ line decoder, we have two operation modes. The first one is S-Mode, in which the decoder gets llr functions and the indices of the frozen symbols, and outputs the hard decisions on the information word and its corresponding codeword. The second one is P-Mode, in which the decoder operates as an array of processors and performs the same type of operation according to the signal $c_{u}$.
In S-Mode, we have $\ell$ pairs of computation steps, as described below ($1\leq r \leq \ell$).
STEP $2\cdot r-1$
: \
Simultaneously,
- At the MUX array, at the input of the decoder of the polar code of length $N/\ell$, set the control signal $c_m=0$, which means that the array ${\bf \lambda}(0:N/{\ell}-1)$ is selected as an input to this unit. Set $c_u=r-1$ and supply the coset vectors $x(0:N/{\ell}-1)$ to the unit (the latter is achieved because $modeIn=0$). Use the decoder of the polar code of length $N/\ell$ in P-Mode. This means that the processors will perform the computation of the llrs of type $r-1$ according to (\[eq:genSCRule\]), where $k=r$. The values of the computations are stored in the registers array $R(0:N/{\ell^2}-1)$.
- Use the auxiliary array of processors, and perform the same computations given the rest of the llrs array, ${\bf \lambda}(N/{\ell}:N-1)$, and the rest of the cosets vector, $x(N/{\ell}:N-1)$. The outputs of the computations are stored in the registers array $R(N/\ell^2:N/{\ell}-1)$.
STEP $2\cdot r$
: - At the MUX array, at the input of the decoder of the polar code of length $N/\ell$, set the control signal $c_m=1$, which means that the values of the registers array $R(0:N/{\ell}-1)$ are inputs to this unit.
- Provide to the $N/{\ell}$ length polar code decoder, the indices of the frozen symbols from the range $[(r-1)\cdot N/\ell,r\cdot N/{\ell}-1]$. Operate the $N/{\ell}$ length polar code decoder in S-Mode, which results in decoding of the outer code number $r-1$. Store the estimated information word in the following way $u((r-1)\cdot N/\ell:r\cdot N/{\ell}-1)=\tilde{u}(0:N/{\ell}-1)$. Perform the computation of the coset vector as defined in (\[eq:GeneralEnc\]).
If this is the last step (i.e. $r=\ell$), then we give as output the content of $u(0:N-1)$ and the content of $x(0:N-1)$ (to avoid the sampling delay due to the registers, we prefer to give as output $\left[u(0:N-N/\ell-1)\,\,\,\,\, \tilde{u}(0:N/\ell-1)\right]$ instead of $u(0:N-1)$, and the output of the *encoding layer* block instead of $x(0:N-1)$).
The P-Mode operation is quite straightforward. We have the signal $modeIn=1$, which indicates that the $N$ length decoder operates in P-Mode. This causes the input coset vectors (denoted by the signals array $x_{in}(0:N-1)$) to be routed to the processors (instead of the internal coset vectors $x(0:N-1)$). The embedded $N/{\ell}$ length decoder operates in P-Mode (i.e. $mode = 1$) as well. As a result, both the auxiliary array of processors and the embedded $N/{\ell}$ length decoder compute the operation that is indicated by the signal $c_{u,in}$, and output the computation results using the signals array $L(0:N/{\ell}-1)$.
The complexity analysis is also quite simple. As an example, if we assume that the processor requires $c$ clock cycles to complete the computation of each of its $\ell$ stages, then for a code of length $N=\ell^n$ we have $T(n)=\ell\cdot T(n-1)+\ell\cdot c$ and $T(1)=\ell\cdot c$, so $T(n)=c\cdot\frac{\ell\cdot\left(N-1\right)}{\ell-1}$ clock cycles (for $\ell=2$ and $c=1$ this recovers the familiar $2\cdot N-2$). The number of $R$ registers for holding the llr functions (each function contains $|F| -1$ values) can be shown to be $N\sum_{i=1}^{\log_{\ell}N-1}\ell^{-i}=N\cdot \frac{1-N^{-1}\ell}{\ell-1}$. The long routing path hazard that we raised in the context of the $(u+v,v)$ decoder may also be of concern here. Therefore, our suggestion to open the recursion boxes and to optimize them accordingly may be relevant here as well. The ideas of sharing the auxiliary array of processors for increasing the throughput, or of decreasing the parallelism, studied in Subsections \[sec:ParDecLine\] and \[sec:LimitedParDecLine\] respectively, are also applicable here with the obvious adaptations.
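Again, a short numerical verification of the latency formula from the recurrence (ours):

    # T(n) = l*T(n-1) + l*c with T(1) = l*c, solved as c*l*(N-1)/(l-1), N = l**n.
    def latency(n, l, c):
        t = l * c
        for _ in range(2, n + 1):
            t = l * t + l * c
        return t

    assert all(latency(n, l, c) == c * l * (l ** n - 1) // (l - 1)
               for l in (2, 3, 4) for c in (1, 2) for n in range(1, 8))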
About Decoders for Mixed Kernels and General Concatenated Codes {#sec:MixedKernelsHW}
---------------------------------------------------------------
So far, we considered decoders for homogeneous kernels that may be non-binary. These codes have the nice property that the outer codes in their GCC structure are themselves polar codes from the same family (but shorter ones). Therefore, we were able to use a single embedded decoder of a code of length $N/{\ell}$ within the decoder of the code of length $N$. This embedded decoder is used $\ell$ times, each time with different inputs (i.e. the indices of the frozen symbols and the input messages). This property no longer applies when mixed kernels are employed.
Consider, for example, the $\ell=4$ dimension mixed kernel that we presented in one of our previous papers [@Presman2011]. In the decoder of the mixed code of length $N=4^n$, we should have an embedded decoder of the mixed code of length $N/4$, and an additional embedded decoder for the $RS(4)$ polar code of length $N/4$. It should be noted, however, that even here a reuse of hardware is still possible, as the decoder for the $RS(4)$ code of length $N/4$ requires an embedded decoder for the $RS(4)$ code of length $N/16$ within it. The latter decoder (and its embedded decoders) can be shared with the decoder for the mixed code of length $N/4$ (which requires an embedded $RS(4)$ decoder of the same length).
A further step in the generalization of this structure is the general concatenated structure, in which the outer codes are not required to be polar codes. This means that other types of codes may be used with their corresponding decoding algorithms. Examples of such structures, using BCH codes and near-ML decoding algorithms, were recently described by Trifonov [@Trifonov2011]. In these types of constructions, we need to have a separate decoding unit for each outer code. As in the case of mixed kernels, if the outer codes share structure and decoding algorithms, these resources may be reused, thereby enabling a more efficient design.
Summary and Conclusions {#summary-and-conclusions .unnumbered}
=======================
We considered the recursive GCC structures of polar codes, which led to recursive descriptions of their decoding algorithms. Specifically, known algorithms (SC and SCL) were formalized in a recursive fashion, and then generalized for arbitrary kernels. The BP decoding algorithm with the GCC schedule was also described. Then, recursive hardware architectures for these algorithms were considered. We restated known architectures, and generalized them for arbitrary kernels.
In our discussion, we preferred, for brevity, to give somewhat abstract descriptions of the subjects, emphasizing the main properties while neglecting some of the technical details. However, a complete hardware design requires a full treatment of all of these details (as was done by Leroux *et al.* for the $(u+v,v)$ case [@Leroux2012]). We intend to verify this design for arbitrary kernels in future work.
Another issue that needs more careful attention is the BP decoder, and specifically the proposed GCC schedule. A comparison between it and other proposed schedules (e.g. the $Z$ shaped schedule) is an interesting question, which is also a subject for further research. The usage of the BP decoder for arbitrary kernels is another interesting problem that is also worth further study. For these kernels, the way to compute the messages is well understood. However, the question of an appropriate schedule that enables the convergence of the algorithm is not clear. We note, however, that for a specific kernel, if such a schedule exists, it may be beneficial to try to define it in a recursive manner, thereby enabling the utilization of the approach in this paper to construct decoding hardware for it.
[^1]: Noam Presman and Simon Litsyn are with the School of Electrical Engineering, Tel Aviv University, Ramat Aviv 69978 Israel. (e-mails: {presmann, litsyn}@eng.tau.ac.il.).
[^2]: The construction of the GCCs is a generalization of Forney’s code concatenation method [@Forney1966].
[^3]: We note that the original line decoder, which was presented by Leroux *et al.* [@Leroux2012 Section 3.3], is not precisely the same design which we discuss here. The differences appear to be minor (especially in the area of the routing between the llr registers and the PEs), so we preferred not to distinguish it from Leroux’s design by giving it another name.
|
{
"pile_set_name": "ArXiv"
}
|
NUMBER 13-06-585-CR
COURT OF APPEALS
THIRTEENTH DISTRICT OF TEXAS
CORPUS CHRISTI - EDINBURG
JUAN CARLOS LEDEZMA, Appellant,
v.
THE STATE OF TEXAS, Appellee.
On appeal from the 103rd District Court of Cameron County, Texas.
MEMORANDUM OPINION
Before Justices Yañez, Rodriguez, and Benavides
Memorandum Opinion by Justice Yañez
Pursuant to a plea bargain, appellant, Juan Carlos Ledezma, was convicted on two
counts of aggravated assault. Appellant is now appealing these convictions, arguing that
they should be set aside because the record reflects that, at the time he waived his right
to be accused by indictment, he was not represented by counsel in a manner consistent
with article 1.051(a) of the Texas Code of Criminal Procedure.1 We affirm.
BACKGROUND
On July 19, 2006, police officers from the Brownsville Police Department received
a dispatch relating to an aggravated assault with a deadly weapon. Officers spoke with the
victim of the assault. The victim identified appellant as the suspect in the assault, and
provided officers with a description of appellant’s vehicle. Officers soon located the vehicle
and apprehended appellant. On July 26, 2006, the State brought a complaint against
appellant, charging him with aggravated assault (two counts);2 unlawful possession of a
firearm (two counts);3 evading arrest;4 and possession of cocaine in an amount more than
four grams, but less than 200 grams, within 1,000 feet of a school.5 The State sought to
enhance these charges based on appellant’s prior felony conviction.6
On July 27, 2006, the trial court appointed an attorney to represent appellant. That
same day, appellant waived (1) his entitlement to an arraignment;7 (2) his right to be
1
See TEX. CODE CRIM. PROC. ANN. art. 1.051(a) (Vernon Supp. 2008).
2
See TEX. PENAL CODE ANN. § 22.02 (Vernon Supp. 2008).
3
See id. § 46.04 (Vernon Supp. 2008).
4
See id. § 38.04 (Vernon 2003).
5
See TEX. HEALTH & SAFETY CODE ANN. § 481.115(d), 481.134(c)(1) (Vernon 2003 & Supp. 2008).
6
See TEX. PENAL CODE ANN. § 12.42(b) (Vernon Supp. 2008).
7
See TEX. CODE CRIM. PROC. ANN. art. 26.011 (Vernon Supp. 2008). Appellant’s arraignment waiver
is evidenced in a document entitled, “Written Waiver and Consent to Stipulation of Testimony, Waiver of Jury,
and Plea of Guilty.” The record evidences the trial court’s discussion of appellant’s waiver of arraignment.
We note, however, that the record also contains a form entitled, “Arraignment,” in which appellant announces
that he is ready for arraignment, and enters a plea of not guilty. Because the parties afford no attention to this
document, we shall do the same.
accused by indictment;8 (3) his counsel’s entitlement to ten days of preparation time to
prepare for a proceeding;9 (4) his right to a jury trial;10 and (5) his right to the appearance,
confrontation, and cross-examination of witnesses, thus allowing the trial court to base its
judgment on witnesses’ written statements.11 Pursuant to a plea bargain, appellant then
pleaded guilty to the two counts of aggravated assault and true to the enhancement count.
The State, in exchange, dropped the remaining charges and recommended a punishment
of forty-five years’ imprisonment. The trial court, having found appellant guilty on both
counts of aggravated assault and having found the enhancement count true, sentenced
appellant to forty-five years’ imprisonment. The trial court also signed a document
certifying that this was a plea bargain case, and appellant had no right of appeal.
The trial court received a pro se letter from appellant on August 25, 2006. The
letter, which was written in Spanish, essentially stated that appellant had not knowingly and
intelligently entered his guilty pleas. On October 6, appellant’s appellate counsel
requested the trial court’s permission to appeal. The trial court then scheduled a hearing
on appellant’s “Request for Permission to Appeal” for October 26. At the hearing,
appellant’s counsel argued a laundry list of reasons as to why appellant’s pleas should be
set aside. At the conclusion of the hearing, the trial court, orally and in writing, granted
appellant permission to appeal. This appeal then ensued.
DISCUSSION
8
See id. art. 1.41 (Vernon 2005).
9
See id. art. 1.051(e) (Vernon 2005).
10
See id. art. 1.13 (Vernon 2005).
11
See id. art. 1.15 (Vernon 2005).
Appellant argues that his pleas should be set aside because the trial court did not
have jurisdiction over him. Appellant’s argument is largely derived from the court of
criminal appeals’ opinion in King v. State, wherein the court stated:
It is well to bear in mind that a felony information acts in lieu of or as
a substitute for an indictment[,] and its validity is therefore essential to the
court’s jurisdiction. If an accused has not effectively waived his right to an
indictment in full accordance with the statute[,] the felony information is void.
An indictment is still mandatory in absence of a valid waiver. For the waiver
to be effective it must be intelligently, voluntarily and knowingly given by the
accused while represented by counsel.12
Appellant thus argues that the trial court lacked jurisdiction because the State did not
secure an indictment or a valid waiver of indictment. Appellant contends that his waiver
of indictment is ineffective because it was not intelligently, voluntarily and knowingly given
by him while represented by counsel.13 Appellant argues that being “represented by
counsel” is qualified by article 1.051(a) of the code of criminal procedure, which states that
“[t]he right to be represented by counsel includes the right to consult in private with counsel
sufficiently in advance of a proceeding to allow adequate preparation for the proceeding.”14
Accordingly, appellant asserts that though he had counsel when he waived indictment, he
was not truly represented by counsel because (1) he was taken, without advance notice,
to a proceeding where the State sought his waiver of indictment by leveraging a limited-
time plea bargain offer against him; (2) he was provided with counsel a short time before
the proceeding began; and (3) this short time was not sufficiently in advance of the
proceeding to allow him to adequately prepare with counsel for the proceeding.
12
King v. State, 473 S.W.2d 43, 51-52 (Tex. Crim. App. 1971).
13
See id. at 52.
14
TEX. CODE CRIM. PROC. ANN. art. 1.051(a).
Appellant appears to argue that the protection afforded to him by article 1.051(a)
was violated in two ways: (1) he was not afforded an opportunity to meet with counsel
sufficiently in advance of the proceeding; and (2) he and counsel were not allowed
adequate preparation time for the proceeding. We find that the latter alleged violation
subsumes the former, however, because article 1.051(a) measures what constitutes
consulting with counsel sufficiently in advance by whether adequate preparation time was
allowed for the proceeding. Accordingly, if an accused was allowed adequate preparation
time with counsel, the accused cannot be heard to complain that he or she was not
provided with the consultation of counsel sufficiently in advance of a proceeding.
Appellant has failed to demonstrate that he was not allowed time to adequately
prepare for the proceeding—i.e., his waiver of indictment. First and foremost, we observe
that article 1.051(a) states that the right to representation by counsel “includes the right to
consult in private with counsel sufficiently in advance of a proceeding to allow adequate
preparation for the proceeding.”15 The article only promises an accused the opportunity
to adequately prepare for a proceeding. The record reflects that appellant was afforded
an opportunity to consult with counsel prior to waiving indictment. The record does not
reflect that this opportunity was abridged by anyone other than appellant, when he elected
to voluntarily waive indictment, as evidenced by the record:
THE COURT: . . . Mr. Ledezma, you are, I’m sure, aware that, in order to
be prosecuted as a—for any felony offense, you have to first be indicted by
the grand jury. You have not been indicted at this point.
However, I’ve been handed a waiver of indictment where you and your
lawyer are saying that you want to give up your right to wait and see if the
15
Id. (emphasis added).
grand jury does, in fact, indict you and proceed with this particular case at
this time. Do you understand?
THE DEFENDANT: Yes, sir.
....
THE COURT: Is it your wish to waive indictment on these [counts] and allow
the [S]tate to proceed directly with these charges?
THE DEFENDANT: Yes, sir, it is.
THE COURT: The court is going to approve the waiver.
We further observe that there is no indication in the record that appellant did not
adequately prepare with counsel for the waiver of indictment. Appellant submitted a signed
“Waiver of Indictment” to the trial court, informing the court that he had “been advised by
his attorney and by the Court of his rights and the nature of the charge against him and his
right not to be tried in this case except on the indictment of a Grand Jury.”16 This
document, with nothing in the record to call it into question, constitutes sufficient evidence
of appellant’s adequate preparation with counsel. Finally, we are not persuaded by
appellant’s attempt to establish a lack of adequate preparation by pointing to the fact that
his appointment of counsel and waiver of indictment occurred on the same day. We find
that brevity of consultation, without more, does not establish inadequacy of preparation
any more than it establishes ineffectiveness of counsel,17 or an accused’s lack of
16
Emphasis added.
17
See Walker v. Caldwell, 476 F.2d 213, 218-19 (5th Cir. 1973) (citing a number of cases that
evidence the Fifth Circuit’s unwillingness to grant relief “where the petitioner attacked the effectiveness of his
appointed counsel on the sole ground of the shortness of the time his counsel spent on his behalf”)
understanding with regard to an entered plea.18 For all of these reasons, we overrule
appellant’s sole issue on appeal.19
CONCLUSION
We affirm the trial court’s judgment.
LINDA REYNA YAÑEZ,
Justice
Do not publish. TEX . R. APP. P. 47.2(b).
Memorandum Opinion delivered and filed
this the 4th day of December, 2008.
18
See Hancock v. State, 955 S.W.2d 369, 372 (Tex. App.–San Antonio 1997, no pet.) (rejecting
defendant’s contention that he did not understand the charges against him or the consequences of his plea
because he was unable to consult with counsel prior to the morning he entered his plea).
19
A court cannot find, without additional information, that a defendant’s brevity of consultation with
counsel resulted in an unknowing and involuntary plea or waiver of rights. A defendant could, however, seek
to obtain additional information that may aid his legal challenge through a hearing on a petition for writ of
habeas corpus. The Fifth Circuit’s opinions in Walker v. Caldwell, 476 F.2d 213, and Colson v. Smith, 438
F.2d 1075 (5th Cir. 1971), illustrate the type of additional information that may result in a finding that a plea
or waiver of rights was entered unknowingly and involuntarily.
|
{
"pile_set_name": "FreeLaw"
}
|
Snow Day weekend!
Our Journey in Life gets a snow day! It’s over the weekend so there are no real snow days, but it’s going to be a wonderful weekend nonetheless.
As far as our Surrogacy Journey goes, we are still waiting: waiting for the contracts to get done. This is my first journey, so I am not sure if this works the same for everyone, but for us the contract will be drawn up and then sent to my IPs. Once they read over the contract, and if it's the way they want it, they will sign it and then the contract will be sent to me.
|
{
"pile_set_name": "Pile-CC"
}
|
About Us
Located in Sharjah, United Arab Emirates, we have been serving and marketing German and American auto parts, specialising in the supply of Mercedes-Benz, BMW, Jeep, Chrysler and Dodge
automotive spare parts. For more than 13 years in the UAE and 40 years in Jordan, we have been gaining intimate knowledge
|
{
"pile_set_name": "Pile-CC"
}
|
With another Scottish swimming pool announcing they are introducing naked swim sessions, Louise talks to some naturists about the appeal of these sessions and asks a psychologist why some people are drawn to naturism.
|
{
"pile_set_name": "OpenWebText2"
}
|
Q:
Server and client side encryption in S3 (EMR config)
Quick & simple question, but couldn't find anything in the docs:
In my emrfs-site.xml, can I set both fs.s3.enableServerSideEncryption and fs.s3.cse.enabled to true? Will that provide double encryption, or it's not possible?
A:
No, "Amazon S3 SSE and CSE are mutually exclusive; you can choose either but not both." per http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-cluster-configuration-object-encryption.html
|
{
"pile_set_name": "StackExchange"
}
|
Tag Archives: joint
WASHINGTON – Scientists hope that laser-based processes may help create arterial stents and longer-lasting medical implants 10 times faster, and less expensively.
Yung Shin, a professor of Mechanical Engineering and director of Purdue’s Center for Laser-Based Manufacturing, stresses the need for new technologies to meet the huge global market for artificial hips and knees, insisting that the worldwide population of people younger than 40 who receive hip implants is expected to be 40 million annually by 2010, and double to 80 million by 2030.
Besides speeding production to meet the anticipated demand, Shin says that another goal is to create implants that last longer than the ones that are made presently.
“We have 200,000 total hip replacements in the United States. They last about 10 years on average. That means if you receive an implant at 40, you may need to have it replaced three or four times in your lifetime,” he said.
In one of their techniques, the researchers deposit layers of a powdered mixture of metal and ceramic materials, melting the powder with a laser and then immediately solidifying each layer to form parts.
Shin says that, given that the technique enables parts to be formed one layer at a time, it is ideal for coating titanium implants with ceramic materials that mimic the characteristics of natural bone.
“Titanium and other metals do not match either the stiffness or the nature of bones, so you have to coat it with something that does. However, if you deposit ceramic on metal, you don’t want there to be an abrupt change of materials because that causes differences in thermal expansion and chemical composition, which results in cracks. One way to correct this is to change the composition gradually so you don’t have a sharp boundary,” Shin said.
The gradual layering approach is called a “functionally gradient coating”.
The researchers have revealed that they used their laser deposition processes to create a porous titanium-based surface and a calcium phosphate outer surface, both designed to better match the stiffness of bone than conventional implants.
The laser deposition process enables researchers to make parts with complex shapes that are customized for the patient.
“Medical imaging scans could just be sent to the laboratory, where the laser deposition would create the part from the images. Instead of taking 30 days like it does now because you have to make a mold first, we could do it in three days. You reduce both the cost and production time,” Shin said.
According to the researchers, the laser deposition technique lends itself to the requirement that each implant be designed specifically for each patient.
“These are not like automotive parts. You can’t make a million that are all the same,” Shin said.
He says that the process creates a strong bond between the material being deposited and the underlying titanium, steel or chromium.
The researcher further reveals that tests have shown that the bond is at least seven times as strong as industry standards require.
Using computational modelling, the researchers simulate, study and optimise the processes.
The researchers, however, admit that more studies are required before the techniques are ready for commercialisation.
They have revealed that their future work will involve studying “shape-memory” materials that are similar to bone and also have a self-healing capability for longer-lasting implants.
They are also working on a technique that uses an “ultra short pulse laser” to create arterial stents, which are metal scaffolds inserted into arteries to keep them open after surgeries to treat clogs.
Since the laser pulses last only a matter of picoseconds, or quadrillionths of a second, they do not cause heat damage to the foil-thin stainless steel and titanium material used to make the stents.
The laser removes material in precise patterns in a process called “cold ablation”, which turns solids into a plasma. The patterns enable the stents to expand properly after being inserted into a blood vessel.
BIRMINGHAM – Can bacteria help build bone implants? Well, at least scientists at the University of Birmingham say “Yes”.
Lead researcher Lynne Macaskie suggests that Serratia bacteria that manufacture hydroxyapatite (HA) could be used to make stronger, more durable bone implants.
In a study, the researchers showed that the bacterial cells stuck tightly to surfaces like as titanium alloy, polypropylene, porous glass and polyurethane foam by forming a biofilm layer containing biopolymers that acted as a strong adhesive.
The HA coating then builds up over the surface. For practical use, the HA layer must stick tightly, then the material is dried and heated to destroy the bacteria.
With the help of micro-manipulation technique, the researchers measured the force needed to overcome the bioglue adhesion, and showed that dried biofilm stuck 20-times more tightly than fresh biofilm.
When coated with HA the adhesion was several times more again. Slightly roughening the surface made the bioglue much more effective.
Presently, implant materials are made by spraying-on hydroxyapatite. This does not have good mechanical strength and the spraying only reaches visible areas.
The new biocoating method reaches all the hidden surfaces as the bacteria can “swim” into hidden nooks and crannies.
Macaskie insists that bacterial HA has better properties than HA made chemically as the nanocrystals of HA produced by the bacteria are much smaller than HA crystals produced chemically, giving them a high mechanical strength.
“The bacteria are destroyed by heating, leaving just the HA stuck to the surface with their own glue – rather akin to a burnt milk-saucepan,” said Macaskie.
“We need to do more work actually to turn the materials into materials we can use in biomedicine and the environment,” she added.
The study was presented at Society for General Microbiology’s meeting at Heriot-Watt University, Edinburgh.
“The boomer and senior population is growing, so joint and bone health are top of mind for that demographic,” says Mintel’s Krista Faron, senior new-product analyst. “When it comes to supplements, calcium, vitamin D and magnesium dominate for bone health. Glucosamine, chondroitin and omega-3s are dominating the joint category. These traditional ingredients will continue to dominate but unexpected forms will emerge.”
For example, Bonemilk, a milk product with extra calcium plus glucosamine, was just recently launched. However, Minute Maid Active with glucosamine — despite the marketing heft of a leading mainstream brand — was pulled from the shelves after two years on the market. Formulators are also taking traditional joint-health ingredients and re-orienting them to the performance field, as with Vuel grape sports drink, a joint-rejuvenating beverage containing glucosamine, MSM and electrolytes.
“Once you move away from pills, joint health is an untapped area for joint-health drinks and foods,” Faron says. More consumers are now turning to foods — up 29 per cent — and beverages — up 11 per cent — fortified with joint-health ingredients, according to Nielsen data.
A compelling option is type II collagen, an ingredient that provides a naturally occurring matrix of chondroitin sulphate, hyaluronic acid and hydrolysed collagen type II, as well as glucosamine and other proteoglycans. Its dollar sales in 2007 were up 98.75 per cent, according to Nutrition Business Journal.
MSM, third in ingredient sales for the category, is worth $5 million. “A strong evidence base supports the utility of MSM for the promotion of joint health,” says Tony Keller, president of TandemRain Innovations, supplier of ActivMSM. “With ActivMSM being FDA GRAS, we see a significant opportunity for the entry of MSM into conventional foods and beverages, extending its joint-health pedigree and ushering in new applications for cardiovascular health. The suggestion of MSM being a sulphur metabolism modifier also opens up platforms for skin/hair/nails applications.” Beyond these major players, there is no shortage of ingredients looking to get in on the joint-health action.
Glucosamine is a compound found naturally in the body, made from glucose and the amino acid glutamine. Glucosamine is needed to produce glycosaminoglycan, a molecule used in the formation and repair of cartilage and other body tissues. Production of glucosamine slows with age.
Glucosamine is available as a nutritional supplement in health food stores and many drug stores. Glucosamine supplements are manufactured in a laboratory from chitin, a substance found in the shells of shrimp, crab, lobster, and other sea creatures. In additional to nutritional supplements, glucosamine is also used in sports drinks and in cosmetics.
Glucosamine is often combined with chondroitin sulfate, a molecule naturally present in cartilage. Chondroitin gives cartilage elasticity and is believed to prevent the destruction of cartilage by enzymes. Glucosamine is sometimes combined with methylsulfonylmethane, or MSM, in nutritional supplements.
Why Do People Use Glucosamine?
Osteoarthritis
Glucosamine supplements are widely used for osteoarthritis, particularly knee osteoarthritis. In osteoarthritis, cartilage — the rubbery material that cushions joints — becomes stiff and loses its elasticity. This makes the joint prone to damage and may lead to pain, swelling, loss of movement, and further deterioration.
Since the body’s natural glucosamine is used to make and repair joint cartilage, taking glucosamine as a nutritional supplement is thought to help repair damaged cartilage by augmenting the body’s supply of glucosamine.
There is promising evidence that glucosamine may reduce pain symptoms of knee osteoarthritis and possibly slow the progression of osteoarthritis. For example, a study published in the journal Archives of Internal Medicine examined people with osteoarthritis over three years. Researchers assessed pain and structural improvements seen on x-ray. They gave 202 people with mild to moderate osteoarthritis 1,500 mg of glucosamine sulfate a day or a placebo.
At the end of the study, researchers found that glucosamine slowed the progression of knee osteoarthritis compared to the placebo. People in the glucosamine group had a significant reduction in pain and stiffness. On x-ray, there was no average change or narrowing of joint spaces in the knees (a sign of deterioration) of the glucosamine group. In contrast, joint spaces of participants taking the placebo narrowed over the three years.
One of the largest studies on glucosamine for osteoarthritis was a 6-month study sponsored by the National Institutes of Health. Called GAIT, the study compared the effectiveness of glucosamine hydrochloride (HCL), chondroitin sulfate, a combination of glucosamine and chondroitin sulfate, the drug celecoxib (Celebrex), or a placebo in people with knee osteoarthritis.
Glucosamine or chondroitin alone or in combination didn’t reduce pain in the overall group, although people in the study with moderate-to-severe knee pain were more likely to respond to glucosamine.
One major drawback of the GAIT Trial was that glucosamine hydrochloride was used rather than the more widely used and researched glucosamine sulfate. A recent analysis of previous studies, including the GAIT Trial, concluded that glucosamine hydrochloride was not effective. The analysis also found that studies on glucosamine sulfate were too different from one another and were not as well-designed as they should be, so they could not properly draw a conclusion. More research is needed.
Still, health care providers often suggest a three month trial of glucosamine and discontinuing it if there is no improvement after three months. A typical dose for osteoarthritis is 1,500 mg of glucosamine sulfate each day.
Other Conditions
Other conditions for which glucosamine is used include rheumatoid arthritis, inflammatory bowel disease (Crohn’s disease and ulcerative colitis), chronic venous insufficiency, and skin conditions, although further evidence is needed.
Side Effects and Safety of Glucosamine
Most studies involving humans have found that short-term use of glucosamine is well-tolerated. Side effects may include drowsiness, headache, insomnia, and mild and temporary digestive complaints such as abdominal pain, poor appetite, nausea, heartburn, constipation, diarrhea, and vomiting. In rare human cases, the combination of glucosamine and chondroitin has been linked with temporarily elevated blood pressure and heart rate and palpitations.
Since glucosamine supplements may be made from shellfish, people with allergies to shellfish should avoid glucosamine unless it has been confirmed that it is from a non-shellfish source. The source of glucosamine is not required to be printed on the label, so it may require a phone call to the manufacturer.
There is some evidence suggesting that glucosamine, in doses used to treat osteoarthritis, may worsen blood sugar, insulin, and/or hemoglobin A1c (a test that measures how well blood sugar has been controlled during the previous three months) levels in people with diabetes or insulin resistance.
Theoretically, glucosamine may increase the risk of bleeding. People with bleeding disorders, those taking anti-clotting or anti-platelet medication, such as warfarin, clopidogrel, and Ticlid, or people taking supplements that may increase the risk of bleeding, such as garlic, ginkgo, vitamin E, or red clover, should not take glucosamine unless under the supervision of a healthcare provider.
Visitors
Blogroll
Sign Up For Daily Posts in Your Mailbox
Name*
Email*
Telemedicine Consults For Everyone – CALL 888-TELEMED
Some Random Thoughts
Never, ever exercise in front of a TV or while reading. You lose 50 percent of the benefit of the exercise by not hearing and feeling your heart rate, your sweat and the pain levels that need to be encountered in fitness training.
|
{
"pile_set_name": "Pile-CC"
}
|
We developed a method to study the existence and distribution of protein-free spaces in the cytoplasm, by the exclusion of probes of known dimensions from the chemically cross-linked and freeze-fractured cytoplasm matrix. The results were observed by electron microscopy, and the probe utilized to explore the cytoplasm compaction was non-cationized ferritin (100 Å in diameter). We observed that cells in a resting state have crowded cytoplasm that, after cross-linkage by glutaraldehyde, is impermeable to ferritin. In contrast, ferritin freely penetrates the cross-linked cytoplasm of growing cells. Variations in cytoplasm permeability to ferritin were observed in single populations of lymphocytes from peripheral blood. These variations are in accordance with the heterogeneous character of these populations. Results obtained with lymphocytes activated with phytohaemagglutinin suggest that lymphocytes with permeable cytoplasms probably correspond to activated cells. In contrast to lymphocytes, homogeneous populations of cells (neutrophils from peripheral blood and cells from specific stages of differentiation of the fungus Phytophthora palmivora) respond uniformly to ferritin penetration. We also observed that cytoplasm compaction can change during differentiation and that the distribution of protein-free spaces in muscle cells characterizes different physiological states. We conclude that the exclusion of probes from freeze-fractured cytoplasm is an easy and convenient method to evaluate cytoplasm compaction and to detect significant changes in the distribution of protein-free spaces at selected cellular states.
|
{
"pile_set_name": "NIH ExPorter"
}
|
For 20 minutes after a bad tackle in his junior year of college, Malik Boynton didn't know if he'd ever play football again.
"The more I moved, the more my body shut down, limb by limb," said the former Pittsburgh Steelers prospect, who recently signed to the Winnipeg Blue Bombers.
The 2016 tackle, which came while Boynton was playing with Austin Peay State University in Tennessee, could have ended his football career. He lost feeling in first his left arm, then his right arm, then his legs buckled and he fell flat on his face.
He started to regain feeling as he lay in the back of the ambulance, and would later learn he had suffered a spinal concussion.
"It was just … a crazy injury, like, one of those one-in a million injuries, and I was blessed to bounce back from it. But the next day I was moving around," Boynton said in an interview with Ismaila Alfa on CBC Manitoba's afternoon radio show, Up to Speed.
Life up to that point hadn't always been blessed for Boynton. Growing up in Detroit, his family went through a series of struggles in his teenage years, beginning with the death of his mother from breast cancer.
His father was laid off from his job shortly after and the family moved in with Boynton's aunt, who also died of cancer only a couple of months after they moved in.
Boynton bounced around from house to house, often staying with teammates and coaches, sleeping where he could.
Staying positive during that time was "the only option," said Boynton, who says his younger brother didn't make the same choices and ended up in prison.
"I took a different route. I just knew the consequence at the end of that. I wanted to set a good example for my siblings," he said.
Boynton was invited to try out at the Pittsburgh Steelers camp, but left without a contract. He then signed briefly with the Memphis Express, a team in the Alliance of American Football league, which folded earlier this month after being in existence for only eight weeks.
That opened up the opportunity for Boynton to sign with the CFL's Blue Bombers.
"I'm extremely grateful and excited to start the process," he said.
Boynton will play defensive back for the Bombers in the upcoming season.
|
{
"pile_set_name": "OpenWebText2"
}
|
Q:
How not to show image if there is none?
I have three tables: Category, PostAd and PostImage.
To show posts I have to fetch them through the Category table.
My controller code:
$data['category'] = Category::with(['child','children','parent','postads.postimage','postads'=>function($q) use ($asc){
$q->orderBy('created_at',$asc);
}])->where('id',$id)->get();
To display the image I have to use the nested relationship postads.postimage.
Blade code:
@foreach($category as $cat)
@foreach($cat->postads as $c)
<a href="{{route('particular',['id'=>$c->id])}}">
<li>
@foreach($c->postimage as $pi)
<img src="{{asset('thumbnail/'.$pi->image)}}" alt="No image" style="margin-top: 5px" >
@endforeach
<section class="list-left">
<h5 class="title">{{$c->adtitle}}</h5>
<span class="adprice">Rs. {{$c->price}}</span>
<p class="catpath">{{$cat->categoryname}} » {{$cat->categoryname}}</p>
</section>
<section class="list-right">
@auth
<div class="like1">
<i class="fas fa-heart" pid="{{$c->id}}" uid="{{auth()->user()->id}}"></i>
</div>
@endauth
<span class="date">{{date('D',strtotime($c->created_at))}}-{{date('M',strtotime($c->created_at))}}-{{date('Y',strtotime($c->created_at))}}</span>
<span class="cityname">{{$c->address}}</span>
</section>
<div class="clearfix"></div>
</li>
</a>
@endforeach
@endforeach
When a post has no images, the inner foreach outputs nothing, so my design gets damaged.
I want to show the image alt text (or a default image) in that case, but as it stands nothing is shown and the layout breaks.
A:
You can use @forelse
@forelse($c->postimage as $pi)
<img src="{{asset('thumbnail/'.$pi->image)}}" alt="No image" style="margin-top: 5px" >
@empty
<!-- some HTML, default image or something, whatever you need -->
@endforelse
https://laravel.com/docs/5.8/blade#loops
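If you only ever show one thumbnail per ad, another option (assuming postimage is an Eloquent collection and you ship your own placeholder file, e.g. thumbnail/default.png) is to take the first image and fall back to the placeholder:
@php $pi = $c->postimage->first(); @endphp
<img src="{{ asset('thumbnail/' . ($pi->image ?? 'default.png')) }}" alt="Ad thumbnail" style="margin-top: 5px">
The null coalescing ?? handles the case where $pi is null, so the markup stays a single <img> tag whether or not the post has images, and your layout is unaffected.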
|
{
"pile_set_name": "StackExchange"
}
|
HMS Meteorite
HMS Meteorite was an experimental U-boat developed in Germany, scuttled at the end of World War II, and subsequently raised and commissioned into the Royal Navy. The submarine was originally commissioned into the Kriegsmarine in March 1945 as U-1407. It was built around a Walter engine fuelled by high-test peroxide (HTP).
History
The three completed German Type XVIIB submarines were scuttled by their crews at the end of the Second World War, U-1405 at Flensburg, and U-1406 and U-1407 at Cuxhaven, all in the British Zone of Occupation. U-1406 and U-1407 were scuttled on 7 May 1945 by Oberleutnant zur See Gerhard Grumpelt even though a superior officer, Kapitän zur See Kurt Thoma, had prohibited such actions. Grumpelt was subsequently sentenced to seven years' imprisonment by a British military court.
At the Potsdam Conference in July 1945 U-1406 was allocated to the US and U-1407 to Britain and both were soon salvaged.
Royal Navy service
U-1407 was salvaged in June 1945, and transported to Barrow-in-Furness, where she was refitted by Vickers with a new and complete set of machinery also captured in Germany, under the supervision of Professor Hellmuth Walter. Because she was intended to be used solely for trials and possibly as a high-speed anti-submarine target, her torpedo tubes were removed. She was commissioned into the Royal Navy on 25 September 1945 and renamed HMS Meteorite.
During 1946 Meteorite carried out a series of trials under the guidance of Walter and his original team from Germaniawerft, Kiel. The trials raised considerable interest in the possibility of HTP as an alternative to nuclear power for air-independent propulsion, and the Admiralty placed an order for two larger experimental Walter boats based on the German Type XXVI, Explorer and Excalibur, to be followed by an operational class of 12 boats.
Meteorite was not popular with her crews, who regarded the boat as a dangerous and volatile piece of machinery. She was difficult to control due to aircraft-type controls and a lack of forward hydroplanes. She was officially described as "75% safe".
Fate
Meteorite's Royal Navy service came to an end in September 1949, and she was broken up by Thos W Ward of Barrow-in-Furness.
Category:German Type XVII submarines
Category:Ships built in Hamburg
Category:1945 ships
Category:U-boats commissioned in 1945
Category:U-boats scuttled in 1945
Category:World War II submarines of Germany
Category:Submarines of the Royal Navy
Category:Cold War submarines of the United Kingdom
Category:Experimental submarines
Category:Maritime incidents in May 1945
|
{
"pile_set_name": "Wikipedia (en)"
}
|
using System;
using System.IO;
using System.Collections.Generic;
using System.ComponentModel;
using System.Reflection;
using System.Runtime.InteropServices;
using System.Text;
using Excel = Microsoft.Office.Interop.Excel;
namespace Interop
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Interop Assemblies Performance Test - 5000 Cells.");
Console.WriteLine("Write simple text.");
// start excel, and get a new sheet reference
Excel.Application excelApplication = CreateExcelApplication();
Excel.Workbooks books = excelApplication.Workbooks;
Excel.Workbook book = books.Add(Missing.Value);
Excel.Sheets sheets = book.Worksheets;
Excel.Worksheet sheet = sheets.Add() as Excel.Worksheet;
// do test 10 times
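// NOTE: each Excel.Range below is a COM Runtime Callable Wrapper; we keep
// every reference so it can be released explicitly after each pass, otherwise
// orphaned RCWs can keep the EXCEL.EXE process alive even after Quit().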
List<MarshalByRefObject> comReferencesList = new List<MarshalByRefObject>();
List<TimeSpan> timeElapsedList = new List<TimeSpan>();
for (int i = 1; i <= 10; i++)
{
DateTime timeStart = DateTime.Now;
for (int y = 1; y <= 5000; y++)
{
string rangeAddress = "$A" + y.ToString();
Excel.Range range = sheet.get_Range(rangeAddress);
range.Value = "value";
comReferencesList.Add(range as MarshalByRefObject);
}
TimeSpan timeElapsed = DateTime.Now - timeStart;
// display info and dispose references
Console.WriteLine("Time Elapsed: {0}", timeElapsed);
timeElapsedList.Add(timeElapsed);
foreach (var item in comReferencesList)
Marshal.ReleaseComObject(item);
comReferencesList.Clear();
}
// display info & log to file
TimeSpan timeAverage = AppendResultToLogFile(timeElapsedList, "Test1-Interop.log");
Console.WriteLine("Time Average: {0}{1}Press any key...", timeAverage, Environment.NewLine);
Console.Read();
// release & quit
Marshal.ReleaseComObject(sheet);
Marshal.ReleaseComObject(sheets);
Marshal.ReleaseComObject(book);
Marshal.ReleaseComObject(books);
excelApplication.Quit();
Marshal.ReleaseComObject(excelApplication);
}
/// <summary>
/// creates a new excel application
/// </summary>
/// <returns></returns>
static Excel.Application CreateExcelApplication()
{
Excel.Application excelApplication = new Excel.Application();
excelApplication.DisplayAlerts = false;
excelApplication.Interactive = false;
excelApplication.ScreenUpdating = false;
return excelApplication;
}
/// <summary>
/// writes list items to a logfile and appends the average of the items at the end
/// </summary>
/// <param name="timeElapsedList">a list with log results</param>
/// <param name="fileName">name of the logfile in the current assembly folder</param>
/// <returns>average of timeElapsedList</returns>
static TimeSpan AppendResultToLogFile(List<TimeSpan> timeElapsedList, string fileName)
{
TimeSpan timeSummary = TimeSpan.Zero;
string logFile = Path.Combine(Path.GetDirectoryName(Assembly.GetEntryAssembly().Location), fileName);
if (File.Exists(logFile))
File.Delete(logFile);
foreach (TimeSpan item in timeElapsedList)
{
timeSummary += item;
string logFileAppend = item.ToString() + Environment.NewLine;
File.AppendAllText(logFile, logFileAppend, Encoding.UTF8);
}
TimeSpan timeAverage = TimeSpan.FromTicks(timeSummary.Ticks / timeElapsedList.Count);
File.AppendAllText(logFile, "Time Average: " + timeAverage.ToString(), Encoding.UTF8);
return timeAverage;
}
}
}
|
{
"pile_set_name": "Github"
}
|
(CNN) At President Donald Trump's instruction, Vice President Mike Pence walked out of an NFL game in his home state on Sunday after players from the San Francisco 49ers knelt during the National Anthem before their game versus the Indianapolis Colts.
The episode drew scrutiny as an expensive political stunt, costing hundreds of thousands of dollars in flights alone, but drew praise from Trump, who hailed Pence's actions on Twitter.
So how did the walk-out come to be? Here's the timeline:
-- NFL announces its regular season schedule, including an October 8 game between the Indianapolis Colts and the San Francisco 49ers. Some players for the 49ers, led by then-quarterback Colin Kaepernick, began kneeling during the anthem last season to protest racial inequality and police brutality.
-- The Colts announce a statue of Peyton Manning, the longtime Indianapolis quarterback, will be unveiled outside Lucas Oil Stadium on October 8. As a former Indiana governor, Pence has long expressed his appreciation for Manning.
Sept. 23 -- Trump criticizes players who kneel during the National Anthem during a campaign stop in Alabama. He spends the rest of the weekend fueling the controversy on Twitter and speaking to team owners.
Sept. 25 -- Pence offers support for Trump's views -- which had sparked harsh backlash -- during his own campaign rally in Alabama: "We've all got a right to our opinions, but I don't think it's too much to ask the players in the National Football League to stand for our National Anthem."
Unknown -- The Vice President's office, in coordination with the White House, makes plans for Pence to attend the Colts-49ers game. A senior Pence aide said on Sunday planning for the trip had been in the works for "several weeks." Law enforcement sources said the trip had been on their radar for a while.
Friday, Oct. 6 -- As Pence visits storm-damaged Puerto Rico and the US Virgin Islands, his office announces the Vice President's weekend itinerary: a visit to Las Vegas after the mass shooting there, followed by the trip to Indianapolis to watch the game and participate in the Manning celebration.
Unknown -- Trump and Pence discuss the Colts-49ers game and the prospect that a protest could arise. "They agreed if a protest took place he would leave," a White House official said. It's unclear when their conversation took place. White House officials have yet to specify the timing of this discussion when asked by CNN.
Saturday, Oct. 7 -- Pence flies from Joint Base Andrews, outside Washington, to Las Vegas, where he speaks at a prayer walk for victims of the mass shooting. He departs Las Vegas around 5:45 p.m. ET and lands in Indianapolis just past 9 p.m. ET. He spends the night in a newly renovated Marriott downtown.
Sunday, Oct. 8
11:27 a.m. ET -- Pence tweets a 2014 photo of himself and wife Karen in Colts gear, writing: "Looking forward to cheering for our @Colts & honoring the great career of #18 Peyton Manning at @LucasOilStadium today. Go Colts!"
Looking forward to cheering for our @Colts & honoring the great career of #18 Peyton Manning at @LucasOilStadium today. Go Colts! pic.twitter.com/C3aCYUNpqG — Vice President Pence (@VP) October 8, 2017
11:56 a.m. ET -- Pence departs his hotel in a motorcade en route to Lucas Oil Stadium. Second lady Karen Pence, aides, Secret Service, and a traveling press pool accompany him. The stadium is just around the corner from the Marriott, and Pence arrives a few minutes later, passing crowds of Colts fans in his armored vehicle along the route.
12 p.m. ET hour -- Aides to the Vice President tell the traveling press pool they will remain inside their van, since Pence may depart early from the football stadium.
12 p.m. ET hour -- Pence meets privately with Manning inside the stadium.
12:42 p.m. ET -- The traveling pool reporter indicates he hasn't been given access to the stadium (sometimes when covering politicians at sporting events, the traveling press isn't permitted to shoot footage inside since coverage of those events is typically exclusive to a particular television network).
Approx. 12:55 p.m. ET -- The National Anthem is played inside Lucas Oil Stadium, during which some players for the 49ers kneel. Pence, wearing a blue blazer, stands and places his hand on his heart, according to a photo he later tweeted. Mrs. Pence, wearing a Colts jersey, does the same. The Pences appeared to be sitting in a box on the upper level of the stadium.
1:08 p.m. ET -- Pence tweets that he left the game after some players kneeled: "I left today's Colts game because @POTUS and I will not dignify any event that disrespects our soldiers, our Flag, or our National Anthem."
1:10 p.m. ET -- Pence's motorcade departs Lucas Oil Stadium to retrace the earlier route back to the Marriott. The traveling pool reporter says the first indication that Pence was leaving the game early came in Pence's tweet.
1:21 p.m. ET -- Pence begins a Twitter thread explaining his decision to leave the game early: "At a time when so many Americans are inspiring our nation with their courage, resolve, and resilience...now, more than ever, we should rally around our Flag and everything that unites us... While everyone is entitled to their own opinions, I don't think it's too much to ask NFL players to respect the Flag and our National Anthem."
1:24 p.m. ET -- Pence tweets an image of his full statement laid out on a White House-designed template.
1:32 p.m. ET -- The White House emails Pence's full statement to reporters.
1:42 p.m. ET -- Pence tweets a photo of himself and Mrs. Pence standing during the National Anthem.
We were proud to stand - with all our @Colts - for our soldiers, our flag, and our National Anthem 🇺🇸 pic.twitter.com/mkZiKMkPDD — Vice President Pence (@VP) October 8, 2017
1:54 p.m. ET -- Pence and his entourage depart the Marriott en route to the Indianapolis airport.
2:16 p.m. ET -- President Trump, writing from the Trump National Golf Club in Sterling, Virginia, praises Pence's decision on Twitter and reveals he requested Pence leave early: "I asked @VP Pence to leave stadium if any players kneeled, disrespecting our country. I am proud of him and @SecondLady Karen."
2:25 p.m. ET -- Air Force Two is wheels up to Los Angeles. On the flight, the senior Pence aide offers the following statement to the traveling pool reporter:
"Attending the Colts game celebrating Peyton Manning was planned and scheduled for several weeks. The Vice President added a last minute trip to speak at the unity prayer walk in Las Vegas and pay respects to those who lost their lives. He wanted to keep his commitment to attend the Colts game hoping all the players would stand for the National Anthem. All the Colts players did stand for the National Anthem, but several 49ers did not. As he had discussed with the President, when several 49ers players disrespected the flag and the Nation Anthem, the Vice President decided to leave the game."
3:00 p.m. ET -- Pence changes the cover photo on his Twitter page to the image of him and his wife standing during the National Anthem at the Colts game.
7:02 p.m. ET -- Pence lands in Los Angeles to attend a private fundraiser in Beverly Hills. He also makes a stop at the home of his daughter Charlotte, who recently moved to the Los Angeles area. Pence remained overnight at a hotel near the Los Angeles airport.
10:55 p.m. ET -- Amid questions about the cost of his trip to Indianapolis, Pence's office releases a statement explaining he would have returned to Washington if he hadn't been attending the Colts game in Indiana:
"The Vice President was not going to miss the Las Vegas memorial prayer walk on Saturday, which he was honored to attend on behalf of President Trump. If the Vice President did not go to Indiana for the Colts game, he would have flown back to D.C. for the evening -- which means flying directly over Indiana. Instead, he made a shorter trip to Indiana for a game that was on his schedule for several weeks."
Monday, Oct. 9
7:05 a.m. ET -- From the White House, Trump dispatches another tweet about the matter, writing: "The trip by @VP Pence was long planned. He is receiving great praise for leaving game after the players showed such disrespect for country!"
|
{
"pile_set_name": "OpenWebText2"
}
|
title: $:/language/Docs/Types/image/svg+xml
description: Immagine SVG (Scalable Vector Graphics)
name: image/svg+xml
group: Immagine
|
{
"pile_set_name": "Github"
}
|
Use of INR to assess degree of anticoagulation in patients who have dental procedures.
Dental professionals frequently treat patients who are receiving anticoagulation therapy. Proper treatment may require adjustment of the anticoagulant dose usually on the basis of the patient's current prothrombin time. This test has been shown to be less accurate than previously thought. The international normalized ratio is another method that attempts to standardize the degree of anticoagulation and to improve reproducibility of results. This system is slowly being implemented in laboratories in the United States. Practitioners who treat patients taking anticoagulants need to be aware of this system in order to make appropriate management decisions.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Q:
Why doesn't header('Location') redirect even though there is no whitespace output?
Why on earth won't this work? There are no stray spaces and no output before the redirect; the redirect comes right after configuration.php, which just holds my DB connection. What gives?
<?php
require('configuration.php');
$voucher = $_SESSION['voucher'] ;
$result = mysql_query("SELECT used FROM codes where code='".$voucher."'");
$row =mysql_fetch_row($result);
if ( $row['used'] == "1" ) {
header('Location: invalid.php');
exit;
}
if ( $row['used'] == "0" ) {
header('Location: valid.php');
exit;
}
?>
A:
Chances are configuration.php has something being output. (Remember that require/include outputs the file's content at that point in time, so any whitespace or characters in it would also be output then.)
Out of curiosity, does it work if you do the following:
<?php
ob_start();
require('configuration.php');
// your code with header(...);
ob_end_flush();
If it works with the ob_start/ob_end_flush calls in place, configuration.php is outputting something. However, some things to note:
Never send data coming in from a client (via $_GET/$_POST/$_SESSION) directly to SQL. Even though you may be setting the session data, depending where it comes from (a cookie, for example) it's very easy for someone to start poking around your database. Use a parameterized query instead; see the sketch below.
Location should be a fully-qualified URL (http://mydomain.com/myfile.php, not just myfile.php)
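For reference, a minimal sketch of the parameterized version. It assumes configuration.php is changed to expose a PDO connection as $pdo and to call session_start() (both are assumptions, since the question doesn't show that file):
<?php
require('configuration.php'); // assumed to provide $pdo (PDO) and to call session_start()

// placeholder keeps the session value out of the SQL string entirely
$stmt = $pdo->prepare('SELECT used FROM codes WHERE code = ?');
$stmt->execute([$_SESSION['voucher']]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row && $row['used'] == '1') {
    header('Location: http://mydomain.com/invalid.php');
} else {
    header('Location: http://mydomain.com/valid.php');
}
exit;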
|
{
"pile_set_name": "StackExchange"
}
|
Comparison of ACIOL Retention With IOL Exchange in Patients Undergoing Descemet Stripping Automated Endothelial Keratoplasty.
To investigate clinical outcomes in the management of anterior chamber intraocular lenses (ACIOLs) in patients requiring Descemet stripping automated endothelial keratoplasty (DSAEK) for pseudophakic corneal edema. This is a retrospective review of DSAEK procedures performed at a single center between May 1, 2006, and August 1, 2014. Forty-three eyes (41 patients) with pseudophakic corneal edema and an ACIOL were identified. In 26 eyes (60.5%), the ACIOL was retained [intraocular lens retention (IOLR) group], and in 17 eyes (39.5%), intraocular lens exchange [(IOLX) group] was concurrent with DSAEK. No significant difference was noted between the IOLR and IOLX groups for the following: the incidence of primary graft failure (7.7% vs. 5.9%; P = 1.0); the incidence (3.8% vs. 0.0%; P = 1.0) or rate (0.036 per eye-year vs. 0 per eye-year; P = 0.28) of secondary graft failure; or the incidence (7.7% vs. 11.8%; P = 1.0) or rate (0.056 per eye-year vs. 0.073 per eye-year; P = 0.69) of endothelial rejection. However, the incidence (23.1% vs. 58.8%; P = 0.026) and rate (0.291 per eye-year vs. 0.475 per eye-year; P = 0.033) of increased intraocular pressure were significantly higher in the IOLX group. There were more complications in the IOLX group, although the difference was not significant (7.7% vs. 29.4%; P = 0.093). There is no significant difference in the incidence of primary graft failure or in the rate of secondary graft failure or endothelial rejection in eyes with ACIOL retention or exchange. However, as IOLX is associated with intraoperative and postoperative complications and an increased rate of postoperative intraocular pressure elevation, we recommend performing DSAEK with retention of well-positioned ACIOLs in these eyes.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Threaded inserts are used in a variety of situations to provide a reliable threaded hole for a fastener. For instance, threaded inserts are used in workpieces that are too soft to provide reliable threads, in workpieces that are too thin to accept threads, or in a workpiece with damaged or stripped threads. The threaded insert can be inserted into an opening in such a workpiece so that the workpiece can still receive a threaded fastener.
A particular variety of threaded insert is a helical insert, which can be seen in prior art FIG. 8. As can be seen in the semi-sectional portion of FIG. 8, the helical insert 10 is a coil of wire that typically has a diamond or hexagonal shaped cross-section such that exterior and interior v-shaped edges are created on the insert. In that way, when the coil is compressed, the cross-section allows the exterior set of v-shaped edges 13 to engage the workpiece 14 (not shown). The inner set of v-shaped edges 15 then serves as the threads to receive a threaded fastener. The downstream end of the helical insert 10 features a medially projecting tail 17, referred to as a tang. The tang 17 aids in installation of the helical insert. On the final coil of the helical insert, proximal to the tang 17, is a notch 19. As described below, the notch 19 allows the tang 17 to be removed after installation of the helical insert.
Installation of a helical insert on a manufactured part is typically accomplished in four steps. First, the receiving hole is drilled into the workpiece. Second, threads for receiving the helical insert are tapped into the hole. Third, the helical insert is installed in the workpiece using an installation tool. The installation tool is a cylindrical rod having a diameter of a size to accommodate the helical insert. The end of the installation tool has a stepped surface to engage and drive the tang of the helical insert during installation, and the portion of the installation tool immediately superior to the end has threads to receive the helical insert. In the installation step, the helical insert is wound onto threads at the end of the installation tool until the tang contacts the flat end of the tool. Then, the installation tool is inserted into the hole in the workpiece and rotated in a first direction to feed the helical coil into the hole until the proper depth is achieved. Then the installation tool is rotated in a reverse direction to release the helical insert in the hole. In the final step, another tool having a diameter smaller than the diameter of the helical insert is inserted through the helical insert until it contacts the tang. A hammer is used to strike the tool, which breaks the tang from the helical insert at the location of the notch.
When manufacturing parts that require a helical insert, correctly installing the helical insert into the part is important to assure the quality of the final product. When working on a high volume of parts, a significant number of those parts can fail to contain a helical insert or fail to have the tang removed from the helical insert. Such part defects can cause delays and increased costs for customers, who have to spend time and money to fix these parts or order new ones.
Thus a need exists in the art for a device that a manufacturer can use to check the quality of manufactured parts that include helical inserts.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Private keys are supposed to be, well… private. If a key is out there “in the wild”, anyone could have it. Anyone could be using it to compromise your security.
Want to know if a private key is in the wild? This site is for you. We collect private keys, and provide you with proof they’re compromised, so you can know when a key is untrustworthy.
To see whether a key is in our database, look it up.
If you have a compromised key, please submit it.
If you have questions or comments, feel free to check out the FAQ or contact us.
|
{
"pile_set_name": "OpenWebText2"
}
|
Q:
Jupyter Notebook figure size settings
If I plot a graph or display a table in Jupyter Notebook, the figures are really small and unreadable. What is the best way to set the figure size settings globally in Jupyter Notebook? For comparison, in Quantopian's Notebook version plots and tables are a lot larger. I know there are separate settings for matplotlib and other libraries, but I would like to set global settings. I also tried this setting, but it didn't work.
%config InlineBackend.figure_format='retina'
A:
If "globally" means for all successive outputs in a given notebook, I use
plt.rcParams['figure.figsize'] = (16,8)
after importing matplotlib. This affects all subsequent plots. If you wish a single configuration for all notebooks, the reply by Louise Davies is more appropriate.
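For instance, a typical first cell might look like this (the numbers are just illustrative):
import matplotlib.pyplot as plt

plt.rcParams['figure.figsize'] = (16, 8)  # (width, height) in inches; affects every plot created afterwards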
A:
I'm pretty sure there's no "global setting" for all packages in jupyter. The best you can do is configure the defaults for matplotlib and pandas etc. Note that other graphing libraries have larger default outputs (off the top of my head I know plotly has full-width graphs by default)
To configure global python settings, run ipython profile create to create a ~/.ipython/profile_default/ipython_kernel_config.py file, and add the line:
c.InlineBackend.rc = { 'figure.figsize': (20.0, 10.0) }
See https://matplotlib.org/users/customizing.html for more info on customising matplotlib
As for pandas, I don't think it has global options for style. You could add stuff to jupyter's custom.css file (~/.jupyter/custom/custom.css) like so:
.dataframe {
font-size: 14px !important;
}
|
{
"pile_set_name": "StackExchange"
}
|
Military Times: SitRep Online for March 3, 2014
In this episode: The Army is prepping for new camo, thousands of airmen are facing retraining and sailors safe from end strength cuts.
|
{
"pile_set_name": "Pile-CC"
}
|
Kindness Activities Preschool Anchor Chart
My preschoolers and I have been discussing the concept of kindness a lot recently. In addition to talking about kindness and reading books on the topic, the kiddos made a kindness activities anchor chart.
#PLAYfulpreschool kindness activities
By my definition, an anchor chart is basically a written reference for children. Such charts help children to learn about a specific topic, and they often include examples and pictures. Many anchor charts are made by teachers, but I thought involving the students would be more meaningful.
Materials we used
Chart paper
Magazines
Photos of our class through the year
Directions
Write down a simple definition of kindness. I wrote, "What is Kindness? Kindness is when we are friendly, thoughtful, and generous." The kiddos and I have talked about being kind many times – how it means we're being a friend by helping, how we are thinking of other people, and how we can share our time (and take turns with classroom toys).
Call the children over, in small groups or individually. Engage in a conversation about kindness. Some possible questions to ask the children:
What does it mean to be kind?
What does a kind friend do?
Can you tell me about being kind and thoughtful?
Next, ask the children to find examples of kindness. This can be done by looking through magazines, going through class/family pictures, and leafing through books. In the case of magazines and photos, have children cut out kindness examples and glue the pictures to the chart.
Ask the children, “What do you see that’s kind in this picture? How is he/she being kind here?” Record the children’s thoughts on the chart.
Once the chart is all done, be sure to hang it up where the kiddos can reference it! If there are unkind words or actions, use the chart to help bring the children's attention back to what kindness means.
More kindness activities
Not all of the kiddos in my class were interested in helping to make the anchor chart. That’s okay, because I know that they’ll learn about being kind in other ways. Here are more ideas for talking with kids about kindness:
Read books that illustrate what it means to be kind, as well as what it means to be unkind. Books can be great examples and lead to some in-depth discussions with young children.
Act out examples of being kind! I’ve found that preschoolers usually enjoy role playing. Humor tends to be an important part of it, especially if I’m the one being unkind and they have to tell me the better choices to make!
What a brilliant idea – and I've learnt what an anchor chart is. I see them being used in my daughter's preschool classroom a lot, and they are a fantastic resource for the kids as well as a great insight for parents to see as they pick up and drop off the children.
|
{
"pile_set_name": "Pile-CC"
}
|
Online E-Commerce Business Ideas
Giants like Amazon, Facebook, and Google have made getting your products into the hands of consumers more feasible than ever. I quickly came to see these platforms as partners in online business, knowing that they want your products to succeed as much as you do; when they do, everybody wins. So why are so many sellers working around them, and not with them? This one shift in outlook is key to growing your online business.
However You Approach E-Commerce, the More You Work With These Platforms, the More Success Everyone Can Have. Here's How:
1. Recognize That Their Customers Are Your Customers.
The overwhelming majority of customers report leaving an online service or product because of a poor customer experience. With that in mind, make buying your products on Amazon easy for all customers, and pay attention to feedback: when people enjoy the experience, they're willing to come back and buy your products through Amazon again.
There was a period of time at the beginning of 2019 when we had a product quality issue that resulted in a flood of return requests. Instead of simply providing customers with a refund, we also sent them a full replacement and included extra freebies along with it. It's safe to say we went above and beyond, which helped customers come back, not only to us but also to Amazon.
Focus resources and energy on providing incredible customer support through Amazon. Reply to reviews and reach out to customers through the platform for their input and ideas. When customers receive an experience like this, they develop trust in your brand, and also in the platform they're buying from.
2. Play by the Rules and Everyone Wins.
If you try to get an edge on the platform by creating clickbait content, sharing fake news, or listing dishonest reviews, you'll be manipulating their customers and ultimately building a sense of mistrust. This negative impact on the perception and value of the platform will cost it customers, and you had better believe your business won't be welcome there in the future.
As they say, if you want to go fast, go alone; if you want to go far, go together. That is why it's important to collaborate by following the rules.
Cutting corners and breaking the rules on a selling platform may get you a spurt of success, but that's not going to sustain you for long-term growth. I have never sought out fifteen minutes of fame. Instead, the focus has been on building a lifetime of growth and success, always keeping the long-term goals in mind.
I have been selling on the Amazon platform for over four years and have been locked out of the system only once, for less than a day, due to a defect. Believe me when I tell you that even that was painful enough. Imagine the profit loss that businesses risk by getting caught doing illegitimate things; it has to be detrimental to their bottom line.
3. Create Assets That Build a Positive Feedback Loop.
It's a simple system: the more products you sell, the more money Amazon makes, and the more Amazon is willing to support you. In turn, the more support Amazon provides, the more you'll sell.
Our foundation has been rooted in doing right by the customer and providing the best online service possible. Amazon took notice of this and made us a preferred partner, which skyrocketed our selling capabilities and audience reach.
Start to look at every decision you make through the lens of a respectful partnership. Make decisions from that more intelligent place and your company will stand the test of time. This is how I have always looked at these platforms to create lasting success, and I believe that you can have it, too.
|
{
"pile_set_name": "OpenWebText2"
}
|
Caspase-4 may play a role in loss of proximal tubules and renal injury in nephropathic cystinosis.
Nephropathic cystinosis is characterized clinically by generalized proximal renal tubular dysfunction, renal Fanconi Syndrome and progressive renal failure. Glomerular-proximal tubule disconnection has been noted in renal biopsies from patients with nephropathic cystinosis. In vitro studies performed in cystinotic fibroblasts and renal proximal tubular cells support a role for apoptosis of the glomerulotubular junction, and we have further extended these studies to human native cystinotic kidney specimens. We performed semi-quantitative analysis of tubular density in kidney biopsies from patients with nephropathic cystinosis and demonstrated a significant reduction (p=0.0003) in the number of proximal tubules in the kidney tissue of patients with cystinosis compared to normal kidneys and kidneys with other causes of renal injury; this reduction appears to be associated with the over-expression of caspase-4. This study provides the first quantitative evidence of a loss of proximal tubules in nephropathic cystinosis and suggests a possible role of caspase-4 in the apoptotic loss of proximal tubular cells. Further work is needed to elucidate if this injury mechanism may be causative for the progression of renal functional decline in nephropathic cystinosis.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Introduction {#s1}
============
Working memory (WM), the ability to retain information online in order to guide goal directed behavior, is evident in rudimentary form as early as infancy ([@bib11]; [@bib18]), indicating that core WM processes are available throughout development. However, protracted improvements in performance, typically measured as the percentage of correctly performed trials or as changes in mean behavioral response metrics, such as reaction time and accuracy, demonstrate that WM continues to develop across adolescence and into early adulthood ([@bib17]; [@bib27]; [@bib1]; [@bib10]; [@bib49]; [@bib28]). Developmental decreases in behavioral variability during cognitive tasks are also evident through adolescence ([@bib30]; [@bib24]; [@bib48]), however this has not been directly examined in the context of WM.
Behavioral variability, or intra-individual variability, is a sensitive barometer of cognitive function; excessive variability frequently attends disorders such as schizophrenia ([@bib23]); ADHD ([@bib33]); and age-related cognitive decline ([@bib29]). The association between behavioral variability and cognitive performance suggests that adolescent stabilization of behavior reflects the continuing alteration of fundamental aspects of brain processing that support the transition to adult-like levels of performance. Mechanistically accounting for the stabilization of behavior is critical to our understanding of adolescent neural development. Thus, in the present study, we examine the neural processes that underlie behavioral *variability*.
Behavioral variability has been found to be associated with fluctuations in neural ([@bib34]; [@bib9]; [@bib35]) or blood-oxygen-level dependent (BOLD) signals occurring within individual brain areas ([@bib50]; [@bib39]; [@bib51]), and networks of brain areas ([@bib41]). Several lines of research suggest that these brain/behavior relationships are driven, at least in part, by trial-to-trial variability in gain modulating signals ([@bib13]; [@bib12]; [@bib38]) that enhance the activity of individual neurons or brain areas experiencing net excitation and further suppress the activity of neurons or areas experiencing net inhibition ([@bib43]). Recent electrophysiological evidence indicates that some gain modulating signals are shared across cortical areas and that moment-to-moment variation in such distributed gain signaling may account for a significant portion of neural and behavioral variability ([@bib38]). Additionally, the structure of neural covariance within populations of simultaneously recorded sensory neurons is best explained as the result of multiple ongoing gain modulating signals ([@bib38]), raising the possibility that different sources of gain variability affect neural activity in a functionally targeted way.
Widespread or global gain modulating processes would not, by definition, change the spatial distribution or 'pattern' of task evoked neural activity. Rather, they would influence the *amplitude* with which they are expressed, manifesting as trial-to-trial variability in the amplitude of expression of whole-brain patterns of activity that support task-relevant processes. We refer to this hypothesized phenomenon here as 'brain state' variability. With this operationalized definition of gain modulation, once an average pattern of task-evoked activity is known, the occurrence of fluctuations in global gain signals may be determined by measuring trial-to-trial differences in the amplitude of expression of the average whole-brain task state.
In the present developmental functional magnetic resonance imaging (fMRI) experiment, we exploit this anticipated characteristic of global gain modulation to study the relationship between trial-to-trial variability in behavioral responses during a memory-guided saccade (MGS) task and trial-to-trial variability in widespread gain signals (i.e., brain state variability) that occur near the time of the behavioral response. We explore the possibility that the reduction of behavioral variability observed during development is the result of stabilizing widespread gain signals.
We performed fMRI on an accelerated longitudinal cohort of 126 subjects between the ages of 8 and 33 years ([Figure 1a](#fig1){ref-type="fig"}) as they performed a variant of the MGS task ([@bib22]) ([Figure 1b](#fig1){ref-type="fig"}). On each trial, subjects first made a visually guided 'encoding' saccade to the target stimulus, which appeared at one of six locations: ±3, 6, or 9° along the horizontal visual meridian, during an initial visuomotor/encoding (VME) epoch. Children typically have difficulty suppressing orienting saccades to target stimuli ([@bib27]); by allowing subjects to make the initial encoding saccade, we removed a possible age-related source of behavioral differences that would be related to response inhibition rather than WM performance. After making a saccade to the target, subjects returned their gaze to a central fixation point, marking the onset of the maintenance epoch. The working memory retrieval epoch began when fixation was extinguished and subjects generated a saccade to the remembered location. We varied the duration of the initial target presentation (either 1.5 or 3 s) and the duration of the delay (1.5 or 9 s). We measured the subject's gaze location in the scanner with an MR compatible infrared camera and eye-tracking system (Model R-LRO6, Applied Science Laboratory, Bedford, MA).
{#fig1}
We applied several levels of analysis: First we characterized developmental changes in mean behavioral performance and behavioral variability and determined the extent to which measures of behavioral variability constitute a distinct metric of developmental status beyond that provided by measures of mean behavioral performance. Second, we identified canonical whole brain patterns of activity (brain states) corresponding to visuomotor, WM maintenance, and retrieval processes and determined whether these states were similarly expressed across development. Third, we measured the relationship between trial-to-trial fluctuations in the amplitude of expression of the task-related brain states and single trial behavioral performance. Fourth, we examined developmental trajectories of brain state variability and its relationship with individual developmental trajectories of behavioral variability.
Results {#s2}
=======
Behavioral performance improves and stabilizes through development {#s2-1}
------------------------------------------------------------------
For each trial, we assessed two measures of performance: reaction time (RT), the interval between the extinction of the fixation stimulus at the end of the delay interval and the initiation of the MGS, as well as saccadic error (SE), the signed visual angle separating the horizontal location of the target and the end point of the MGS. For the four task conditions during a session, we computed average RT and the standard deviation of RT. As additional measures of mean behavioral performance and behavioral variability, we defined saccade inaccuracy as the absolute value of the average SE for a given target and saccade imprecision as the standard deviation of SE for each target.
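In code, these session-level measures reduce to simple statistics over per-trial vectors. A minimal numpy sketch (variable names are ours; for brevity it pools all trials rather than grouping SE by target location, as the full imprecision measure does):

import numpy as np

def session_metrics(rt, se):
    # rt: per-trial reaction times; se: per-trial signed saccadic errors
    return {
        'mean_rt':     np.mean(rt),
        'rt_sd':       np.std(rt, ddof=1),   # RT variability
        'inaccuracy':  np.abs(np.mean(se)),  # |mean signed error|
        'imprecision': np.std(se, ddof=1),   # spread of saccade endpoints
    }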
Using a linear mixed-effects model to account for the longitudinal nature of the data, we found, consistent with prior findings ([@bib27]), that average RT and inaccuracy decrease during development following an age^−1^ trajectory ([Figure 1c,e](#fig1){ref-type="fig"}). We estimated how much each behavioral measure changed at the group level between the ages of 8 and 33 and observed a reduction in average RT amounting to approximately 239 ms, a 39% decline (t(1346)=9.30; p=6.03e-20). Inaccuracy decreased by approximately 0.25°, or 47% (t(1346)=3.11; p=0.0019).
Importantly, both measures of behavioral variability also decrease with development following an age^−1^ trajectory ([Figure 1d,f](#fig1){ref-type="fig"}). The developmental change in the standard deviation of RT amounts to approximately 156 ms, a 67% reduction (t(1346)=7.84; p=9.4e-15). The imprecision of the MGS decreases with age, resulting in an estimated reduction of 0.87° of visual angle, a change, which amounts to roughly 35% (t(1346)=4.03; p=5.83e-5).
In order to determine whether behavioral variability provides additional information beyond mean behavioral measures about the developmental status of a subject, we compared the performance of linear models that predicted subject age from either mean RT or inaccuracy (null models) to matched linear models containing the corresponding behavioral variability factor (full models). We found that a null model predicting age from only mean RT was significantly improved by including the standard deviation of RT (null model: DF=4 AIC=−2212.7, Log-Likelihood=1110.4; full model: DF=5 AIC=−2220.7, Log-Likelihood=1115.6; p=0.002)([@bib4]). Including imprecision as a term improved model performance, but the difference did not achieve statistical significance (null model: DF=4, AIC=−2108.4, Log-Likelihood=1058.2; full model: DF=5 AIC=−2110.1, Log-Likelihood=1060.0, p=0.052). Thus, of the two measures of behavioral variability, reaction time variability, but not imprecision, appears to measurably reflect a unique aspect of cognitive development.
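The model comparison itself is a standard nested-model likelihood-ratio test; sketched here with ordinary least squares in statsmodels (the dataframe df and its column names are hypothetical, and the published analysis additionally accounted for the longitudinal structure of the data):

import statsmodels.formula.api as smf

null = smf.ols('age ~ mean_rt', data=df).fit()          # mean performance only
full = smf.ols('age ~ mean_rt + rt_sd', data=df).fit()  # adds RT variability
lr_stat, p_value, df_diff = full.compare_lr_test(null)  # likelihood-ratio test
print(null.aic, full.aic)                               # lower AIC favors the fuller model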
Transforming voxelwise time courses of task-evoked BOLD signal into time courses of brain state expression {#s2-2}
----------------------------------------------------------------------------------------------------------
Performance of the MGS task requires coordinated activation of many brain regions involved in visuomotor, WM maintenance, and retrieval processes. Each of these processes is associated with a distinct whole-brain pattern of activity, or 'brain state', which is expressed at the appropriate times during a trial. Developmental changes in average global gain would result in changes in the average amplitude of expression of task-related brain states across trials; subjects with greater mean global gain would express the task-related brain states to a greater extent, while subjects with reduced global gain would express the brain states with reduced amplitude. We therefore sought to estimate the canonical brain state patterns associated with the visuomotor, maintenance, and retrieval processes involved in the MGS task and to determine whether adolescent development is associated with changes in the mean amplitude of their expression.
Using a simple dimensionality reduction technique based on linear regression (see Materials and methods), we constructed whole brain patterns of activity, that is, brain states, associated with visuomotor/encoding (VME), WM maintenance, and retrieval processes. These task-related brain state patterns were extracted from idealized time courses of BOLD activity (see Materials and methods) observed during the long delay trials ([Figure 2a](#fig2){ref-type="fig"}).
{#fig2}
To represent VME processes we extracted the average patterns of BOLD signal occurring 6 s after the initial 'encoding' saccade. To represent retrieval processes, we again extracted patterns of activity present 6 s after the MGS. This delay allowed the BOLD signals associated with these processes to reach their peaks ([@bib20]). The pattern of activity associated with working memory maintenance was extracted from the TR immediately prior to the execution of the MGS, allowing us the purest estimate of delay period activation (furthest from the preceding visually-guided saccade, without intruding into the subsequent MGS). We orthogonalized each of the brain state patterns to ensure that they captured unique aspects of task activity by regressing the VME-related patterns from maintenance-related patterns, and regressing both VME- and maintenance-related patterns from the retrieval related patterns. This process removed remaining components of VME-related activity from the maintenance activity and importantly, allowed us to remove the pattern of activity associated with visuomotor responses from the retrieval-related pattern occurring during the MGS. Implicit in this procedure is the assumption that VME, maintenance, and retrieval processes are associated with distinct and consistent patterns of whole-brain BOLD activation that are expressed with the time course of a hemodynamic response.
Some neuronal gain modulators, particularly those acting through cholinergic pathways, alter gain with hemispheric specificity, similar to the effects of directed spatial attention ([@bib42]; [@bib16]; [@bib3]). To account for such potential hemispheric differences in gain signaling, we decomposed each brain state into two component patterns: a target hemifield specific 'spatial' component and a target hemifield non-specific, 'mean' component. Mean brain state components correspond to the average whole-brain patterns of activity associated with VME, maintenance, or retrieval, regardless of which visual hemifield the target was presented in ([Figure 2b](#fig2){ref-type="fig"} upper panels). Spatial components were constructed by computing the difference ---right minus left--- between brain state patterns determined separately for right and left side targets; they reflect the differences in activity during each task epoch resulting from the changes in the target's visual hemifield ([Figure 2b](#fig2){ref-type="fig"} lower panel). The whole brain patterns of activity observed during different epochs of the task can therefore be approximated with linear combinations of the mean and spatial components of the brain state patterns. For instance, maintenance related activity observed during trials in which the target appears in the right visual hemifield, can be approximated by *adding* the mean and spatial components of the maintenance brain state patterns, while maintenance activity observed during trials in which the target appears in the left visual hemifield can be approximated by *subtracting* the spatial component of the maintenance brain state from the mean component.
Taken together, the brain states characterize the patterns of engagement of canonical regions underlying the VME epoch (e.g., frontal eye fields), maintenance (e.g., prefrontal and frontal eye fields) and the non-visuomotor aspects of retrieval and response (e.g., preSMA) ([@bib2]; [@bib14]).
We verified that the resulting brain state components captured the relevant whole-brain patterns of BOLD signal associated with specific task epochs, by projecting each whole-brain volume of a subject's average trial time course, onto the complete set of brain state components using linear regression (see Materials and methods). We performed this operation separately for each task condition. By temporally ordering the regression weights associated with each brain state component from each TR of the average trial time course, we converted the whole-brain time courses of task activity into time courses of task-related brain state expression ([Figure 3](#fig3){ref-type="fig"}). This procedure is conceptually similar to principle component/independent component analyses in which whole-brain voxel-wise time series are converted into time courses of expression of whole-brain patterns. Here, however, the individual components, that is, brain states, have known behavioral and cognitive processes with which they have been empirically associated. These brain state patterns, although derived from only the long delay trials, also served as an effective basis for describing the whole-brain patterns of BOLD signal evoked during the short delay trials as well ([Figure 3](#fig3){ref-type="fig"}; lower panel).
Figure 3. Yellow, red, and blue background colors correspond to VME-, maintenance-, and retrieval-related brain state components. Error bars depict one standard error of the mean. These figures depict the absence of significant age-related differences in the mean amplitude of expression of the canonical task-related brain states. {#fig3}
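The projection step used above (and again below for residual fluctuations) amounts to an ordinary least-squares fit of each whole-brain volume onto the set of state patterns; a minimal numpy sketch of the idea (array names are ours, not the authors'):

import numpy as np

def state_expression(states, bold):
    # states: (n_voxels, n_states) brain state component patterns
    # bold:   (n_voxels, n_TRs) whole-brain volumes (mean trial or residual)
    weights, _, _, _ = np.linalg.lstsq(states, bold, rcond=None)
    return weights  # (n_states, n_TRs) time courses of brain state expression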
Mean global gain does not change through development {#s2-3}
----------------------------------------------------
As expected, given our approach for constructing the brain state patterns, the time courses of brain state expression for all subjects exhibited similar temporal characteristics. However, it remained possible that there might be age-related differences in mean global gain, which would affect the average *amplitude* of brain state expression. To determine whether such differences were present, we performed omnibus F-tests on the brain state time courses for each task condition to assess the null hypotheses that all of the coefficients for *Age\*TR* (*Age\*TR\*TargetHemifield* in the case of the spatial brain state components) were equal to zero. We observed minimal age-related differences in the average time courses of brain state expression. These differences were observed for the spatial, but not mean, component of the VME states across all trial types (all p<0.001). We did not detect any significant age-related differences in the expression of either mean or spatial components of the maintenance state (all p>0.14). Results for age-related differences in the time courses of expression for the retrieval states were mixed: We observed no omnibus age-related differences within either of the long delay conditions, but within the long presentation short delay trials we observed a small age-related difference in the expression of the spatial retrieval state (F(42,14784)=1.6; p=0.012). Post-hoc examination of the individual *Age\*TR* and *Age\*TR\*TargetHemifield* coefficients at each time point in the trial revealed that this effect was driven by slightly greater expression of the state by adults, across right and left side targets, during the 5th, 6th, and 8th TRs. However, in our post-hoc analysis no single time point reached significance (minimum(p)=0.055). We also observed that the mean retrieval state was differentially expressed across age within the short presentation short delay trials (F(40,14112)=2.0; p<0.001). Post-hoc analyses revealed that this effect was driven by a slightly greater expression of the mean retrieval state by adults during this condition during the 9th-13th TRs, well after the occurrence of peak expression for this state.
From visual inspection it is clear that adults exhibit a slightly prolonged expression of the spatial component of the VME brain state during the different trials (seen most prominently in [Figure 3](#fig3){ref-type="fig"}; lower panel). We also wanted to know whether the peak amplitude of spatial VME expression differed with age. From each session we examined the amplitude of peak expression of the spatial VME state for each trial type. Because the sign of expression of the spatial VME state varies depending on target hemifield, we extracted the maximum value of positive expression for right side trials, and we extracted the minimum value of expression for left side trials. If adults expressed the spatial VME state to a greater extent than children and adolescents due to greater average gain, this would result in greater positive expression for right side trials and reduced (more negative) peak expression during left side trials. We therefore examined the *Age\*Target* interaction term, which we found did not reach significance (t(2696)=1.59; p=0.111).
Combined, these results demonstrate that the set of brain state patterns provide a simplified low dimensional basis for describing BOLD signal changes evoked by the memory-guided saccade task. Importantly, age-related differences in the expression of the brain state patterns during task performance were minimal, and only the spatial component of the VME state exhibited consistent age-related differences in expression across trial types. Even here, however, the age-related differences were not ones of amplitude, but of duration, suggesting that mean global gain does not change during adolescent development.
Trial-to-trial reaction time and accuracy are associated with fluctuations in global gain that affect the amplitude of expression of brain states {#s2-4}
-------------------------------------------------------------------------------------------------------------------------------------------------
We hypothesized that behavioral performance is affected by fluctuations in global gain signals when they occur around the time of a behavioral response. The signature of variability in global gain signaling would be 'brain state variability', or momentary fluctuations in the amplitude of expression of whole brain states of activity associated with ongoing neural processes.
However, due to the delayed and prolonged nature of the BOLD response, the activity measured near the time of the MGS consists of multiple superposed states of activity associated with visuomotor, WM maintenance, and retrieval processes. Each of these processes may be affected differently by global gain variability and variability affecting each process might differentially affect behavioral performance. We therefore examined the trial-wise relationship between behavioral performance and the expression of the mean and spatial components of the VME, maintenance and retrieval brain state patterns in an interval of time centered on the occurrence of each MGS.
After removing the mean trial responses from each voxel, we looked for remaining patterns of activity across the brain that matched each of the canonical brain states. To accomplish this, we projected the whole-brain pattern of BOLD signal residual values at each TR onto the set of canonical brain state patterns using linear regression. This approach converted the whole-brain residual time series into a time series of task-related brain state fluctuations, and allowed us to determine the extent to which each brain state component was over- or under-expressed at a particular point in time.
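To make this projection step concrete, the sketch below shows one way to regress each TR's residual volume onto a set of spatial patterns using NumPy. The array shapes and function names are illustrative assumptions, not the authors' actual code (which relied on AFNI/MATLAB tooling).

```python
import numpy as np

def project_onto_states(residuals, state_patterns, nuisance_patterns):
    """Project each TR's whole-brain residual pattern onto the canonical
    brain state patterns (plus nuisance templates) by least squares.

    residuals         : (n_TRs, n_voxels) residual BOLD time series
    state_patterns    : (n_states, n_voxels) canonical brain state patterns
    nuisance_patterns : (n_nuisance, n_voxels) motion/gradient templates
    Returns a (n_TRs, n_states) time series of brain state expression.
    """
    # Columns of the spatial design matrix are regressors; voxels act as
    # the "observations" at each TR.
    X = np.vstack([state_patterns, nuisance_patterns]).T
    # Solve X @ betas = residuals.T for all TRs at once.
    betas, *_ = np.linalg.lstsq(X, residuals.T, rcond=None)
    # Keep only the brain state coefficients; discard nuisance weights.
    return betas[: state_patterns.shape[0]].T
```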
We divided the resulting fluctuation time series for each brain state component into snippets, intervals of time centered on each MGS and extending ±15 TRs (each TR=1.5 s) before and after ([Figure 4a and b](#fig4){ref-type="fig"}). Each snippet was associated with the RT and SE of a particular trial. After aligning snippets from each trial to the TR in which the MGS occurred (TR 0), we used regression models to measure the relationship between both trial-to-trial differences in RT (z-scored) and SE (z-scored and rectified) and trial-to-trial differences in the expression of each brain state component at different times relative to the MGS. In order to measure the contribution of brain state variability to behavioral variability, we first estimated the fraction of trial-to-trial behavioral variance accounted for by a null model that included terms for non-neural trial-to-trial factors (e.g. target eccentricity) and performance factors (e.g. RT when testing SE and vice versa). Then we measured the additional fraction of behavioral variability that could be explained by including the measures of brain state expression from each TR in the snippets. The difference in explained variance between the full and null models indicates the fraction of trial-to-trial behavioral variability uniquely associated with brain state variability occurring at a particular time relative to the execution of a MGS ([Figure 5a and b](#fig5){ref-type="fig"} \[top panels\]).
Figure 4. {#fig4}
Figure 5. {#fig5}
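A hedged sketch of the snippet extraction and the full-versus-null variance comparison follows. For simplicity it uses ordinary least squares, whereas the paper fits linear mixed-effects models with subject as a random effect; all names and shapes are illustrative.

```python
import numpy as np

def extract_snippets(state_ts, mgs_trs, half_width=15):
    """Cut windows of +/- half_width TRs from a brain state fluctuation
    time series, centered on the TR containing each trial's MGS.

    state_ts : (n_TRs,) fluctuation time course for one brain state
    mgs_trs  : indices of the TRs containing each trial's MGS
    Returns (n_trials, 2*half_width + 1); out-of-range TRs are NaN."""
    n = len(state_ts)
    width = 2 * half_width + 1
    snippets = np.full((len(mgs_trs), width), np.nan)
    for i, t in enumerate(mgs_trs):
        lo, hi = max(0, t - half_width), min(n, t + half_width + 1)
        snippets[i, lo - (t - half_width):hi - (t - half_width)] = state_ts[lo:hi]
    return snippets

def delta_r2(y, X_null, x_state):
    """Unique behavioral variance explained by brain state expression at one
    relative TR: R^2(full) - R^2(null). y is z-scored RT or rectified SE."""
    def r2(X):
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return 1 - (y - X1 @ beta).var() / y.var()
    return r2(np.column_stack([X_null, x_state])) - r2(X_null)
```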
As hypothesized, the relationship between brain state expression and both RT and SE peaked around the time that the MGS was executed. For trial-wise RT, brain state/behavior associations were significant beginning with the TR when the MGS was executed, peaking 1 TR after the saccade, and lasting for a total of 6 TRs. The relationship between trial-wise SE and brain state expression was similar, but much less prominent and evident during only the third TR following the MGS.
Trials with faster RTs were associated with greater early expression of the mean VME and maintenance brain states (TRs 0--2), and reduced later expression of all mean states (TRs 3--5). Greater SE was associated with greater expression of the mean VME state (TR 3).
RT and SE also covaried with fluctuations in the amplitude of expression of the spatial components of the brain state patterns (bottom three rows of [Figure 4a and b](#fig4){ref-type="fig"}). The fastest RTs occurred when whole-brain patterns of task-related activity were biased in the direction of the target (VME: TRs 1--3; maintenance and retrieval: TRs 1--2). That is, the fastest right side trials were those in which brain states were expressed most 'rightwardly', and the fastest left side trials were those in which the spatial brain states exhibited the greatest 'leftward' expression. Interestingly, greater target-hemifield-appropriate expression of the spatial VME state (TRs 2--3) was associated with increased SE, indicating that increasing the amplitude of its expression does not improve all aspects of task performance. Increased hemifield-appropriate expression of the spatial component of the maintenance state (TR 3), in contrast, was associated with reduced SE.
That greater expression of VME brain states was associated with faster RT and greater SE prompted us to examine the behavioral data for signs of a speed-accuracy trade-off. We found a significant quadratic relationship between z-scored RT and SE at the trial level, indicating that, within a session, excessively fast and slow responses were associated with reduced accuracy (t(16754)=4.64; p=3.52e-6). Collectively, these results demonstrate that, trial-to-trial, both the reaction time and the accuracy of subjects' responses covary with the amplitude of expression of whole-brain patterns of task-related activity.
Brain state variability reflects fluctuations in amplitude, not just timing, of brain state expression {#s2-5}
------------------------------------------------------------------------------------------------------
Greater early expression and reduced later expression of the mean brain states for fast RT trials (represented by the transition from blue to red in some rows of the lower panel of [Figure 4a](#fig4){ref-type="fig"}) could possibly be explained by a trivial correlation between the timing of a saccade and the latency of the expression of the brain state. That is, on trials with longer RTs, the BOLD activity would be shifted later, causing the brain state to appear under-expressed early and over-expressed late relative to the average time course. To determine whether this relationship could account for the observed correlation with behavior, we performed a set of simulations to compare the temporal patterns of BOLD signal residuals for fast and slow RT trials that would result from timing-, amplitude-, and timing-and-amplitude-based relationships (see Materials and methods).
As depicted by [Figure 5](#fig5){ref-type="fig"}, the contributions of timing and amplitude variability to brain state variability are distinguished by their effects on the temporal structure of the residual brain state time series. A purely timing-based explanation of the mean brain state/RT relationship predicts that the integrals of the mean brain state residual time series, for both fast and slow reaction time trials, should converge to zero ([Figure 5](#fig5){ref-type="fig"}, column 1). A relationship between reaction time and brain state expression mediated by fluctuations in the amplitude of expression of the brain state patterns predicts that the same time integrals converge to non-zero values of opposite sign ([Figure 5](#fig5){ref-type="fig"}, column 2). A combination of timing- and amplitude-based relationships predicts an initial bifurcation of the time integrals of the fast and slow reaction time residuals, followed by a partial re-convergence ([Figure 5](#fig5){ref-type="fig"}, column 3). We found that the trial-wise relationship between RT and the expression of the mean VME brain state (the state associated with eye-movements) was inconsistent with both a purely amplitude-based mechanism and a purely timing-based mechanism, and instead likely reflects a mixture of both a trivial time-shifted BOLD response and a true relationship between gain fluctuations and performance ([Figure 5](#fig5){ref-type="fig"}, column 4).
Brain state variability decreases with development {#s2-6}
--------------------------------------------------
If the stabilization of behavior during adolescence is related to a reduction in brain state variability, then the proportion of BOLD signal variability associated with brain state variability will decrease with age. We examined the subset of TRs (0--5; the highlighted region in [Figure 4a](#fig4){ref-type="fig"}) around each correct MGS that showed a significant relationship with trial-to-trial behavioral performance and computed SS~brain~, the sum of whole-brain squared error associated with all brain state patterns across each TR, as well as SS~error~, the sum of the remaining squared error. For each session, we computed the ratio SS~brain~/(SS~brain~ + SS~error~), producing values we refer to here as total brain state variability, which corresponds to the fraction of residual whole-brain BOLD signal variability associated with brain state variability. We found that total brain state variability decreases with age ([Figure 6a](#fig6){ref-type="fig"}) after controlling for mean frame-wise displacement (FD) ([@bib37]; [@bib45]) and the number of correctly performed trials (t(333)=-3.35; p=9.0e-4).
Figure 6. {#fig6}
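As a rough illustration, the sketch below fits the brain state and nuisance regressors to the selected residual volumes and forms SS~brain~/(SS~brain~ + SS~error~). It treats SS~brain~ as the sum of squares of the state-fitted signal, which matches the described ratio only up to cross-terms when regressors are not perfectly orthogonal; shapes and names are assumptions.

```python
import numpy as np

def total_brain_state_variability(residuals, X, n_states):
    """Fraction of residual whole-brain BOLD variability associated with
    the brain state patterns over the behaviorally relevant TRs (0-5
    following each correct MGS).

    residuals : (n_TRs, n_voxels) residual volumes for the selected TRs
    X         : (n_voxels, n_regressors) design matrix whose first
                n_states columns hold the brain state patterns
                (remaining columns are nuisance templates)."""
    betas, *_ = np.linalg.lstsq(X, residuals.T, rcond=None)
    state_fit = (X[:, :n_states] @ betas[:n_states]).T  # state-explained signal
    full_fit = (X @ betas).T                             # all regressors
    ss_brain = np.sum(state_fit ** 2)                    # SS_brain
    ss_error = np.sum((residuals - full_fit) ** 2)       # SS_error
    return ss_brain / (ss_brain + ss_error)
```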
Brain state variability related to maintenance and retrieval processes decreases with development {#s2-7}
-------------------------------------------------------------------------------------------------
The measure of total brain state variability, SS~brain~, can be simply decomposed into a sum of contributions from the mean and spatial components of the VME, maintenance, and retrieval brain state patterns: SS~brain~ = SS~VME~ + SS~Maint~ + SS~Retrieval~. We computed the ratio of each brain state component's sum of squared error to (SS~brain~ + SS~error~), and found that the stability of each of the three sets of brain state components exhibits a distinct developmental trajectory ([Figure 6b](#fig6){ref-type="fig"}). The proportion of VME-related brain state variability did not significantly decrease with age (t(333)=-1.47; p=0.14). However, maintenance- and retrieval-related brain state variability both show significant age-related decreases (t(333)=-3.8; p=1.8e-4 and t(333)=-5.53; p=6.27e-8, respectively). Pairwise comparisons of slopes reveal that the trajectories of VME- and retrieval-related variability cannot be significantly distinguished from one another (t(670)=0.60; p=0.54), while maintenance-related brain state variability decreases more slowly through adolescence compared to both VME- and retrieval-related variability (t(670)=2.44; p=0.014 and t(670)=-3.71; p=2.2e-4, respectively).
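Under the same illustrative assumptions as the sketch above, the per-process decomposition might look like the following, where each process contributes the sum of squares of its own fitted component (exact only for near-orthogonal patterns).

```python
import numpy as np

def component_variability(betas, X, groups, ss_brain, ss_error):
    """Split SS_brain into VME, maintenance, and retrieval contributions.

    groups maps a process name to the column indices of its mean and
    spatial patterns in X, e.g. {'VME': [0, 1], 'Maint': [2, 3],
    'Retrieval': [4, 5]} (hypothetical ordering)."""
    denom = ss_brain + ss_error
    return {name: float(np.sum((X[:, idx] @ betas[idx]) ** 2)) / denom
            for name, idx in groups.items()}
```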
Age-related changes in brain state variability are not related to motion {#s2-8}
------------------------------------------------------------------------
Next, we sought to determine whether a systematic relationship exists between in-scanner motion and measures of brain state variability. We reasoned that if brain state variability was unrelated to movement-related artifacts, then our finding that brain state variability was reduced in older subjects would still hold if we selectively sub-sampled our data so that we compared a group of adults who moved excessively to a group of children who moved relatively little. The biased subsampling routine that we employed is based on a mean-matching technique that has been described in detail elsewhere ([@bib7]). Briefly, we divided our data into two sets, split at the median age of our sample. We selectively drew samples from the two data sets such that, on average, the older sample exhibited greater mean FD than the younger group. We found that reversing the relationship between motion and age does not significantly alter our finding that older subjects exhibited less brain state variability than younger subjects ([Figure 7](#fig7){ref-type="fig"}).
Figure 7. {#fig7}
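The actual routine follows the cited mean-matching technique; the following is only a schematic Python sketch of one biased subsampling step that preferentially draws high-FD older sessions and low-FD younger sessions. The weighting scheme and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_subsample(fd_young, fd_old, n_draw, n_iter=1000):
    """Draw subsamples so the *older* group has higher mean frame-wise
    displacement (FD) than the younger group, reversing the usual
    age/motion confound. Returns (young_idx, old_idx) pairs."""
    # Bias sampling weights toward high-FD adults and low-FD children.
    w_old = fd_old / fd_old.sum()
    w_young = 1.0 / fd_young
    w_young /= w_young.sum()
    draws = []
    for _ in range(n_iter):
        i_old = rng.choice(len(fd_old), n_draw, replace=False, p=w_old)
        i_young = rng.choice(len(fd_young), n_draw, replace=False, p=w_young)
        # Keep only iterations in which the motion ordering is reversed.
        if fd_old[i_old].mean() > fd_young[i_young].mean():
            draws.append((i_young, i_old))
    return draws
```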
Brain state variability predicts individual differences in behavioral variability {#s2-9}
---------------------------------------------------------------------------------
Our analyses to this point have shown that trial-to-trial differences in behavioral performance are associated with brain state variability, which stabilizes during adolescent development. If the stabilization of behavior during adolescence is the result of reduced brain state variability, then subjects exhibiting the greatest longitudinal reduction in behavioral variability should be those that exhibit the greatest decline in brain state variability. To determine whether this is the case, we leveraged the longitudinal design of our dataset to examine how individual differences in the developmental trajectories of RT variability and saccade imprecision are related to individual developmental trajectories of brain state variability.
We selected a subset of 29 subjects for whom we had at least four complete sessions of data, and estimated individual developmental slopes of total brain state variability. We modeled developmental changes in total brain state variability as a linear effect of age after observing superior performance compared to a model in which total brain state variability was fit with an age^−1^ term (simulated likelihood ratio test with 5000 iterations; age^−1^ model: DF=9, AIC=−3116.2, Log-Likelihood=1567.1; linear age model: DF=9, AIC=−3120.2, Log-Likelihood=1569.1; p=0.015). We also estimated individual regression weights for the developmental trajectories of reaction time variability and imprecision after controlling for task condition and mean reaction time. Here we used an age^−1^ term to model individual trajectories.
Subjects exhibiting the greatest decreases in behavioral variability given their age have the greatest age^−1^ regression coefficients. In contrast, subjects exhibiting the greatest decrease in total brain state variability have the smallest age regression coefficients. Evidence consistent with the hypothesis that the developmental stabilization of behavior is driven by a reduction in brain state variability would be the existence of a negative correlation between the brain state and behavioral regression coefficients. We observed just such a negative relationship for reaction time variability (r = −0.48; p=0.008) ([Figure 8a](#fig8){ref-type="fig"}). This result remained significant (p\<0.05) when we limited our data set to subjects with 3--5 or more sessions as well. However, the within-subject relationship between brain state variability and saccade imprecision was not significant (r=0.28; p=0.14) ([Figure 8b](#fig8){ref-type="fig"}).
Figure 8. {#fig8}
Given the modest relationship between brain state variability and SE at the single trial level, we considered the possibility that we might be underpowered to detect the longitudinal relationship between total brain state variability and saccadic imprecision in our smaller sample size. We expanded our analyses to investigate whether individual differences in saccade imprecision were related to individual differences in brain state variability using our entire sample and including age^−1^ terms as covariates. Here we did observe a significant positive relationship between total brain state variability and saccade imprecision (t(1344)=3.2; p=0.001) ([Figure 8c](#fig8){ref-type="fig"}).
Discussion {#s3}
==========
Reduced variability is a key component of the behavioral improvements that are observed during adolescent development. We demonstrated an example of this stabilization using a working memory task in which subjects' performance of memory guided saccades improved on average and became less variable with age. To understand the neural basis of developmentally stabilized behavior, we investigated the relationship between variability in the reaction times and accuracies of eye-movements and fluctuations of global gain signals hypothesized to affect the amplitude of expression of whole-brain states of activity underlying distinct task-related processes. We found that while the average amplitude of expression of whole-brain task states was similar across subjects, regardless of their age, trial-to-trial variability in the amplitude of their expression decreased during adolescence and was correlated with trial-to-trial variability in the reaction time and accuracy of memory-guided saccades. Importantly, this brain state variability represented fluctuations in the *amplitude* of brain state expression across trials, not simply variability in the *timing* of their expression or global fluctuations in mean activity (see Materials and methods).
Additionally, variability occurring specifically in the expression of the mean and spatial components of the VME brain states associated with visuomotor processes mirrored the higher-order phenomenon of the speed-accuracy trade-off; that is, greater VME expression was associated with faster responses and increased saccadic error. Greater expression of the spatial component of the working memory maintenance state, in contrast, was associated with faster responses and reduced saccadic error. These findings are broadly consistent with recent theoretical models ([@bib47]) and empirical data from non-human primates ([@bib21]) suggesting that gain modulation plays a role in the speed-accuracy trade-off. Appropriately balanced, independent variability in gain signals affecting VME and maintenance brain state expression may explain the quadratic speed-accuracy trade-off that we observed in our data.
We hypothesized that developmental decreases in the variability of global gain signaling would result in more stable expression of task-related brain states. Accordingly, we determined whether the expression of brain states associated with visuomotor/encoding (VME), maintenance, and retrieval processes exhibited similar or different trajectories of variable expression across development. We found that the variability of the VME states did not decrease with age although they were significant predictors of single trial performance. Our task design did not allow us to dissociate the activity involved strictly in working memory encoding from that involved strictly in the visuomotor response; however, the re-expression of the VME states during the memory-guided saccade suggests that they are largely dominated by visuomotor activity. In contrast, working memory maintenance and retrieval processes, whose fluctuations were also related to trial-wise performance, showed significant decreases in the variability of their expression. Perhaps most significantly, we found a relationship between individual longitudinal changes in total brain state variability and changes in reaction time variability, as well as a relationship between total brain state variability and memory-guided saccade imprecision after covarying for age. Combined, our findings provide evidence that adolescent developmental changes in behavioral variability are driven by the stabilization of gain signals specifically affecting cognitive processes, while gain signals affecting sensorimotor processes continue to vary greatly across all ages.
A complex interplay between top-down control ([@bib32]; [@bib25]) and a mixture of contributions from several interconnected neuromodulatory systems, each exerting its particular influence on ongoing sensorimotor and cognitive processes ([@bib16]; [@bib26]; [@bib40]; [@bib6]), may underlie these developmental changes in brain state variability. Recent fMRI studies have shown that fluctuations in the activity of midbrain and brain stem nuclei affect resting state connectivity in what appears to be a functionally organized way ([@bib5]). Similarly, cholinergic modulation has been shown to amplify the spatially selective effects of perceptual processing and attention in a manner analogous to fluctuations in our spatial brain state components ([@bib16]; [@bib3]; [@bib40]). Finally, myelination and synaptic pruning, which continue to progress in critical brain systems ([@bib46]) at different rates across brain regions ([@bib36]), may also affect neural signal-to-noise ratios and play a role in the stability of gain signals that contribute to behavioral variability. Differing rates of development in any of these systems could produce distinct developmental trajectories for the components of brain state variability.
The presence of brain state variability also bears upon the interpretation of brain/behavior correlations in general. In studies of single unit and population activity in non-human primates, correlations between the trial-to-trial fluctuations of neuronal activity and behavioral responses, often termed choice-probability (CP) or detect-probability (DP), have been interpreted as signifying a neuron\'s causal role in the behavior ([@bib44]). It has been proposed, however, that brain/behavior relationships like CP and DP might reflect a neuron's covariation with neuronal gain signals, such as attention, rather than direct causal involvement ([@bib35]; [@bib8]). Brain state variability is consistent with this hypothesis and expands upon it in two ways: (1) that brain state variability is the covariation of many task-related (and presumably behaviorally relevant) brain regions suggests that brain/behavior correlations like CP and DP should be widespread throughout task-related brain areas; and (2) our finding of distinct developmental trajectories of brain state variability affecting different task-related processes suggests that fluctuations in multiple functionally specific global gain signals contribute to observed brain/behavior correlations. This interpretation also gains support from recent electrophysiological evidence that multiple independent gain modulating signals are apparent within the activity of populations of neurons in sensory cortex ([@bib38]).
An attractive model that would synthesize our findings and those discussed above is that stable behavioral performance requires similarly stable allocation of top-down control processes, like attention. Such top-down processes partly exert their influence through multiple widespread gain signals that are functionally targeted, differentially affecting sensorimotor and cognitive processes. In this model, the stabilizing of working memory behavioral performance that we observed during adolescent development is the result of stabilizing those gain signals that affect working memory maintenance and retrieval processes. Our results, in sum, provide compelling evidence that core cognitive functions are online by childhood and that what underlies cognitive development through adolescence is a fine-tuning of the ability to stabilize the expression of neural activity associated with those cognitive processes.
Materials and methods {#s4}
=====================
Subjects {#s4-1}
--------
We tested 152 subjects between the ages of 8 and 33. Subjects were initially recruited between the ages of 8 and 30 years and were scanned approximately annually for 1--10 years. Subjects were included based on two criteria: (1) Mean frame-wise displacement (FD) was less than 0.15 mm; and (2) at least 50% of the trials from each of the four trial types had to be measurably correct. Here, incorrect trials are those for which measurements of reaction time and endpoints for both visually- and memory-guided saccades were unavailable due to blink artifacts, noisy data, or transient loss of pupil- or corneal reflection-lock. After applying these exclusion parameters our dataset consisted of 126 subjects (60 female). We applied no further outlier control for our analyses. Participants and/or their legal guardians provided informed consent before participating in this study. Experimental procedures for this study complied with the Code of Ethics of the World Medical Association (1964; Declaration of Helsinki) and the Institutional Review Board at the University of Pittsburgh. Subjects were paid for their participation in the study.
Eye-movements {#s4-2}
-------------
Eye-movements were recorded in the scanner with an infrared camera system equipped with long-range optics and sampling at 60 Hz (Model R-LRO6, Applied Science Laboratories, Bedford MA). Subjects' compliance with instructions was assessed and eye-movements were monitored via remote video during task performance. We used a nine-point calibration procedure to estimate the transformation from the eye-tracker\'s native encoding space to on-screen pixel location. Saccadic events were detected using an in-house suite of automated routines. Individual saccade candidate events were detected from local maxima in the eye-movement velocity trace. Saccade start and end times were determined by searching backward and forward in time in the velocity trace to find the sample where velocity dropped below 1/10th of the peak velocity ([@bib19]).
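The saccade detection routines are in-house, but the described start/end rule translates directly into code. Below is a small Python sketch of the velocity-based boundary search, assuming a velocity trace containing a single candidate saccade; the function name and threshold parameter are illustrative.

```python
import numpy as np

def detect_saccade(velocity, threshold_frac=0.1):
    """Find a saccade's start and end in an eye-velocity trace: take the
    sample of peak velocity, then walk backward and forward until the
    velocity drops below threshold_frac (here 1/10) of that peak."""
    peak = int(np.argmax(velocity))
    cutoff = threshold_frac * velocity[peak]
    start = peak
    while start > 0 and velocity[start - 1] >= cutoff:
        start -= 1
    end = peak
    while end < len(velocity) - 1 and velocity[end + 1] >= cutoff:
        end += 1
    return start, end, peak
```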
fMRI data and acquisition parameters {#s4-3}
------------------------------------
Imaging data were acquired using a Siemens 3-Tesla MAGNETOM Allegra (Erlangen, Germany) system with a standard radio-frequency (RF) head coil at the Brain Imaging Research Center, University of Pittsburgh, Pittsburgh, PA. Structural images were acquired using a sagittal magnetization prepared rapid gradient echo (MPRAGE) T1-weighted pulse sequence with 224 slices at 0.7825 mm slice resolution. Functional images were acquired using a gradient echo echo-planar (EPI) sequence sensitive to blood-oxygen-level-dependent (BOLD) contrast (T2\*) (TR=1.5 s, TE=25 ms, flip angle=70°, voxel size=3.125×3.125×4 mm, 229 volumes). Twenty-nine slices per volume were collected with no gap and aligned to the anterior and posterior commissure (AC-PC) plane. Structural and functional fMRI data are available through the Dryad repository ([@bib31]).
Anatomical preprocessing {#s4-4}
------------------------
T1-weighted anatomical images were reconstructed from raw DICOM files and converted to NIFTI format. We estimated bias field corrections using smoothed and high-pass filtered anatomical data analyzed with FSL's *fast* algorithm. After bias field correction, we constructed a skull-stripped anatomical data set for each subject, which we used to estimate the 12 degree-of-freedom affine transformation that would align the subject's data with the MNI152 anatomical template. Finally, we computed the non-linear transformation that would bring the subject's affine-aligned anatomical data set into registration with the MNI152 template. We saved the final combined linear/non-linear transformation for later use in registering the subject's functional data to the standard space.
Functional preprocessing {#s4-5}
------------------------
fMRI data were preprocessed using a combination of AFNI (Analysis of Functional NeuroImages, RRID:[SCR_005927](https://scicrunch.org/resolver/SCR_005927)) and FSL software (FSL, RRID:[SCR_002823](https://scicrunch.org/resolver/SCR_002823)). In our pre-processing pipeline, raw data were converted from DICOM format to NIFTI volumes and slice-timing correction was applied using AFNI tools. We performed motion estimation and correction in two phases. First, we pre-aligned each frame of a subject's functional data to a volume created by taking the temporal mean of the 4-D functional time series. Then a second, 'true', average functional volume was computed from the pre-aligned functional data, producing a reference functional volume that was less affected by motion artifacts. We then aligned each frame of the original functional time series to this second reference volume using sinc-function interpolation, estimating the time course of translational and rotational motion throughout the run. We used these estimated motion time series throughout our later analyses of the functional data.
Next, using FSL's brain extraction tool, we stripped the skull and superfluous tissues from each subject's motion-corrected mean functional EPI image, afterward aligning the resulting mean EPI volume to the anatomical MPRAGE volume using a six degree-of-freedom rigid-body transformation estimated using spline interpolation. To align each frame of the motion-corrected EPI sequence to the subject's structural image, we applied the transformation estimated in the previous step to each frame of the motion-corrected functional time series and then removed the skull and extraneous tissues from each frame of the functional time series. Tissue remaining within the mean functional volume after the skull-stripping procedure was removed by applying a dilated binary mask to the mean aligned functional volume that removed extreme voxels whose values did not reside in the middle 98th percentile. We then removed voxel-wise temporal extrema using AFNI's 3dDespike software.
To align a subject's functional data to the standard MNI152 (Montreal Neurological Institute; MNI) template in a single transformation step, we used FSL's *convertwarp* and *applywarp* functions to combine the estimated motion correction, functional-to-structural, and linear and non-linear subject-to-MNI152 transformations into a single operator, which we applied separately to each frame of the original slice time-corrected functional data.
We performed minimal spatial smoothing on the aligned functional data, using a SUSAN algorithm with a 5 mm FWHM kernel, followed by a conservative high-pass filtering of the voxel-wise time series, which removed or attenuated BOLD signal frequencies below 0.0083 Hz (corresponding to fewer than three cycles per task run). Finally, we rescaled all voxel values by a value defined to be 10,000 divided by the global median.
Overview of brain state analysis {#s4-6}
--------------------------------
In our analyses, measuring brain state variability requires determining the spatial structure of canonical whole-brain patterns of BOLD signal associated with distinct task-related processes. The flowchart in [Figure 9](#fig9){ref-type="fig"} depicts an outline of the processing steps used to transform individual preprocessed fMRI time series into the average time courses of brain state expression and brain state variability as well as the relationship between the major processing steps and certain key analyses.
Figure 9. {#fig9}
Deconvolution {#s4-7}
-------------
From each session\'s data we extracted eight voxel-wise average time courses of BOLD activity corresponding to each of the four task conditions when stimulus presentations occurred in either the left or right visual hemifield. We estimated these time courses with a finite impulse response (FIR) regression model. FIR design matrices were constructed manually and applied to the voxel-wise time series using 3dDeconvolve (AFNI). All trials, including incorrect responses and blinks, for each stimulus type were modeled over an interval consisting of the trial duration (from initial stimulus presentation to the execution of the memory-guided saccade) plus an additional 22.5 s (15 TRs). The design matrix included nuisance regressors to account for the effects of signal drift, subject motion, and global signal changes as captured by white matter and cerebrospinal fluid (CSF) signals and their derivatives. Signal drift for each run was modeled as a third order Legendre polynomial time series.
Head motion was computed along six affine components corresponding to translation in the three cardinal directions and rotations about three orthogonal axes. In addition, we computed a time course of total displacement for each session based on the Euclidean norm of the time derivative of the movement time series at each time point. To account for the prolonged effect of autocorrelated movement on the BOLD signal, we included temporally leading (−1 TR) and lagging (+1--2 TRs) copies of each of the seven motion regressors ([@bib15]). Each of the seven motion time courses therefore contributed four motion regressors to the deconvolution design matrix. After deconvolution, we scaled the resulting whole-brain average trial time courses at each voxel, normalizing them to the standard deviation of the regression residuals at the same voxel location.
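A minimal sketch of how the leading and lagging motion regressors could be assembled into design matrix columns is given below; the zero-padding at run edges is an assumption, as the boundary handling is not specified.

```python
import numpy as np

def lagged_motion_regressors(motion, lags=(-1, 0, 1, 2)):
    """Build temporally leading/lagging copies of motion time courses.

    motion : (n_TRs, 7) array (6 affine components + total displacement)
    Returns (n_TRs, 7 * len(lags)) design matrix columns; samples shifted
    past the run edges are zero-padded."""
    n_tr, _ = motion.shape
    cols = []
    for lag in lags:
        shifted = np.zeros_like(motion)
        if lag >= 0:
            shifted[lag:] = motion[:n_tr - lag]   # lagging copy
        else:
            shifted[:lag] = motion[-lag:]         # leading copy
        cols.append(shifted)
    return np.hstack(cols)                        # 7 x 4 = 28 regressors
```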
Estimating the idealized time courses {#s4-8}
-------------------------------------
Idealized voxel-wise trial time courses for the long delay conditions were estimated from the scaled average trial time course estimates. We modeled these separately for each condition and target hemifield using 3dLME (AFNI), a linear mixed-effects framework. Each time point was modeled as a separate categorical fixed effect and we did not include an intercept term in the model. To account for any bias due to the over-representation of subjects who participated in more scans, we included subject identity as a random effect component in the regression model. For each trial type we computed the total displacement undergone by each subject\'s brain during the BOLD signal measurement intervals (trial durations plus 15 TRs) and included it as a fixed-effect component in the regression analysis. We calculated subjects\' average age across all of their sessions and, after centering by the global mean, included it as a subject-level fixed-effects regressor. We included a mean age by time interaction term to capture age-related differences in the voxel-wise time courses. We included the subjects\' age at each session, after subject-level mean-centering, as a second age-related random-effects regressor. Within a given voxel, a single whole trial time course may include independent contributions from visually- and memory-guided saccade events. To account for potential differences due to variability in the number of correct saccades, we included the proportion of unclassifiable and incorrect visually- and memory-guided saccades and their interactions with time as fixed-effect components of the model. We produced idealized trial time courses by generating the voxel-wise model estimates for a subject of mean age, mean in-scanner displacement, and perfect trial performance. This process generated four idealized whole-brain time series corresponding to both long-delay conditions in which targets were located in either the left or right visual hemifield. We used these idealized BOLD time series in our construction of the canonical brain states.
Constructing motion templates and other spatial nuisance regressors {#s4-9}
-------------------------------------------------------------------
Including lagged motion regressors during deconvolution accounts for the prolonged linear effects of in-scanner motion up to 2 TRs after a given movement ([@bib15]). To control for any effects of motion that may continue beyond that time and bias measures of brain state variability, we developed a method to account for temporally prolonged linear motion artifacts in fMRI data based on motion template volumes that model the spatial pattern of artifacts associated with the linear effects of motion ([Figure 10a](#fig10){ref-type="fig"}). We normalized the regression coefficients associated with each lagged motion regressor at each voxel by the temporal standard deviation of the voxel's post-deconvolution residuals and computed their resulting mean spatial patterns across all subjects. We used these whole-brain patterns of normalized regression coefficients to construct motion artifact templates. For each of the 28 templates (7 motion components with 4 temporal leads/lags), we subtracted the spatial mean of all voxel values and scaled the resulting volumes to a common vector magnitude. Then, using principal component decomposition, we found a set of 11 motion templates that captured \>90% of the variability in the set, which were then converted back into 3D volumes.
Figure 10. {#fig10}
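The template construction reduces to a standard principal component decomposition of the 28 coefficient maps. A compact NumPy sketch, with illustrative shapes, follows.

```python
import numpy as np

def motion_templates(coef_maps, var_frac=0.90):
    """Reduce 28 normalized motion-coefficient maps (7 components x 4
    leads/lags) to the principal templates capturing >= var_frac of the
    set's variance.

    coef_maps : (28, n_voxels), already spatially demeaned and scaled
                to a common vector norm."""
    # SVD of the template set; rows are maps, columns are voxels.
    U, s, Vt = np.linalg.svd(coef_maps, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(explained, var_frac) + 1)
    return Vt[:k]  # (k, n_voxels) orthonormal motion templates
```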
We verified that the set of motion templates captured components of linear motion-related BOLD signal variability that remained after deconvolution by computing SS~motion~, the sum of squared error associated with all motion templates across all TRs in the whole-brain residual BOLD time series. We then computed the ratio SS~motion~/(SS~brain~ + SS~error~), where SS~brain~ is the sum of squared error associated with brain state variability and SS~error~ is the unexplained sum of squared error. We refer to this ratio as motion template variance. We found that estimates of mean FD and motion template variance are highly related (t(337)=10.3; p=1.06e-21), indicating that even with rigorous motion controls during deconvolution (see Materials and methods), significant linear motion artifacts remain ([Figure 10b](#fig10){ref-type="fig"}). We found that while brain state variability is not significantly related to mean FD (t(337)=1.39; p=0.17), it is significantly greater in subjects exhibiting greater motion template variability (t(337)=2.01; p=0.045) ([Figure 10c--d](#fig10){ref-type="fig"}). The usage of motion templates as an additional control for motion-related artifacts arose from an abundance of caution and the need for high dimensional nuisance regressors that could be fit simultaneously to each volume alongside the set of brain state patterns. While these preliminary analyses suggest promise for the approach, we acknowledge that it has yet to be fully validated.
Although the relationship between brain state variability and movement was small and inconsistent across different estimates of motion, we sought a more rigorous control by comparing brain state variability in a group of high motion adults to low motion children and adolescents (see Results).
To remove trivial components of whole-brain BOLD signal variability associated with TR-to-TR shifts in the global mean across the entire imaged volume, we included a constant offset template, which consisted of a whole-brain binary mask. To account for TR-to-TR shifts in mean BOLD signal that are limited to the gray matter (the sort of variability which in some resting state studies is removed via global gray matter signal regression) we included a second binary template defined by the brain state mask (see Materials and methods Estimating canonical brain states).
In addition, we constructed a set of spatial gradient templates to account for other trivial modes of whole-brain BOLD signal variability consisting of simple linear spatial gradient patterns. We first created 3 spatial gradient volumes whose voxel values were equal to their x, y, and z coordinates relative to the volume\'s center of mass. We set each voxel that fell outside of a whole-brain MNI mask to zero. We then computed a set of 3 'interaction' templates that corresponded to each pair-wise product of the spatial gradient templates. As a whole, this set of 19 templates constituted the set of spatial nuisance regressors that we used to capture and remove remaining unwanted spatial modes of whole-brain BOLD signal variability associated with motion and trivial variability in global signal.
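The gradient templates are simple deterministic volumes. A sketch of their construction, assuming a boolean brain mask, is shown below; the function name is illustrative.

```python
import numpy as np

def gradient_templates(mask):
    """Linear spatial gradient templates (x, y, z relative to the volume's
    center of mass) plus their three pairwise products, zeroed outside
    the brain mask.

    mask : boolean 3-D array. Returns a (6, *mask.shape) array."""
    coords = np.indices(mask.shape).astype(float)      # (3, X, Y, Z)
    com = np.array([c[mask].mean() for c in coords])   # center of mass
    gx, gy, gz = coords - com[:, None, None, None]
    templates = np.stack([gx, gy, gz, gx * gy, gx * gz, gy * gz])
    templates[:, ~mask] = 0.0                          # zero outside brain
    return templates
```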
We included the motion templates and spatial constant and gradient templates as additional nuisance regressors in all analyses that involved projecting whole brain volumes of BOLD signal onto the canonical set of brain state components.
Estimating canonical brain states {#s4-10}
---------------------------------
We derived the canonical brain states from whole-brain patterns of BOLD activity extracted from different time points within the idealized trial time courses. To constrain the brain state patterns to swathes of gray matter that were reliably imaged across sessions, we constructed a mask defined by the intersection of a probabilistic MNI gray-matter mask (threshold≥0.5) and a mask constructed from the group average of the model R-squared maps from each session\'s initial deconvolution (threshold≥0.27, selected to remove unreliably imaged brain regions). Combined, these masks served to eliminate white matter; CSF; the hind-brain; the cerebellum, the inferior portion of which was not consistently imaged across sessions; the artifact-prone basal forebrain; and the infero-temporal lobes. The extent of the resulting brain state mask can be observed in [Figure 2b](#fig2){ref-type="fig"}.
Each canonical brain state consists of a mean and spatial component. The mean component represents the average pattern of whole-brain BOLD activity evoked by a specific task epoch, regardless of target location. The spatial component represents the difference between the patterns of whole-brain BOLD activity evoked by specific task epochs for right side versus left side targets. To reduce the extent to which BOLD signal resulting from a trial\'s initial visually-guided saccade contaminated our estimate of maintenance- and retrieval-related brain states, we maximized the temporal distance between the task epochs by using time courses from long delay trials only.
The canonical mean and spatial VME brain states were derived from the whole-brain pattern of activity occurring within the brain state mask around the time of the visually guided saccade. To account for hemodynamic lag, we extracted four volumes of BOLD activity (corresponding to the left and right side targets from both long delay conditions) from the idealized long delay time courses six seconds (4 TRs) after the visually guided saccade was performed, to ensure that most regions would have achieved their peak BOLD response ([@bib20]). We constructed the mean VME brain state by computing the voxel-wise average of normalized BOLD activity across all four volumes. To construct the spatial VME state, we separately computed the voxel-wise averages of the VME volumes for right and left side targets. The spatial state was defined as the difference (right minus left) of the resulting volumes.
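Given the four extracted volumes, the mean and spatial components reduce to simple averages and differences. The sketch below assumes masked, normalized voxel vectors as inputs; the maintenance and retrieval states follow the same recipe after the orthogonalization steps described next.

```python
import numpy as np

def vme_states(vol_r1, vol_l1, vol_r2, vol_l2):
    """Build mean and spatial VME state patterns from four idealized
    volumes (right/left targets from the two long-delay conditions),
    each sampled 4 TRs after the visually guided saccade.

    Each input: (n_voxels,) masked, normalized BOLD pattern."""
    right = (vol_r1 + vol_r2) / 2
    left = (vol_l1 + vol_l2) / 2
    mean_state = (right + left) / 2   # target-independent pattern
    spatial_state = right - left      # right-minus-left difference
    return mean_state, spatial_state
```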
Canonical maintenance-related brain states were based on the patterns of normalized BOLD signal that occurred during the TR immediately prior to a subjects' execution of the memory-guided saccade. Since brain states were estimated from the long delay trial types only, the maintenance brain state patterns were naturally separated from the BOLD response evoked by the initial stimulus presentation and the subjects\' subsequent visually guided saccade. To completely remove any remaining stimulus and visuomotor contributions, as well as global average signal, we extracted all volumes from the idealized whole-brain trial time courses that occurred before or included a trial\'s initial visually guided saccade and regressed these patterns from the maintenance state patterns. States were then constructed for maintenance intervals as described for the VME states above.
Construction of the retrieval-related mean and spatial brain states followed a similar course as that used in the construction of the VME and maintenance-related brain states. Like the VME states, retrieval-related states were based on the normalized BOLD responses occurring during the 4^th^ TR after the execution of the memory-guided saccade. We removed all components of VME and maintenance activity by regressing out every pattern of activity within the brain state mask that occurred before the TR at which the subject began a memory-guided saccade.
The resulting mean and spatial brain states, as expected, exhibited a high degree of mirror symmetry across the midline plane. However, we observed that noise and artifacts were introduced into the brain state patterns by accumulating error in our regression-based orthogonalization procedure. To correct for these artifacts, we leveraged the symmetry of the brain state patterns by averaging them with corresponding brain state patterns derived from a left/right mirrored version of the idealized BOLD time series. To do this, we constructed left/right mirrored versions of the idealized trial time courses, performed an identical set of operations to define each brain state, and combined the results with the original set of brain state patterns. We first applied a mirror matrix to the idealized trial time course volumes. We aligned the mirrored volumes to the standard MNI template using a 12 degrees-of-freedom affine transformation. We then applied a non-linear transformation that warped the mirrored and aligned volumes to the standard MNI template. The final mean brain state components were constructed by computing the voxel-wise average of the mirrored and non-mirrored mean brain states. We inverted the sign of all voxel values of the mirrored spatial volumes before averaging them with the non-mirrored spatial volumes to produce the final spatial brain state components. In practice, the mirroring procedure had only a minor effect on our results, visibly removing noise and improving the relationship that we observed between trial-to-trial behavioral performance and brain state expression ([Figure 4](#fig4){ref-type="fig"}).
Average time courses of brain state expression and brain state variability {#s4-11}
--------------------------------------------------------------------------
We converted the average whole-brain trial time series and whole-brain residual time series into average time courses of brain state expression and variability, respectively. For each TR, we extracted the whole-brain pattern of activity, which we then vectorized and modeled using a linear regression. Our design matrix consisted of vectorized versions of the six brain states (the mean and spatial components of the VME, maintenance, and retrieval states) as well as the 19 nuisance regressor templates described above. For each TR we extracted the regression weights for the six brain states, motion, and nuisance components and ordered them into a time series. When this procedure is performed on the whole-brain **average trial time series**, the result is a time course of expression of each of the brain states during a trial. When performed on the whole-brain **residual time series**, the result is a time course of brain state fluctuations, where positive values indicate that a particular brain state was present to a greater extent than average and negative values indicate that a state was expressed less than average. For each session, we temporally z-scored the time series of variability for each brain state component.
Trial-to-trial brain state and behavior relationship {#s4-12}
----------------------------------------------------
For each session we separately transformed reaction time and saccadic error from each of the four main task conditions into z-scores. SE was rectified such that high SE values reflect greater error in memory-guided saccadic endpoints on a trial. We excluded all trials for which measurements of reaction time and endpoints for both visually- and memory-guided saccades were unavailable due to blink artifacts, noisy data, or transient loss of pupil- or corneal reflection-lock.
We related trial-to-trial variability in reaction time and accuracy to variability in the expression of each brain state across a range of times (±15 TRs) relative to the TRs that contained the memory-guided saccades from each trial ([Figure 4](#fig4){ref-type="fig"}). Using all correct trials across all sessions, we extracted our z-scored measurements of brain state fluctuation derived from the whole-brain residual time series. We then constructed a regression model that included terms for the measured values of each brain state at the relative TR. We also included terms for the spatial brain state interaction with target hemifield.
Each model contained terms that varied across trials but did not vary across relative TRs. These included terms for run number (coded as 1--3), target hemifield (coded as −1, 1), target location (eccentricity, coded from least to most eccentric as 1--3), and the square of the target location term. Because it is possible that RT and SE are correlated on a trial-to-trial basis, a true relationship between brain state variability and RT may result in a trivial relationship between brain state variability and SE, or vice versa. To account for this possibility, in the RT regression models we included a term for trial-to-trial SE and its square. Similarly, for the SE regression models we included a term for RT and its square. This set of regressor terms served as a null model against which the full brain state model was compared. The trial-wise reaction time and accuracy models were fit using a linear mixed-effects framework (MATLAB) to account for the different numbers of repeated measurements for many of the subjects. Subject identity was modeled as a random effect. We used the difference between the ordinary R^2^ values for full and null models at each relative TR to assess the amount of unique behavioral variability accounted for by trial-to-trial fluctuations in the expression of different brain states. At each relative TR we compared the null and full models using a simulated maximum-likelihood estimation procedure with 5000 iterations (MATLAB).
Reaction time simulations {#s4-13}
-------------------------
To perform these simulations, we compiled a distribution of reaction times for all correct memory-guided saccades across our subject database. We then simulated 400 trials with reaction times selected to produce a distribution identical to the compiled empirical distribution. To simulate the simple timing-based effect of BOLD signal variability, we generated an impulse function for each simulated trial: a vector in which all but one element is equal to zero, where each consecutive element corresponds to a 60 ms time bin after the extinction of the central fixation cross (the signal to perform a memory-guided saccade). For each draw from the reaction time distribution, we generated a new impulse function vector by inserting a 1 into the vector at the index corresponding to the reaction time on that trial. We convolved each of the 400 impulse functions with a canonical HRF modeled at the same 60 ms resolution. This produced a set of HRF time series whose times of peak amplitude varied with reaction time.
Amplitude-based simulations were performed similarly but with two key differences: (1) for each trial we inserted the 1 into the impulse function vector at the same time index, corresponding to the mean reaction time across trials; and (2) we added to or subtracted from the 1 a linearly interpolated value between ±0.25, where +0.25 corresponded to the fastest reaction time and −0.25 corresponded to the slowest reaction time. Mixed amplitude- and timing-based simulations were a hybrid of the two described above: the index of the 1 for each trial\'s impulse function was selected to coincide with the reaction time on that trial, and an additional amplitude modulation factor, as above, was added to the impulse value at that index.
Separately for the timing-, amplitude-, and timing-and-amplitude-based simulations, we computed the mean HRF time series across all trials and simulated residual time series by subtracting the mean HRF time series from the individual trial time series. Next, we divided the simulated residuals into fast and slow RT sets, defined by median split, and calculated their averages. Lastly, we computed the time integral of the mean residual time series for fast and slow trials.
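The three simulation variants can be reproduced with a few lines of NumPy/SciPy. In the sketch below, the double-gamma HRF parameters are generic stand-ins rather than the exact canonical HRF used in the paper, and the amplitude mapping mirrors the ±0.25 rule described above.

```python
import numpy as np
from scipy.stats import gamma

DT = 0.060                                    # 60 ms time bins
t = np.arange(0, 30, DT)
# A generic double-gamma HRF approximation (illustrative parameters).
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
hrf /= hrf.max()

def simulate(rts, mode, n_bins=600):
    """Simulate per-trial BOLD responses under timing- and/or amplitude-
    based coupling with reaction time.

    rts  : reaction times in seconds
    mode : 'timing', 'amplitude', or 'both'
    Returns cumulative time integrals of the mean residuals for fast
    and slow (median-split) trials."""
    mean_idx = int(np.mean(rts) / DT)
    # Map RT onto an amplitude modulation in [-0.25, +0.25]; the fastest
    # trial gets +0.25, the slowest -0.25.
    amp = np.interp(rts, (rts.min(), rts.max()), (0.25, -0.25))
    trials = []
    for rt, a in zip(rts, amp):
        impulse = np.zeros(n_bins)
        idx = int(rt / DT) if mode in ('timing', 'both') else mean_idx
        impulse[idx] = 1 + (a if mode in ('amplitude', 'both') else 0)
        trials.append(np.convolve(impulse, hrf)[:n_bins])
    trials = np.asarray(trials)
    resid = trials - trials.mean(axis=0)      # per-trial residual responses
    fast = rts <= np.median(rts)
    return (resid[fast].mean(0).cumsum() * DT,
            resid[~fast].mean(0).cumsum() * DT)
```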
To compare the simulated pattern of high temporal resolution residuals to the actual data, we extracted equivalent snippets (beginning at the TR containing the MGS) of the mean VME brain state fluctuation time course for all trials and all subjects. We selected the mean VME state because of its close relationship with visuomotor processes, which makes it most likely to reflect a trivial timing-based relationship between RT and brain state expression. As in the simulated data, for each subject we divided the snippets into fast and slow RT trials based on a median split and then calculated the group average time series. Finally, we interpolated the resulting time series to a matched temporal resolution using shape-preserving piece-wise cubic interpolation (MATLAB).
Funding Information
===================
This paper was supported by the following grants:
- National Institutes of Health (<http://dx.doi.org/10.13039/100000002>) 5R01MH067924 to David Florentino Montez, Finnegan J Calabro, Beatriz Luna.
- Staunton Farm Foundation to Finnegan J Calabro, Beatriz Luna.
Additional information {#s6}
======================
No competing interests declared.
Conceptualization, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing---original draft, Writing---review and editing.
Validation, Investigation, Methodology, Writing---review and editing.
Conceptualization, Resources, Supervision, Funding acquisition, Investigation, Methodology, Project administration, Writing---review and editing.
Human subjects: Participants and/or their legal guardians provided informed consent before participating in this study. Experimental procedures for this study complied with the Code of Ethics of the World Medical Association (1964; Declaration of Helsinki) and the Institutional Review Board at the University of Pittsburgh. Subjects were paid for their participation in the study.
Additional files {#s5}
================
DOI: 10.7554/eLife.25606.013
Major datasets {#s7}
--------------
The following dataset was generated:
David Florentino Montez, Finnegan J Calabro, Beatriz Luna (2017) Data from: The expression of established cognitive brain states stabilizes with working memory development. <http://dx.doi.org/10.5061/dryad.68ff1>. Available at Dryad Digital Repository under a CC0 Public Domain Dedication.
DOI: 10.7554/eLife.25606.016
Decision letter
Sabine Kastner, Reviewing Editor, Princeton University, United States
In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.
Thank you for submitting your article \"Stable engagement of cognitive brain states underlies the maturation of working memory\" for consideration by *eLife*. Your article has been reviewed by two peer reviewers, and the evaluation has been overseen by Sabine Kastner as the Reviewing Editor and Senior Editor. The following individual involved in review of your submission has agreed to reveal her identity: Silvia Bunge (Reviewer \#1).
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
Summary and Essential revisions:
The reviewers and editor thought that this is an interesting and methodologically rigorous paper examining the neural basis of reduced variability in cognitive performance over the course of development. The authors measure brain state variability while participants across a wide age range perform a memory-guided saccade task, and show that this variability declines with age. They attempt to parse the task trials into different components and argue that it is particularly the delay-period activation that is showing an age-related decrease in brain state variability over time.
The reviewers were enthusiastic about the paper. However, they thought that the methods were often hard to understand, and it would be difficult to replicate the study with its present methods description. Therefore, we are asking you to thoroughly revise particularly the Methods section and improve its clarity. There are many details missing, the analysis pipeline needs better justification. Please find the reviews appended for further suggestions.
Reviewer \#1:
This is an interesting and methodologically rigorous paper examining the neural basis of reduced variability in cognitive performance over the course of development. The authors measure brain state variability while participants across a wide age range perform a memory-guided saccade task, and show that this variability declines with age. They attempt to parse the task trials into different components (the initial encoding, the delay period -- and more specifically the delay-period activity that represents the spatial information -- and the memory-guided saccade) and argue that it is particularly the delay-period activation that is showing an age-related decrease in brain state variability over time. Critically, the authors show that their results cannot be accounted for by reduced head motion. They hypothesize that these results reflect increased neural gain over the course of development.
The Introduction and Discussion sections are very clearly written. I did not follow the methods section all the way through, although it is entirely possible that someone with more familiarity with these methods would find it clearly written. I found the figure legends insufficiently detailed, and wasn\'t always sure what I was looking at. I have pointed to a few parts of the manuscript that would warrant clarification, but would recommend going through it carefully one more time to identify gaps.
I am always a little wary of efforts to separate the encoding and maintenance periods of a working memory task, as it seems artificial -- and orthogonalizing these components seems all the more so. However, I understand why the authors have done this, and have no specific concerns.
Overall, I think that this paper will make a valuable contribution to our understanding of cognitive development.
Reviewer #2:
This report is from a strong group with a very impressive dataset that sets out to show that adolescent working memory is supported by specified brain states during these tasks. Specifically, brain state variability is tightly linked to the variance and development of task performance. The work is certainly novel and the topic is important; however, it is extremely dense and very difficult to follow. The logic and approach here are not very straightforward, and very hard to understand, which would make a future replication of the findings quite difficult. At the same time it's also not clear that the construct in question is actually being directly examined by the methods. I will admit that I am only one reviewer who is potentially simply missing something, because the general idea and findings are interesting; I just found it very difficult to evaluate.
More effort needs to be done in describing why what is being done here represents "brain states."
Is there any analysis validating the procedures to decompose the patterns of brain activity? In other words, how do we know what is being described works with regard to isolating the signals? For example, peak activity in the BOLD responses is assumed to occur after 6 seconds, but we know this is highly variable for tasks and brain regions. It's just hard to know whether what is being done here is actually disentangling these processes. There are simply a lot of assumptions throughout the manuscript, and what seem to be arbitrary decisions regarding the analysis stream, without any clear rationale or validation.
Along the same lines, the authors did a good job at handling various confounds, such as motion, but again, the details and procedures for exactly how this was being done are not there, nor am I aware that the procedures have been validated elsewhere. As the authors' own analysis shows, getting this right for this type of analysis is quite important.
All figures/plots (including Figure 1) of the model fits need to include the raw data to be able to visualize the quality of the fits.
Also need to include scales on all of the images. They should also be consistent within an image.
Overall, I thought the work was interesting, but simply hard to follow and difficult to evaluate rigorously.
[Editors' note: further revisions were requested prior to acceptance, as described below.]
Thank you for resubmitting your work entitled "The expression of established cognitive brain states stabilizes with working memory development" for further consideration at *eLife*. Your revised article has been favorably evaluated by Sabine Kastner (Senior & reviewing editor), and two reviewers (Silvia Bunge & Damien Fair).
The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below:
Reviewer #2:
The authors did a relatively strong job at making things more clear. It is still a very dense manuscript, but again the concepts are strong and the report is important. It should be accepted and will be a very nice contribution.
As the authors know, this reviewer is very concerned about motion having an effect on these types of analyses for several reasons, but based on the recent literature (e.g. Siegel et al., 2014), the findings here would be expected based on how movement affects task-related signals. While the authors provide some more citations related to their motion correction procedures, they are often older citations that predate many of the recent reports on the issues with using traditional translation and rotation numbers to quantify the impact of motion on the BOLD signals. I won't belabor that here. In addition, the spatial template procedure here, while novel, doesn't account for the non-spatial artifacts of motion (see Siegel et al., 2016; Burgess et al., 2016, amongst others). I do not want this to take away from my overall enthusiasm for the work here. It is quite strong and very interesting; I'm just overly cautious. What would be great for the authors to do is two-fold. First, simply point out that while you were overly cautious in controlling for motion here, future work will have to be done to validate the efficacy of the procedures. Second, the analysis for Figure 7 (which I thought was the most convincing, actually) should be re-done, but rather than matching on the traditional translation and rotation numbers, the matching should be done using mean frame-to-frame displacement (as in Siegel, 2014; Power, 2012). I only say this because matching based on traditional translation and rotation numbers doesn't always provide groups as you might expect with regard to motion (this is the problem with many of the papers in the connectivity literature: they are "matching" on the wrong parameters). Again, just being overly cautious here. Perhaps this suggestion was already what was used for making the high and low motion groups; however, what measures were actually used for this analysis I can't tell based on the results or methods sections. If it was done this way already, then just clarify.
10.7554/eLife.25606.017
Author response
*Reviewer #1:*
*This is an interesting and methodologically rigorous paper examining the neural basis of reduced variability in cognitive performance over the course of development. The authors measure brain state variability while participants across a wide age range perform a memory-guided saccade task, and show that this variability declines with age. They attempt to parse the task trials into different components (the initial encoding, the delay period -- and more specifically the delay-period activity that represents the spatial information -- and the memory-guided saccade) and argue that it is particularly the delay-period activation that is showing an age-related decrease in brain state variability over time. Critically, the authors show that their results cannot be accounted for by reduced head motion. They hypothesize that these results reflect increased neural gain over the course of development.*
We thank the reviewer for indicating that this important aspect of our study was not clear. We have now relabeled the relevant section of the results to clarify that, critically, we did not observe evidence for a change in *mean* neural gain over the course of development; rather we observed a reduction in neural gain *variability.*
*The Introduction and Discussion sections are very clearly written. I did not follow the methods section all the way through, although it is entirely possible that someone with more familiarity with these methods would find it clearly written. I found the figure legends insufficiently detailed, and wasn't always sure what I was looking at. I have pointed to a few parts of the manuscript that would warrant clarification, but would recommend going through it carefully one more time to identify gaps.*
We have now edited the text to provide greater clarity and added a flow chart (Figure 9) for inclusion in the methods section that provides a broad outline of the concept, process, and goals of the brain state analyses. We hope this figure will provide a convenient simplified framework for interpreting the detailed methods sections and facilitate replication.
*I am always a little wary of efforts to separate the encoding and maintenance periods of a working memory task, as it seems artificial -- and orthogonalizing these components seems all the more so. However, I understand why the authors have done this, and have no specific concerns.*
We agree with Reviewer 1 and acknowledge that the separation of encoding and maintenance periods is somewhat artificial. We now clarify the importance of identifying the brain state patterns associated with visuomotor processes, i.e. visually-guided eye movements that critically distinguish the encoding period from maintenance processes. In addition, in the Discussion section we acknowledge that our task design would not, in principle, allow us to dissociate putative encoding processes from visuomotor processes and therefore refer to the visuomotor state as visuomotor/encoding (VME).
*Reviewer #2:*
*This report is from a strong group with a very impressive dataset that sets out to show that adolescent working memory is supported by specified brain states during these tasks. Specifically, brain state variability is tightly linked to the variance and development of task performance. The work is certainly novel and the topic is important; however, it is extremely dense and very difficult to follow. The logic and approach here are not very straightforward, and very hard to understand, which would make a future replication of the findings quite difficult. At the same time it's also not clear that the construct in question is actually being directly examined by the methods. I will admit that I am only one reviewer who is potentially simply missing something, because the general idea and findings are interesting; I just found it very difficult to evaluate.*
We agree that the paper is very dense; we have incurred a significant explanatory burden in attempting to report novel scientific findings as well as the new methods by which they were revealed. Significant changes have been made throughout the body of the manuscript and the methods section to clarify our approach and rationale. We have also included an additional explanatory figure (Figure 9), which outlines the major goals and analyses that we performed and provides a conceptual flowchart that links the major steps in the analyses to the relevant results figures. We hope that the inclusion of this figure will provide a better conceptual framework by which readers can interpret the individual sections of the Materials and methods.
*More effort needs to be done in describing why what is being done here represents "brain states."*
We have reworked the text to better clarify why our approach defines a brain state and what we mean by using this term. In addition, edits to Figure 2 now elaborate on the method of estimating brain state patterns and depict graphically what is represented by a brain state, juxtaposing the time courses of individual exemplar voxels whose activity intuitively associates them with a particular component of the task alongside the brain states derived from the idealized BOLD time series.
*Is there any analysis validating the procedures to decompose the patterns of brain activity? In other words, how do we know what is being described works with regard to isolating the signals? For example, peak activity in the BOLD responses is assumed to occur after 6 seconds, but we know this is highly variable for tasks and brain regions. It's just hard to know whether what is being done here is actually disentangling these processes. There are simply a lot of assumptions throughout the manuscript, and what seem to be arbitrary decisions regarding the analysis stream, without any clear rationale or validation.*
The time courses depicted in Figure 3 provide validation that our procedure isolates signals associated with visuomotor, maintenance, and retrieval processes. For instance, examination of the time course of the VME states in the long delay trials reveals two peaks of expression, associated with the VGS and MGS events; the maintenance state is expressed maximally between the occurrence of the VGS and MGS; and lastly, the retrieval state is expressed only during the MGS, indicating that we have successfully removed the components associated with the visuomotor processes which occur during the VGS. Importantly, all subjects, regardless of age, similarly express the canonical brain states on average.
*Along the same lines, the authors did a good job at handling various confounds, such as motion, but again, the details and procedures for exactly how this was being done are not there, nor am I aware that the procedures have been validated elsewhere. As the authors' own analysis shows, getting this right for this type of analysis is quite important.*
We dealt with potential motion confounds in several ways, many of which have been employed in other studies. We also developed an additional method of estimating the magnitude of residual motion artifacts that is idiosyncratic to our brain state approach. We thank the reviewer for pointing out the need for additional citations for those approaches that have historical precedent, as well as for additional clarification and validation of the approach that we developed.
First, we excluded sessions in which average displacement per TR exceeded 2.0 mm. Second, we used leading and lagging motion regressors to estimate and account for the prolonged and autocorrelated effects of motion, as outlined previously (Friston et al., 1996). Importantly, this is algebraically equivalent to including higher-order derivatives of the motion time series as nuisance regressors, which is a common practice in fMRI time series analysis. Lastly, as noted in Materials and methods, the regression models that estimate the relationship between age and measures of brain state variability include a summary measure of in-scanner motion as a nuisance covariate.
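To make the construction concrete, the following is a minimal sketch of how leading and lagging copies of the motion time series can be assembled into a nuisance design matrix. The one-TR shifts and the six-column parameter layout are illustrative assumptions, not details taken from the study's actual code.

```python
import numpy as np

def motion_nuisance_matrix(motion, lags=(-1, 0, 1)):
    """Assemble motion estimates plus shifted copies into nuisance regressors.

    motion : (n_vols, 6) array of rigid-body motion time series.
    lags   : shifts in TRs; negative = leading copy, positive = lagging copy,
             approximating the prolonged, autocorrelated effects of motion.
    """
    columns = []
    for lag in lags:
        shifted = np.zeros_like(motion)
        if lag < 0:
            shifted[:lag] = motion[-lag:]   # leading copy (shifted earlier)
        elif lag > 0:
            shifted[lag:] = motion[:-lag]   # lagging copy (shifted later)
        else:
            shifted = motion.copy()
        columns.append(shifted)
    return np.hstack(columns)               # (n_vols, 6 * len(lags))
```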
We verify that brain state variability remains greater in a group of younger, low-motion subjects than in a group of older, high-motion subjects. The algorithm that we employ to generate these biased distributions is a minor variation on the mean-matching procedures (based on an intersection-of-histograms approach) that are often used to match firing rates as a control in electrophysiological experiments (e.g. Churchland et al., 2007; Churchland et al., 2010; Cohen & Maunsell, 2009). We have now included these references.
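As a rough sketch of the intersection-of-histograms idea cited above, two samples can be subsampled so that their distributions match; the bin count and random sampling details below are arbitrary illustrative choices, not the study's implementation.

```python
import numpy as np

def matched_subsample(a, b, bins=20, rng=None):
    """Subsample two 1-D samples so their histograms match their intersection.

    Within each shared bin, keep only as many values from each sample as the
    smaller of the two bin counts, discarding the rest at random. Assumes the
    two samples overlap in at least one bin.
    """
    rng = np.random.default_rng() if rng is None else rng
    edges = np.histogram_bin_edges(np.concatenate([a, b]), bins=bins)
    ia = np.digitize(a, edges[1:-1])        # bin index 0..bins-1 per value
    ib = np.digitize(b, edges[1:-1])
    keep_a, keep_b = [], []
    for k in range(bins):
        xa, xb = a[ia == k], b[ib == k]
        n = min(len(xa), len(xb))           # intersection of the histograms
        if n:
            keep_a.append(rng.choice(xa, n, replace=False))
            keep_b.append(rng.choice(xb, n, replace=False))
    return np.concatenate(keep_a), np.concatenate(keep_b)
```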
Our novel contribution to controlling for motion confounds is the construction of motion templates derived from the estimated motion time series. These templates are primarily useful when fitting individual volumes (TRs) to the set of brain state patterns, as they capture residual whole-brain BOLD signal variance known to be associated with movement. Their inclusion slightly improved the measured relationship between trial-to-trial brain state expression and behavior (Figure 4).
While we do think that this approach, or a variation on it, may provide some conceptual advantage over PCA- or ICA-based methods (particularly because the motion templates are empirically related to movement and require no interpretation), in this study, however, we use them simply as additional nuisance regressors in the trial-to-trial brain state/behavior analyses and as a secondary measure of motion-related BOLD signal artifacts, providing an alternative motion-related nuisance regressor. That this method is effective for removing linear effects of motion remaining after deconvolution is demonstrated in Figure 10b, which shows that the proportion of whole-brain variability associated with the motion templates is highly correlated with motion estimates.
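As a sketch of how such a proportion-of-variance measure could be computed, each volume can be fit to the spatial templates by ordinary least squares and the resulting R^2 averaged over volumes; the variable shapes below are assumptions for illustration only.

```python
import numpy as np

def template_variance_explained(Y, T):
    """Mean fraction of per-volume signal variance captured by templates.

    Y : (n_voxels, n_vols) BOLD data, one column per volume (TR).
    T : (n_voxels, k) spatial nuisance templates.
    """
    beta, *_ = np.linalg.lstsq(T, Y, rcond=None)     # (k, n_vols) fits
    resid = Y - T @ beta
    ss_res = (resid ** 2).sum(axis=0)
    ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))
```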
*All figures/plots (including Figure 1) of the model fits need to include the raw data to be able to visualize the quality of the fits.*
We have now included the individual data points on these figures.
*Also need to include scales on all of the images. They should also be consistent within an image.*
We have now included a color bar scale on the figures depicting the canonical task-related brain state patterns and note that they are normalized to a common magnitude.
[Editors' note: further revisions were requested prior to acceptance, as described below.]
*Reviewer #2:*
*As the authors know, this reviewer is very concerned about motion having an effect on these types of analyses for several reasons, but based on the recent literature (e.g. Siegel et al., 2014), the findings here would be expected based on how movement affects task-related signals. While the authors provide some more citations related to their motion correction procedures, they are often older citations that predate many of the recent reports on the issues with using traditional translation and rotation numbers to quantify the impact of motion on the BOLD signals. I won't belabor that here. In addition, the spatial template procedure here, while novel, doesn't account for the non-spatial artifacts of motion (see Siegel et al., 2016; Burgess et al., 2016, amongst others). I do not want this to take away from my overall enthusiasm for the work here. It is quite strong and very interesting; I'm just overly cautious. What would be great for the authors to do is two-fold. First, simply point out that while you were overly cautious in controlling for motion here, future work will have to be done to validate the efficacy of the procedures. Second, the analysis for Figure 7 (which I thought was the most convincing, actually) should be re-done, but rather than matching on the traditional translation and rotation numbers, the matching should be done using mean frame-to-frame displacement (as in Siegel, 2014; Power, 2012). I only say this because matching based on traditional translation and rotation numbers doesn't always provide groups as you might expect with regard to motion (this is the problem with many of the papers in the connectivity literature: they are "matching" on the wrong parameters). Again, just being overly cautious here. Perhaps this suggestion was already what was used for making the high and low motion groups; however, what measures were actually used for this analysis I can't tell based on the results or methods sections. If it was done this way already, then just clarify.*
To address the first point, we have now included the following passage in the section that discusses the motion template method:
"The usage of motion templates as an additional control for motion-related artifacts arose from an abundance of caution and the need for high dimensional nuisance regressors that could be fit simultaneously to each volume alongside the set of brain state patterns. While these preliminary analyses suggest promise for the approach, we acknowledge that it has yet to be fully validated. (subsection \"2.3 Constructing motion templates and other spatial nuisance regressors").
In regard to the second suggestion, Reviewer #2's point about the value of using a traditional estimate of in-scanner motion as a nuisance covariate is well taken. We recognize that our use of an idiosyncratic measure of motion would have served as an unnecessary distraction for readers of an already complex paper and made it more difficult to compare our results with the extant literature. We now use mean FD to quantify in-scanner motion for all of our analyses (as calculated in Power, 2012). In practice, mean FD is highly correlated (essentially differing by a scaling factor) with the values that we had used to quantify motion in our initial analyses (r=0.9977; p~0; across all sessions). Thus, our results differ only nominally. We selected a conservative motion threshold of mean FD <= 0.15 mm, which allowed us to keep a nearly identical data set. As a bonus, applying this selection criterion resulted in the inclusion of three additional sessions of data that were not in the original analyses. The effect on our results was minimal and is reflected in the slight differences in reported statistics and their corresponding degrees of freedom throughout the paper. In order to have a consistent metric for motion throughout the paper, all relevant figures have been regenerated from the resulting data set, and group-level analyses now use mean FD as a motion-related nuisance regressor. In addition, we include source citations for the calculation of mean FD.
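For reference, mean FD as defined in Power et al. (2012) can be computed from the six realignment parameters as sketched below; the 50 mm head-radius conversion for rotations follows that paper, while the column ordering is an assumption for illustration.

```python
import numpy as np

def mean_fd(motion, head_radius=50.0):
    """Mean framewise displacement (Power et al., 2012).

    motion : (n_vols, 6) realignment parameters; columns 0-2 are
             translations in mm, columns 3-5 are rotations in radians.
    """
    rot_mm = motion[:, 3:6] * head_radius        # arc length on a sphere
    params = np.hstack([motion[:, :3], rot_mm])
    fd = np.abs(np.diff(params, axis=0)).sum(axis=1)
    return float(fd.mean())
```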
|
{
"pile_set_name": "PubMed Central"
}
|
Let y be 5 + -2. Let n(z) = -y*z**2 + 0*z**2 + 5*z**2. Let f(q) be the second derivative of -2*q**3/3 - 5*q. Calculate n(f(h)).
32*h**2
Suppose 2*b = 2*o - 2, b = 2*o + o - 3. Let q(l) = o - 14*l - 1 + 16*l. Let x(n) = -10*n. Determine q(x(a)).
-20*a
Let o(r) = 32*r. Let h(z) = z - 152. What is o(h(y))?
32*y - 4864
Suppose 3*h + 2*h = 10. Let w(g) = -h*g + 6*g - 2*g. Let x(p) = -2*p**2 + 3*p**2 + 2*p**2. Calculate w(x(d)).
6*d**2
Let m(r) = -3*r. Let k(v) = 26*v - 53. Determine m(k(x)).
-78*x + 159
Let p(s) = -3*s**2 + 2. Let b(d) = 15*d**2 - 11. Let q(t) = -2*b(t) - 11*p(t). Let h(j) = -10*j**2. Give h(q(u)).
-90*u**4
Let z(u) = 7*u - 27*u + 8*u. Let q(x) = 2*x**2. Give q(z(s)).
288*s**2
Let b(f) be the first derivative of -f**3 + 1. Let t(k) = 4*k. Determine t(b(a)).
-12*a**2
Let u(f) = f + 1. Suppose 3*q = -11 - 7. Let y(z) = 7*z + 6. Let r(v) = q*u(v) + y(v). Let g(a) = -3*a**2. Determine r(g(b)).
-3*b**2
Let f(m) = m**2. Let q(p) = -403*p**2. What is q(f(u))?
-403*u**4
Let g(c) = -5*c. Suppose 4*a + 2 = 10. Let n(x) = -x + 9 + a*x - 9. Give n(g(y)).
-5*y
Let j(s) = 7*s. Let h(q) = 2*q. Determine h(j(v)).
14*v
Let j(w) = 2*w. Let r(o) be the third derivative of 0 + 0*o**4 - 4*o**2 + 1/10*o**5 + 0*o + 0*o**3. Calculate r(j(f)).
24*f**2
Let y(s) = s. Let u(x) be the second derivative of -x**3 - 11*x. Give y(u(j)).
-6*j
Let y(v) = -v**2 + v - 1. Let k(w) = -2*w - 4*w**2 - 2 + 4 + 5*w**2. Let x(o) = -k(o) - 2*y(o). Let s(a) = 3*a. Give x(s(b)).
9*b**2
Let w(m) = -2*m**2. Let u(s) = -3913*s. Calculate u(w(o)).
7826*o**2
Let f(m) = 2*m. Let k(u) = 7*u + 3. Let y(i) = -160*i - 68. Let z(t) = -136*k(t) - 6*y(t). Calculate f(z(c)).
16*c
Let o be 3 - (-2)/2 - -1. Let p(g) = -5*g**2 + 3*g**2 + o*g**2. Let t(s) = -3*s**2. What is t(p(b))?
-27*b**4
Let i(w) be the first derivative of -11*w**2/2 - 3. Let p(l) = -2*l**2. What is i(p(a))?
22*a**2
Let p(c) = c. Let s(h) = -1222*h**2 - 2*h. What is p(s(w))?
-1222*w**2 - 2*w
Suppose 7*j - 3*j = 0. Let t(i) = -i**2 + 2*i + 2. Let q be t(2). Let h(w) = j*w**2 + w**2 + w**2 - w**q. Let c(u) = -u. Calculate h(c(l)).
l**2
Let d(j) = j + 1. Let r(c) = -18*c - 15. Let w(m) = 15*d(m) + r(m). Let f(v) = v. What is w(f(i))?
-3*i
Let p(o) = o**2. Let k(i) be the third derivative of -i**5/15 + 11*i**2. Determine p(k(b)).
16*b**4
Let f(g) = -g**2. Suppose -2*n + 24 = 2*o, 24 = 2*n - 0*n + 3*o. Let s(a) = n*a**2 - 7*a**2 - 4*a**2. What is s(f(h))?
h**4
Suppose 0*h + h = 6*h. Let y(x) be the third derivative of 0*x + h + 0*x**4 + 2*x**2 - 1/15*x**5 + 0*x**3. Let d(p) = p**2. Give y(d(z)).
-4*z**4
Let r(q) = -27*q. Let y(o) = 8*o**2. What is y(r(d))?
5832*d**2
Let f(r) be the second derivative of -5*r**3/3 + 6*r + 6. Let w(q) = -4*q**2. What is w(f(u))?
-400*u**2
Let v = 8 - 5. Let s(y) = 3*y + y - 5*y + v*y. Let z(x) be the first derivative of x**2/2 + 2. Give s(z(t)).
2*t
Let i(j) = 4*j + 3*j**2 - 2*j**2 - 4*j. Let v(u) = -2*u**2. What is i(v(s))?
4*s**4
Let g(h) = 22*h + 23*h - 33*h. Let n(m) = -m. Determine n(g(u)).
-12*u
Let y(p) = -81*p. Let f(j) = 5*j**2. Give f(y(u)).
32805*u**2
Let v(d) be the first derivative of d**3 + 9. Let t(z) = 2*z**2. Calculate t(v(g)).
18*g**4
Let i(s) = 2*s**2. Let z(f) be the third derivative of -7/60*f**5 - 6*f**2 + 0*f + 0 + 0*f**4 + 0*f**3. Give i(z(b)).
98*b**4
Let z(c) = 45*c**2. Let y(w) = -w**2 - w. Let m(n) = 10*n**2 + 9*n. Let t(i) = -2*m(i) - 18*y(i). What is z(t(a))?
180*a**4
Let l(g) = 2*g. Let c(n) = 63*n + 27. Let v(y) = 9*y + 4. Let o(t) = 4*c(t) - 27*v(t). What is o(l(x))?
18*x
Let b(f) = -f. Let x(r) = -1143*r. Give b(x(y)).
1143*y
Let v(h) = -2*h. Let p(j) = -16*j. Suppose 2*t + 5*a - 26 = 0, 5*t + 3*a - 14 = 32. Let q(u) = 24*u. Let r(n) = t*p(n) + 5*q(n). Calculate v(r(w)).
16*w
Suppose 0*j - 5*j = -10. Let i(x) = -x**2 - x**j + 4*x**2 - x**2. Let n(d) = d**2. Give i(n(q)).
q**4
Let n(t) = -2*t - 5. Let s(r) = -2*r - 6. Let o(b) = 6*n(b) - 5*s(b). Let q(k) = -2*k. Give o(q(d)).
4*d
Let q(g) = -g. Let z = 758 - 508. Suppose u = -4*u + z. Let p(n) = -u - 3*n**2 + 50. What is q(p(v))?
3*v**2
Let f(n) = -10*n**2 + 2*n. Let t(o) = -3*o**2. Determine t(f(c)).
-300*c**4 + 120*c**3 - 12*c**2
Let n(l) = 2*l**2 + 3*l**2 - 2*l**2 - 4*l**2. Let m(w) = 11*w**2. Calculate m(n(k)).
11*k**4
Let z(n) = -n - 4. Let a be z(5). Let u = a + 15. Let b(q) = -6 + u + 3*q. Let c(o) = -3*o. Give b(c(r)).
-9*r
Let k(u) = 2*u. Let s(p) = 7*p**2 + 1. Calculate k(s(n)).
14*n**2 + 2
Let x(u) be the first derivative of 5*u**2/2 + 12. Let f(m) = 5*m**2. Determine x(f(o)).
25*o**2
Let b(t) = 3*t**2. Let i(s) = 99*s - 1. What is b(i(n))?
29403*n**2 - 594*n + 3
Let n(l) = 10*l + 2. Let g(i) = i**2. Calculate n(g(w)).
10*w**2 + 2
Let b(g) = -2*g. Let u(s) = -73*s + 134*s - 68*s. Give u(b(f)).
14*f
Let a(x) = x. Let z(y) = -2*y + 44. Give a(z(k)).
-2*k + 44
Let m(f) be the third derivative of -f**4/24 - 32*f**2. Let o(i) = 5*i. What is o(m(r))?
-5*r
Let v(x) = 9*x. Let g(d) be the first derivative of -d**2/2 - 14. Give g(v(u)).
-9*u
Let n = 11 - 8. Let q(w) = -n*w + 9*w - 3*w. Let g(m) = 2*m. Calculate g(q(b)).
6*b
Let c(j) be the third derivative of 1/30*j**5 + 0 - 2*j**2 + 0*j**3 + 0*j**4 + 0*j. Let l(z) = z**2. Give c(l(f)).
2*f**4
Let a(b) = 15*b**2. Let o(c) = 6*c + 4. Let g(m) = -7*m - 5. Let r(z) = 4*g(z) + 5*o(z). What is r(a(p))?
30*p**2
Let r(n) = -n**2 + 9*n. Let y(j) = -5*j. Give y(r(p)).
5*p**2 - 45*p
Let p(j) = 10*j - 7. Let a(z) = -3*z + 2. Let g(x) = -14*a(x) - 4*p(x). Let v(s) = 4*s**2. What is v(g(y))?
16*y**2
Let v(x) = x. Let a be 8/(-40) + (-16)/(-5). Let j(k) = -3*k - k + a*k. Determine v(j(r)).
-r
Let c(q) = -16*q. Let z(f) = 173*f**2. Give c(z(w)).
-2768*w**2
Let m(c) = 2*c**2. Let y(t) = -t**2 - 10*t. Determine m(y(s)).
2*s**4 + 40*s**3 + 200*s**2
Suppose c = -c. Let p(s) = c - s - s + 0. Let a(l) = -2*l. Give p(a(j)).
4*j
Let f(u) = -u**3 + 5*u**2 - u + 7. Let j be f(5). Suppose -t = -j - 0. Let v(a) = -2*a + 6*a - t*a. Let r(b) = 2*b. Give r(v(l)).
4*l
Let l(t) = -6*t - 8. Let p(b) be the third derivative of 0 + 1/6*b**4 - 3*b**2 + 0*b + 5/6*b**3. Let c(h) = 5*l(h) + 8*p(h). Let w(o) = -o**2. Give c(w(x)).
-2*x**2
Let z(h) = -2*h**2. Let d(j) = -j + 12. Calculate d(z(c)).
2*c**2 + 12
Let r(y) = -2*y**2. Let q(g) = 218*g**2. Determine q(r(i)).
872*i**4
Let f(g) be the first derivative of 11*g**3/3 + 27. Let u(d) = 2*d. Determine u(f(s)).
22*s**2
Let j(r) = 3*r + 7. Let n(o) = -3*o - 6. Let s(f) = 6*j(f) + 7*n(f). Let c(y) be the third derivative of y**4/12 + y**2. Give c(s(g)).
-6*g
Let p = -2 - -3. Let t(y) = -2*y - 5 + y + 5. Let w(f) = 2*f**2 + f. Let c(j) = p*w(j) + t(j). Let z(i) = -2*i**2. What is c(z(r))?
8*r**4
Let b(v) = 2*v**2 - 2*v + 2*v. Let d(x) be the third derivative of 1/60*x**5 + 0 + x**2 + 0*x**4 + 0*x + 0*x**3. Give d(b(y)).
4*y**4
Let i(b) = -2*b. Let c(m) = 9*m - 17*m + 9*m. Calculate i(c(n)).
-2*n
Let d(n) = 2*n**2 + 643 - 643. Let u(m) = m**2. Calculate d(u(g)).
2*g**4
Let g(r) = 15*r**2. Let q(t) = 406*t**2. Calculate q(g(f)).
91350*f**4
Let b(q) = 6*q**2. Let p(z) be the third derivative of z**4/24 + 20*z**2. Calculate b(p(m)).
6*m**2
Let w(c) = 2*c**2. Let v = -8 + 6. Let i be 1/(v - (-10)/4). Let r(d) = 0*d**2 + 0*d**i + d**2. What is w(r(s))?
2*s**4
Let q(z) = -3*z**2 - 4. Let d(r) = -2*r**2 - 3. Let o(g) = -4*d(g) + 3*q(g). Let n(i) = 7*i - 16*i + 9*i - i**2. Determine n(o(p)).
-p**4
Let r(q) = 203*q - 2. Let t(h) = -3*h**2. What is r(t(a))?
-609*a**2 - 2
Let n(j) = -j + 43. Let g(s) = -7*s + 2. Determine n(g(p)).
7*p + 41
Let h(j) = -2*j + 7*j - 3*j. Let q(c) be the first derivative of -2*c**3/3 - 3. Give h(q(u)).
-4*u**2
Let p(y) = 3*y**2. Let z(i) = -585*i**2. What is z(p(r))?
-5265*r**4
Let o = 11 - 2. Let j(a) = -4*a**2 - 7*a**2 + o*a**2. Let w(g) = 5*g. Determine w(j(t)).
-10*t**2
Let m(r) = 16*r - 31*r + 16*r. Let a(h) = -50*h**2. Determine m(a(q)).
-50*q**2
Let s(b) = -3*b**2 + 2*b. Let x(w) = -35*w**2 + 25*w. Let n(v) = 25*s(v) - 2*x(v). Let t(j) = -2*j**2. Give t(n(c)).
-50*c**4
Let i(n) = -2*n. Let b(m) be the second derivative of -m**5/30 - 5*m**2 + m. Let d(h) be the first derivative of b(h). Calculate d(i(q)).
-8*q**2
Let o(x) = 57*x. Let b(f) = 7*f. Determine b(o(w)).
399*w
Let f(w) = 19*w + 26*w + 17*w - 46*w. Let t(l) = -2*l. Determine f(t
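Compositions like those above can be checked mechanically. A minimal sketch using sympy, applied to one of the earlier problems (z(u) = 7*u - 27*u + 8*u composed with q(x) = 2*x**2):

```python
from sympy import symbols, expand

s = symbols('s')
z = lambda u: 7*u - 27*u + 8*u      # simplifies to -12*u
q = lambda x: 2*x**2

print(expand(q(z(s))))              # -> 288*s**2
```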
|
{
"pile_set_name": "DM Mathematics"
}
|
// Load the page context from webpack and re-invoke the callback on
// hot-module replacement so route components stay fresh in development.
exports.loadContext = (callback) => {
  const context = require.context('./pages', true);
  if (module.hot) {
    // When any module under ./pages changes, rebuild the context and
    // hand it back to the caller.
    module.hot.accept(context.id, () =>
      callback(require.context('./pages', true))
    );
  }
  return callback(context);
};

// On every client-side route change: record a Google Analytics pageview
// (if analytics.js is present) and reset the scroll position of the
// Material Design Lite content pane.
exports.onRouteChange = (state) => {
  if (window.ga) {
    window.ga('send', 'pageview', state.pathname);
  }
  const content = document.querySelector('.mdl-layout__content');
  if (content) {
    content.scrollTop = 0;
  }
};
|
{
"pile_set_name": "Github"
}
|
Long-term prognosis of infratentorial transient ischemic attacks and minor strokes.
This study was performed to gather information about long-term prognosis after infratentorial transient ischemic attacks and minor strokes and about the factors influencing it. We included 226 patients with transient ischemia and 169 patients with a minor stroke of the brain stem/cerebellum consecutively admitted to a neurological department. Medical records and the findings of computed tomography, Doppler ultrasonography, and angiography were evaluated retrospectively. Follow-up information was gathered from the patients and their physicians by questionnaires. Complete follow-up information was available for 381 patients. During a mean follow-up of 3.9 years, 15.7% of the 381 patients suffered a stroke and 6.8% a myocardial infarction; 15% died. Kaplan-Meier estimates revealed a cumulative stroke rate of 5.1% within the first year and a risk of stroke, myocardial infarction, or death of any cause of 9.8%. In a proportional hazards model, the time-dependent risk of stroke was significantly increased by increasing age (p = 0.018), minor stroke (p = 0.0005), hypertension (p = 0.022), previous stroke (p = 0.0006), and carotid artery occlusive disease (p = 0.0065). The probability of stroke, myocardial infarction, or death was influenced by age (p = 0.0001), minor stroke (p = 0.006), diabetes (p = 0.015), previous stroke (p = 0.002), infarct on a computed tomogram (p = 0.041), and carotid artery disease (p = 0.032). Long-term prognosis after brain stem/cerebellar transient ischemic attacks and minor strokes is significantly influenced by age, diabetes, hypertension, previous stroke, and concomitant carotid artery disease. Patients with transient ischemic attacks have a better prognosis than those with minor stroke.
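The survival analyses described above can be reproduced in outline with standard tooling. Below is a minimal sketch using the lifelines package on a hypothetical follow-up table; the column names and values are purely illustrative, not data from the study.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical follow-up records: time to event (years), stroke indicator,
# and two covariates of the kind reported above.
df = pd.DataFrame({
    "time_years":   [0.8, 3.9, 2.1, 5.0, 1.4, 4.2],
    "stroke":       [1, 0, 1, 0, 0, 1],        # 1 = stroke during follow-up
    "age":          [71, 58, 66, 49, 62, 74],
    "minor_stroke": [1, 0, 1, 0, 1, 1],        # vs. transient ischemic attack
})

# Kaplan-Meier estimate of the cumulative stroke rate
kmf = KaplanMeierFitter()
kmf.fit(df["time_years"], event_observed=df["stroke"])
print(kmf.cumulative_density_)

# Proportional hazards model for the time-dependent risk of stroke
cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="stroke")
cph.print_summary()
```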
|
{
"pile_set_name": "PubMed Abstracts"
}
|
What will 2018 hurricane season bring to cruising: canceled trips or fair-weather sailing?
By Rosemary McClure
May 29, 2018 | 6:30 AM
Evelyn Padilla and Luis Calzada ride back to their hotel in Hollywood Beach, Fla., as rain from subtropical storm Alberto hits the area on May 29. (Mike Stocker / Tribune News Service)
The 2018 hurricane season officially kicks off Friday, but subtropical storm Alberto on Monday brought heavy rain to the Florida Panhandle and the Gulf Coast, with more expected across the southeastern U.S. this week.
Is that an indicator the upcoming hurricane season might mirror last year's nasty season?
Nearly half a million cruisers were affected last year by hurricanes Harvey, Irma, Jose and Maria. It's enough to make any traveler a bit nervous.
Should you think twice about taking that Caribbean vacation you were planning?
Experts say Alberto's early arrival probably means nothing at all. It's still too early in the season to tell.
"You have to wait until August and September for the heart of hurricane season and the greatest threat for major hurricanes," said Dan Kottlowski, AccuWeather hurricane expert. But there could be "another storm or two that forms June into July."
The experts at the National Oceanic and Atmospheric Administration, or NOAA, issued their annual hurricane season forecast Thursday, only a day before Alberto began closing in on Florida. It basically said the upcoming year will have a normal to slightly-above-normal risk of storms.
"NOAA's forecasters predict a 70% likelihood of 10 to 16 named storms (winds of 39 mph or higher), of which 5 to 9 could become hurricanes (winds of 74 mph or higher), including 1 to 4 major hurricanes," the report said.
An average season in the Atlantic Basin, which officially runs from Friday until Nov. 30, produces 12 named storms, six hurricanes and three major hurricanes. A Category 3 hurricane or stronger is classified as a major hurricane.
By Monday, Alberto, the first named storm of the season, brought heavy rainfall to southern Florida and western Cuba, according to NOAA reports. The center of the storm was expected to move over Alabama and into the Tennessee Valley on Tuesday, and into the Ohio Valley and Great Lakes region on Wednesday and Thursday.
“Last year’s hurricane season brought an enormous amount of devastation to many people and places," said Colleen McDaniel, a senior editor with Cruise Critic. "Many islands in the Caribbean saw significant damage, which also directly impacted tourism dollars that are so important to the region.
"While cruise ships have the benefit of being able to adjust sailings to avoid storms and keep guests out of harm’s way, we did see cruises that were canceled entirely, sailings that were shortened and itineraries that were altered to avoid ports of call in the storm’s path," she said.
Most of the ports that received significant damage have reopened and "are eager to welcome visitors now," she said.
Cruise Critic advises travelers to:
--Purchase travel insurance early; companies will only insure your trip prior to a storm being forecast.
--Consider booking airfare through the cruise line, which might help you adjust your travel plans if a trip is canceled, shortened or extended because of weather.
--Add a few days' buffer to your travel dates if you have important dates on your calendar (such as a friend's wedding, the first day of school, etc.).
|
{
"pile_set_name": "Pile-CC"
}
|
Thermally induced recrystallization of MAPbI3 perovskite under methylamine atmosphere: an approach to fabricating large uniform crystalline grains.
A liquid-to-solid phase transition of methylammonium lead triiodide (MAPbI3) under a methylamine (MA) atmosphere at elevated temperatures was discovered and used to form high-quality, uniform thin films containing large, low-defect crystal grains tens of microns in size.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Food For Thought Friday: Body Image
Something I have really come to realise lately is that we as women are super hard on ourselves.
If you are anything like me, you can easily see the beauty in other women but struggle to sometimes see it in yourself.
I read some blog posts the other week that were follow ups from the hashtag trend on Twitter #MySwimsuitStyle. For those who haven’t heard of it, the gist of it was encouraging women to post photos of themselves wearing swimsuits showing off their own personal style. And honestly? They’re beautiful. Women of all shapes and sizes. I am envious of their confidence – confidence evokes beauty and they obviously feel it. Or they are brave enough to fake it. Either way, I’m envious and willing to bet that there are many women out there in the very same boat as me.
Isn’t it funny how we can admire it and envy it but we still wouldn’t be caught dead doing it because while we believe in everyone else, we are so critical of ourselves?
I’ve always been pretty self conscious of my body and if I could go back in time to my teenage years and early twenties I would slap myself around the head and give myself a stern talking to.
How many of you echoed that thought looking back? I am sure some of you are nodding along with me.
This was me in Greece 10 years ago and I still wouldn’t wear a bikini. I was totally a cover up with boardies and singlet kinda gal. Slap me. Such a waste.
Motherhood and age have changed my figure and I have a few body hang ups and confidence issues and yet my husband still says I am beautiful. My friends and family tend to agree with him. They don't see me the way I see myself when I look in the mirror, and isn't it sad that we can spend our lives with such hang ups? I wish I could see myself the way my husband sees me.
How much longer will I waste being critical of myself and praising the confidence and beauty of others?
It’s time to take action and not just sit here being envious of other women but doing something about gaining the confidence to do it myself.
I have decided I will wear a swimsuit this summer. I will change what I think I hate and I will try and salvage some body confidence.
I want to be able to jump through the waves with my little boys and enjoy myself, not worry about what people think of me (and honestly, no one is looking at me, it’s probably all in my head!). I want to go to the pools and swim and not have this hanging over my head.
I wrote a post about Why The Boys Starting Swimming Lessons has me Breaking Out in a Cold Sweat and I did go in with him once (he HATED every minute of it, my youngest son that is). He clawed and screamed and perhaps exposed a boob to the dads on the sidelines (apologies!), so I didn't have to endure that again – he wasn't having a bar of it, and we stuck to the paddling pool for the rest of the term, which doesn't require a swimsuit (much to my relief!). But I really want to just get in and enjoy myself and go down the hydroslide (hopefully my boobs stay where they should be by the time I get to the bottom as that was just a tad mortifying!).
I don’t want to waste my thirties repeating the same mistake I did in my twenties.
26 thoughts on “Food For Thought Friday: Body Image”
Well said- you've nailed how so many of us feel. I wrote about a similar thing recently- how I wish I could go back to the first time I thought I was too fat- as a skinny teen. We waste so much time feeling bad for no reason- I think seeing real women is key to turning this around- we need to get rid of airbrushed, unrealistic ideals and embrace the fact that we all come in different shapes and sizes. – Amy @ HandbagMafia
I hate my body half the time, but I'm also getting so old I'm invisible, so it's swings and roundabouts really. BUT I will say listen to Crooked Smile – great perspective on those of us not picture perfect (but we're worth the picture still) and as Dr Dan on Mindy Project says "You're a woman, and that's good. Look like a woman." – Lydia C. lee
Oh yes, I definitely have body hang ups! Before having my son I was very petite and then bam…my body completely changed. I have worn my swim suit on a few occasions, but with a kaftan over the top, and when I take it off, I feel so self-conscious. I have vowed to not worry about it for the sake of my son and having fun, it's just not that easy, is it? – Eva @ The Multitasking Mummy
You sound very similar to me Eva, I spent my teens and twenties petite and then after kids I remained somewhat slim but my tummy didn't! This makes for awkward dressing and self-consciousness galore. I look pregnant if I don't dress strategically (think floaty tops), which is why a swimsuit is such a nightmare for me – how do you disguise that in a swimsuit?! I want to not care but I just do.
Great post and great blog. I am in the fitness industry and I still have hang ups with my body. I am a big fan of Taryn Brumfitt from the Body Image Movement. She is doing some great work on this very topic. Thanks for your honesty and I will get into bathers with you this Summer. 🙂 – Neets
I think I probably had a few hang ups in my teens, but for some reason haven't had any since. I've always been pretty fit, and I think that helps, but I rocked a bikini through two pregnancies in my early 40's, and afterwards as well. There are a few photos that look pretty ordinary, but I just figured that's how I was at the time, so be it. I'm nearly 50 now and still feel great in bathers. I definitely wouldn't care what my shape was like now, as long as my body is strong and serves me well. We all worry about stuff that is a waste of our time…I know I do, but the whole body image thing must consume so much potential creativity, positive thought, relationships…everything. We need to let it go. Go out and buy a really cool, flattering pair of bathers as soon as they come into the shops!!! – Michelle@myslowlivingadventure
I totally agree Michelle, I hate that it bothers me so much that it prevents me from just going out and having fun at the beach with the kids! I bought a swimsuit on special for $10 that was the most flattering I could find, my husband likes it anyway! Funny how I tried on so many expensive ones and ended up with the cheapest. Totally trying to break this cycle of body hate.
I don’t love my body and I wear a bikini 🙂 It’s just what I am used to wearing as I always have done – I’d feel more self conscious at the pool or beach in a one piece or board shorts.Meryl @ Simple Family Home recently posted…Lessons From The Life-Changing Magic of Tidying Up
Life’s too short to have body hang ups, kids have made me get over my fear of hydro slides, love them now. I remember mum sunbathing at home in bikinis, we only have one body and one life got to make the most of it. Porirua hydro slide is fun not fast you and your boys will love it, dark slides on the other hand still getting over fear of darkness and speed. You should see taupo in Summer bikinis on lakefront, lots of tourists who don’t care and they shouldn’t.
Great article and something I can definitely relate to – but I have found it so much less since I've been living overseas in Germany. I think where we live and the social expectations placed upon you have a huge impact. Like you, pre-babies and living in NZ, which IMO is a very superficial culture, I would hide my figure away and would never wear a singlet (exposing my arms) or a skirt above my knees. Now I look back and, like you, I think "what a waste – I had an amazing figure". Now I am 5 years on, with 2 beautiful sons, and 20kgs heavier than pre-kiddies – but now I live in Germany and I expose more flesh and am more comfortable in my own skin than I've ever been. I want to lose weight but that's because I want to be healthier. The Germans are very non-superficial. Women often don't wear make up day-to-day and no one really cares what people look like. And yes, older women do not shave! Men walk past a hot woman in the street and don't even look sideways. Nudity is not a big deal – men and women frequent saunas naked and no one bats an eyelid; it certainly has no sexual connotations to be naked in front of one another. Consequently, I've found it a huge release – I am quite comfortable going to the supermarket without brushing my hair, no make up and wearing track pants. No one cares, no one takes any notice of what I look like and I certainly don't feel objectified or not good enough because I'm overweight.
Food for thought……
Sadly I have body hangups even though I really shouldn't. I don't normally swim much in summer (we are not much of an outdoor family) but we did swim daily when we headed to the US earlier this year and no-one there seemed to care about their bodies. So I cared less about mine as well. I hope that continues when summer rolls around here again… – Kirsty @ My Home Truths
|
{
"pile_set_name": "Pile-CC"
}
|
Vampire Couple Salt and Pepper Shakers
Give your dinner a vampire’s kiss of salt or pepper with these wickedly cute table shakers!
The shaker set is truly adorable. One shaker leans in to kiss the other on the neck. Both of the figures are vampires, which is extra sweet because usually vampires have to live forever all alone and that’s sad.
The couple will stay together on your table or countertop thanks to the power of magnetism. (Not, like, animal magnetism. There’s literally a magnet in the middle of the kissing vampire’s face. So they’re pretty much meant to stay posed as you see in the photo.)
Standing around 4 inches tall, it’s just a little something to bring a bit of darkly sweet romance to your home!
|
{
"pile_set_name": "Pile-CC"
}
|
Cellular and ionic basis for T-wave alternans under long-QT conditions.
T-wave alternans (TWA), an ECG phenomenon characterized by beat-to-beat alternation of the morphology, amplitude, and/or polarity of the T wave, is commonly observed in the acquired and congenital long-QT syndromes (LQTS). This study examines the cellular and ionic basis for TWA induced by rapid pacing under conditions mimicking the LQT3 form of the congenital LQTS in an arterially perfused canine left ventricular wedge preparation. Transmembrane action potentials from epicardial, M, and endocardial cells and 6 to 8 intramural unipolar electrograms were simultaneously recorded together with a transmural ECG and isometric tension development. In the presence of sea anemone toxin (ATX-II; 20 nmol/L), an increase in pacing rate (from a cycle length [CL] of 500 to 400 to 250 ms) produced a wide spectrum of T-wave and mechanical alternans. Acceleration to CLs of 400 to 300 ms produced mild to moderate TWA principally due to beat-to-beat alternation of repolarization of cells in the M region. Transmural dispersion of repolarization during alternans was exaggerated during alternate beats. Acceleration to CLs of 300 to 250 ms caused more pronounced beat-to-beat alternation of action potential duration (APD) of the M cell, resulting in a reversal of repolarization sequence across the ventricular wall, leading to alternation in the polarity of the T wave. The peak of the negative T waves coincided with repolarization of the M region, whereas the end of the negative T wave coincided with the repolarization of epicardium. In almost all cases, electrical alternans was concordant with mechanical alternans. Torsade de pointes occurred after an abrupt acceleration of CL, which was associated with marked TWA. Both ryanodine and low [Ca2+]o completely suppressed alternans of the T wave, APD, and contraction, suggesting a critical role for intracellular Ca2+ cycling in the maintenance of TWA. Our results suggest that TWA observed at rapid rates under long-QT conditions is largely the result of alternation of the M-cell APD, leading to exaggeration of transmural dispersion of repolarization during alternate beats, and thus the potential for development of torsade de pointes. Our data also suggest that unlike transient forms of TWA that damp out quickly and depend on electrical restitution factors, the steady-state electrical and mechanical alternans demonstrated in this study appears to be largely the result of beat-to-beat alternans of [Ca2+]i.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
TONIGHT: The Barons play the first of back-to-back games against the Charlotte Checkers (Carolina Hurricanes), who share the Western Conference’s eighth and final playoff spot with Rockford. OKC is 10th in the conference with 73 points, 1 back of Rochester and 2 behind Charlotte and Rockford. This is the 11th meeting of the year with the Checkers. OKC is 2-6-0-2 against Charlotte this season and has dropped all 4 games at the Cox Center. OKC enters tonight off a 5-4 shootout win on Wednesday against Hamilton. Charlotte fell 5-2 at San Antonio last night.
OFF THE TOP: The Barons have gone to a shootout in 3 straight games, defeating Hamilton on Wednesday after consecutive losses in Charlotte on Saturday and Sunday. It's the first time in club history 3 straight games have ended in shootouts. OKC improved to 2-9 in tiebreakers with Wednesday's win in a 7-round shootout. Both OKC wins have been at home by 5-4 finals and C.J. Stretch has scored the go-ahead and decisive goal in each of the shootout wins. March was the Barons' third consecutive winning month. OKC was 6-3-1-3 in March after going 7-2-0-1 in February and 6-5-0-0 in January. The Barons are 15-5-1-4 in the last 25 games dating to Jan. 30 and in that time established the two longest unbeaten streaks in club history: an 8-game string without a regulation or overtime loss from Jan. 30-Feb. 21 and a 7-game unbeaten stretch from Feb. 28-March 11. Fourteen of the last 15 Barons games have been decided by one goal (or 1 goal plus an empty-netter), with the exception being a 5-2 win over San Antonio March 11 in Oklahoma City. The Barons are 8-3-1-3 in their last 15. In one-goal decisions, OKC is 16-12-2-9. After tonight, the Barons have 6 games left, 2 on the road at Abbotsford and 4 at home, against Charlotte, Texas and Iowa (2).
GOAL A GAME: Matthew Ford has goals in 6 of the last 7 games and 22 for the season after opening the scoring with a power-play tally on Wednesday. Ford had a club-record 5-game goal streak snapped on Sunday in Charlotte.
CHASE CHIPS IN: Greg Chase, a seventh-round Edmonton Oilers draft pick in 2013, scored the tying goal on Wednesday with 4:25 remaining in his first professional game. The 19-year-old signed an ATO on Tuesday after completing his season with the Calgary Hitmen of the Western Hockey League. Chase was the co-leader on the Hitmen in scoring with 35 goals and 50 assists.
HUNT APPROACHES RECORD: Brad Hunt is tied for fourth in points among AHL defensemen with 47, on 11 goals and 36 assists. He already owns the club record for assists in a season by a defenseman (36), surpassing Justin Schultz’s 30 last season. The franchise record for points in a season by a defenseman is 48, set in 34 games by Schultz during his Eddie Shore Defenseman of the Year campaign last season.
ANSWERING THE BELL: Richard Bachman has started 23 of the past 25 Barons games, going 14-5-4 in that span. Bachman is third in the AHL in minutes (2709:26) while facing the most shots (1489) and making the most saves (1352).
ODDS AND ENDS: The Barons have added 8 players on Amateur Tryouts or reassignments from the college or junior ranks since last meeting the Checkers on Sunday. Two of the newcomers, Jujhar Khaira and Greg Chase, made their pro debuts on Wednesday, with Chase scoring the tying goal to force overtime. Steve Pinizzotto and Will Acton have been recalled by Edmonton since Sunday. Twins Kellen and Connor Jones signed ATOs yesterday. They just completed 4-year careers at Quinnipiac University, where they helped lead the Bobcats to the Frozen Four in 2013, finishing second to Yale.
|
{
"pile_set_name": "Pile-CC"
}
|
Indians vs. Red Sox: Moment of truth — C.C. readies for Game 5
CLEVELAND — With one big start, C.C. Sabathia can erase the memory of his brief but muddled postseason past. With one big start, Cleveland's ace can pitch the Indians into the World Series for the first time in 10 years. With one big start, the 27-year-old left-hander can set this city on its ear. The opportunity arrives tonight (8:21) for Sabathia, who, with the Indians owning a 3-1 advantage in the American League Championship Series, will start Game 5 against fellow Cy Young candidate, Boston's Josh Beckett. Sabathia struggled through his first two playoff starts, losing the only game the Indians have dropped to the Red Sox in the ALCS, but none of it will matter should he offer up a performance to remember this time around. "(Tonight) is going to be a phenomenal game," said Cleveland starter Paul Byrd, who pitched Boston to the brink Tuesday night with a five-inning, two-run effort good enough to earn his second win in two postseason starts. "You have two Cy Young candidates. Now Beckett has to come to our place. "The crowd here has been amazing. I think C.C. will feed off that. I think we'll see a brand new C.C. Sabathia." The Indians had better hope so. If the one that dominated hitters and pitched late into games during the regular season arrives, there's a strong possibility Cleveland will be making reservations for a World Series trip against the Colorado Rockies. If it's the one that hasn't been able to find the plate and didn't give his team a chance to win Game 1, this series could be headed back to Boston, where strange things in favor of the Red Sox tend to occur. The Indians are expecting the first. "He's very aware how he's pitched the last couple times," said third baseman Casey Blake. "I think he's very eager to get out there again and show up for us." "I think everybody expects him to be the C.C. we all know," said catcher Victor Martinez. "I expect a great outing from him." Though Sabathia has not admitted as much, the consensus is that his postseason problem has been mental, not physical fatigue from pitching well over his career-high innings count. He's been too pumped up, which has caused him to overthrow and miss his spots. Getting too emotional was something Sabathia appeared to have conquered as he solidified his standing as a true No. 1 starter this season, but it appears to have resurfaced this postseason and will be tested again on an even bigger stage tonight.
“Just stay calm,” said Sabathia, when asked how he would attempt to level the adrenaline. “I’ve been doing a pretty good job of being able to keep my emotions under control, staying even keel all year. “I’m looking forward to being my normal self (tonight).” With the series on his shoulders and a potent Boston in front of him, that will be a formidable task. “I know that he feels like he needs to do more, and hopefully he won’t feel like that (tonight),” said Cleveland manager Eric Wedge. “He doesn’t need to do more. All he needs to do is just go out there and be himself and pitch the way he’s capable of.” If that happens, odds are good the Indians will be in position to win the game and clinch the AL title. But nothing in baseball is guaranteed, not when your hitters have to face Beckett, a proven big-game pitcher. Cleveland will be up against Boston’s flamethrower for the second time in seven days after bowing to him in Game 1, a Red Sox win that featured Beckett outpitching Sabathia, emphatically. The Indians will need to bring a different approach against the right-hander than they’ve employed in three meetings this year, with Beckett allowing just five runs on 11 hits, while striking out 21 batters over 21 innings. Meanwhile, Beckett, who has been one of the majors’ best pressure pitchers since his World Series heroics for the Marlins in 2003, will be bringing the same one. “I don’t view (postseason starts) any differently than I would my fifth start of the season,” Beckett said. “You’ve got to execute pitches. You have to execute more pitches now, because everybody is locked in this time of year. “I’ll just go out and try to do what I’ve been doing all year.” As they were in the ALCS opener and despite owning a two-game advantage, the Indians will most likely be the underdog to Beckett and the Red Sox tonight. But if they get a big performance from Sabathia, the dogs just might have their day. “We may look like guys that don’t have a lot of superstars and big names, but it’s a real confident group,” Sabathia said. “I just feel like when we go out, if we play our type of baseball, we’ve got a good chance of winning.” Contact Chris Assenheimer at 329-7137 or [email protected].
|
{
"pile_set_name": "Pile-CC"
}
|
In the treatment of various cancers, and in particular prostate cancer, a process called brachytherapy has proved effective. In brachytherapy, small capsules containing radioactive material are implanted in or near to the tumour.
One known form of capsule or canister, commonly used to treat prostate cancer and referred to as a “seed”, is shown in FIG. 7. The capsule 100 comprises a silver rod 102, coated with a radioactive isotope of iodine such as I-125, inside a hollow titanium tube 104. The ends of the tube are welded closed. Resin balls coated with radioactive iodine can be used instead of the silver rod 102. The completed capsule has a width of approximately 1.0 mm and a length of approximately 4.5 mm. The capsules or seeds can be implanted into a patient individually; alternatively, the capsules can be inserted into medical stitching material or suture, which is then inserted into the prostate and left there.
The number of capsules implanted into each patient obviously varies in accordance with the regime of treatment required, but is commonly in the region of 50 to 100. The capsules are normally made by hand, with the welding process used to close the ends of the tube being carried out manually. It will be appreciated that making such a large number of capsules for each patient by hand takes considerable time and expense.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Incompatible libidos and stress are the second most common hindrances.
1 in 5 New Zealanders have lied about the number of people they’ve had sex with.
Over a quarter of New Zealanders have cheated on a partner, one in seven have had a long-term affair, and fatigue, incompatible libidos and stress are New Zealand’s biggest deterrents for sex, according to national sex survey results.
More than 1,000 New Zealanders were asked what obstacles they face in their sex lives in the recent online Adulttoymegastore Kiwi Sex Survey conducted by trusted research company Colmar Brunton. The survey results match the demographic profile of New Zealanders for age, gender and region.
Survey respondents were asked what they believe hinders their sex lives. Options included stress, the proximity of children, time restraints, body confidence and more. Respondents were able to select multiple answers, as well as provide “other” reasons not mentioned.
According to the results, fatigue is the most common hindrance to New Zealanders' sex lives, as almost one in every two New Zealanders (44 percent) said the feeling of tiredness or exhaustion, or a need to rest because of lack of energy or strength, is their biggest reason for not feeling up for sex.
New Zealand's other most common obstacles to sex include having a libido incompatible with a partner's, and feeling stressed, which affects one-third of Kiwis.
Respondents who listed stress as having a notable negative influence on their sex lives were more likely to be aged between 40 and 59 years old, and living in Auckland.
New Zealand-born sexologist and author, Dr. Shelley Hiestand, said while it’s common for people to have obstacles in their sex lives, there are always ways to alleviate them.
“Fatigue, incompatible libidos and stress would definitely be the top three reasons I hear from clients, however I would say that incompatible libidos is even more of an issue and can be addressed if the couple is willing,” Dr. Shelley explained.
“You know you have incompatible sexual libidos if one person wants more sex and it is not reciprocal. Often a partner just accepts the incompatibility, and ends up looking elsewhere to have their needs met, such as with a sex worker or having an affair.”
New Zealand Cheating Statistics
According to the Adulttoymegastore Kiwi Sex Survey results, a quarter of New Zealanders have cheated on a partner, and one in seven New Zealanders have had a long-term affair.
Of all the regions in New Zealand, survey respondents living in Dunedin/Otago were more likely to have cheated on a partner, while respondents living in Whangarei/Northland were the least likely of the regions to have cheated.
Those who have had a long term affair at some stage of their lives are more likely to be aged 60 or over, unhappy with their sex life, and dissatisfied with the frequency they have sex. The most prominent region was Palmerston North, where 26 percent of survey respondents admitted they have had a long-term affair.
Dr. Shelley said a leading cause of infidelity is unmet sexual needs.
“If one partner has needs and desires which are no longer being met, they have to find other ways to meet them,” Dr. Shelley said.
“For those willing to make it work, being creative with the use of sex toys, bio-identical hormonal therapy, or opening up the relationship to other people through the swing lifestyle, polyamory or open relationships - which requires honesty of communication – can all be viable options.”
Other factors that hinder the sex lives of New Zealanders include time restraints, which affects 28 percent of New Zealanders, and the proximity of dependents and/or children, which affects one in four people.
Dr. Shelley said while life can get busy, it’s important for couples to schedule time for intimacy.
“Scheduling time to spend with your loved one is important. Have a hot bath together, give each other massages, or use sex toys to explore pleasure zones and spice things up,” she explained.
Health issues and disabilities affect the sex lives of one in five New Zealanders, and those affected are more likely to be aged 60 years and older. Dr. Shelley recommends those affected try using adult toys.
“Many of my older clients, or those with disabilities or erectile issues, have found the use of vibrators and other sex toys can help themselves and their partners to have greater sexual satisfaction.”
Dr. Shelley said having hindrances to a person’s sex life isn’t necessarily a problem in every relationship, but it could turn into an issue if not addressed.
“Sexual intimacy is not necessarily the ‘be all, end all’ for every relationship, but if you do have a strong sexual component of your relationship and then it dies or dwindles, it can become an issue.”
“My personal opinion, which is also reflected in the findings of my research that I have done for my doctorate and in my work, is that sex and intimacy is very important physically, emotionally, mentally and spiritually for any person, and also in a relationship. It’s important that people actively try to resolve the issues they’re facing in their sex lives.”
One in six New Zealanders (16 percent) believe they don’t have any notable hindrances to their sex lives.
Other reported hindrances:
Low self-esteem 13% - more likely to be female, aged 18-39, and single
Other 9% (Other answers included: Old age, long distance relationship, disabilities, different work schedules, being single, erectile dysfunction or premature ejaculation, and inability to reach orgasm.)
None of these 16%
Infidelity statistics for New Zealand
Over 1 in 4 Kiwis have cheated on a partner (27%)
Most common in Dunedin/Otago (40% of respondents had)
Least common in Whangarei/Northland (17% of respondents had)
Over 1 in 7 New Zealanders have had a long-term affair (15%)
They are more likely to be aged 60 or over, unhappy with their sex life, and dissatisfied with the frequency they have sex.
Most prominent region: Palmerston North, where 26% of respondents had had a long term affair.
1 in 5 New Zealanders have lied about the number of people they’ve had sex with (20%)
35% of New Zealanders have faked an orgasm.
They are more likely to be female, heterosexual, aged 40-49, and in a relationship.
Survey respondents from Palmerston North were more likely to state they’ve faked an orgasm, while Tasman/Nelson/Marlborough respondents were least likely to have done it.
About Dr. Shelley, Sexologist
Dr. Shelley was originally born in New Zealand and now lives in Las Vegas. She has a PhD in Philosophy specializing in Human Sexuality. The focus of her dissertation was the “Anti-Aging Benefits of Sex” and her specific research project focused on the Health Benefits of Open Relationships.
How the research was conducted:
The survey was conducted online and hosted by Colmar Brunton.
A total of 1,008 people were recruited through the Colmar Brunton online research panel, to match the demographic profile of New Zealanders for age, gender and region. The maximum margin of error on a sample size of n=1,008 is ±3.1%.
Only regions with 30+ respondents were included in the regional data segmentation. Regional data is indicative only.
The survey was in field from 4th – 17th September 2017.
The average interview length was 12 minutes.
Permission to use this content
The results and infographics of the Adulttoymegastore Kiwi Sex Survey can be used by anyone interested in the material, however, in return we request that you properly cite the original source by linking to the Adulttoymegastore Kiwi Sex Survey Infographic so users can access the original full survey results.
We would also appreciate the crediting of our brand name “Adulttoymegastore” by linking to our home page www.adulttoymegastore.co.nz as the author of the original sex survey results, so our team is properly attributed for sourcing and sharing this content.
Janelle is a lover of most things, but exploring new places, enjoying good food, cold beer and lifting weights sit at the top of her list. She enjoys writing about all to do with sex toys, making her the content extraordinaire at Adulttoymegastore!
|
{
"pile_set_name": "Pile-CC"
}
|
Introduction of Sprite by Coca-Cola
The post-World War II years saw diversification in the packaging of Coca-Cola and also in the development or acquisition of new products. Sprite, the citrus-flavored soft drink, was introduced in 1961. The Houston Coca-Cola Company sold fruit-flavored drinks under the name 'Sprite' during the 1950s, and became the first bottler to market Sprite in 1961. The name 'Sprite' came from an earlier advertisement for Coca-Cola. In the 1940s, Coke had used a little man with a big smile in its ads. He had white hair and wore a bottle cap for a hat. Eventually he became known as the Sprite Boy, after the elf-like creatures called sprites that feature in many folk tales. Sprite got a new entry to its product line with the rollout of the tropical-flavored Sprite Remix in the spring of 2003. A 2003 commercial for Sprite featured urban street dancers, skateboarders and bicyclists performing tricks to a hip-hop beat. The youths, dressed in baggy urban clothing, rap about their lives, and the freedom that comes with performing and drinking Sprite.
|
{
"pile_set_name": "OpenWebText2"
}
|
Ravenswood Premiere Quotes: "This Wind Has Hands" and More!
Credit: ABC Family
One of the things we've come to expect from Pretty Little Liars is snappy dialogue, so it comes as no surprise that its spin-off, Ravenswood, also brings the banter. Hey, even the world's creepiest town needs some moments of levity to lighten the mood.
Between the murderous ghosts and the repeating curse, the premiere episode found time for all sorts of quotable moments. We've picked our favorite quips for your enjoyment.
10. Raymond Collins: "I know every tombstone out there."
Sure, that's a normal thing to say.
9. Ms. Grunwald: "College girls can be very unstructured."
That's one word for it...
8. Miranda "This wind has hands!"
Welcome to Ravenswood, darling.
7. Remy: "When he coughs there's dust. It's like beating a rug."
We're pretty sure that's just true of the whole town.
6. Caleb, on Ms. Grunwald: "Every time she looks at me I feel like someone's taking a potato peeler to my skin."
Imagine how the sorority girls felt!
5. Caleb, to Miranda: "You get all your charm from your father's side of the family."
|
{
"pile_set_name": "Pile-CC"
}
|
Rui-Ming Xu
Rui-Ming Xu (simplified Chinese: 许瑞明) is a Chinese physicist, biophysicist and molecular biologist. He is a leading bioresearcher in China.
Biography
Early years
Xu entered the Department of Physics at Zhejiang University in Hangzhou, China in 1980, and obtained his B.Sc. in physics in 1984. In 1984, Xu joined the China-U.S. Physics Examination and Application (CUSPEA) and was qualified and awarded a fellowship, so that he could pursue his further study in physics in the United States.
USA
Xu entered Brandeis University in MA, and obtained his M.A. in 1985 and Ph.D. (advisors were L.F. Abbott and M.T. Grisaru) in 1990 both in physics.
From 1989 to 1991, Xu was a postdoctoral research fellow at the University of Texas at Austin, and worked with Steven Weinberg. From 1991 to 1993, Xu was a postdoctoral associate at SUNY Stony Brook, and his mentor was Chen Ning Yang.
In 1993, Xu visited the Cold Spring Harbor Laboratory and started working there. Xu became the institute's assistant professor in 1996, and was promoted to associate professor and then professor. From 1998, Xu was also a faculty member of the Genetics and Biophysics programs at Stony Brook University. Xu was also an adjunct professor at NYU Langone Medical Center.
Beijing
In July 2006, the Ministry of Science and Technology of P.R. China planned to build a national key laboratory of protein science, named the National Laboratory of Protein Science (NLPS). NLPS is one of the first national key laboratories of life science in China, and the largest national laboratory of protein science in China.
After staying in the USA for 25 years and serving at the Cold Spring Harbor Laboratory for 13 years, Xu went back to China in 2008 and became the Director of the lab, which was breaking news both inside and outside China. The journal Science also reported this, further introducing his new lab and career in Beijing.
Currently Xu is the Director of NLPS and a researcher in the Institute of Biophysics (IBP) of the Chinese Academy of Sciences (CAS) in Beijing.
Personal life
Xu has two children, Amelia and Christopher, who live in Jericho, New York. Amelia is a competitive figure skater and Christopher is a concert pianist.
References
External links
Science Magazine China: 重大蛋白质研究新设施让生物学家如虎添翼
CHINA: Biologists Muscle Up With Major New Protein Facilities
Xu group at IBP of CAS
Xu's profile at Baike.bbioo.com
Scientific Commons: Rui-Ming Xu
Rui-Ming Xu - research profile on BiomedExperts
Category:21st-century American physicists
Category:21st-century American biologists
Category:American people of Chinese descent
Category:Zhejiang University alumni
Category:Brandeis University alumni
Category:Stony Brook University faculty
Category:New York University faculty
Category:Living people
Category:Year of birth missing (living people)
|
{
"pile_set_name": "Wikipedia (en)"
}
|
Courses and Subjects
Downham Market Academy Sixth Form offers the opportunity to study A Levels or vocational courses.
A Levels
A Levels are one of the main routes to higher education and university, although they are also useful if you want to go straight into employment. An A Level is a level 3 qualification consisting of four modules studied over two years.
Vocational Courses
Vocational courses focus on a specific subject area, combining practical and academic study. They are designed to give you the knowledge and skills relevant to a variety of jobs or may offer specialist qualifications for a particular industry sector. They can also be a route to higher education.
BTEC and Cambridge Technicals Level 3 qualifications have the same academic level as A levels and are assessed through coursework.
A BTEC Subsidiary Diploma is the equivalent of one A Level, a BTEC Diploma equates to two A Levels and a BTEC Extended Diploma is equal to three A Levels.
Entry Requirements
A Levels: You will need at least six GCSEs at grade 5 or above (including maths and English*). Three of these should be at grade 6 or above.
Level 3 BTEC Diplomas: You will need five GCSEs at grade 5 or above (including maths and English*).
In addition some subjects at A level require a higher GCSE or specialist skills or knowledge to a certain level (e.g. music requires a grade 5 or better qualification in the instrument to be studied); parents/carers/students should refer to the specific subject course descriptions on the website for details.
*GCSE grade 4 in maths and English is a requirement for entry to university and higher study, and it is important that students achieve these. In order to ensure students are able to access the widest possible range of opportunities upon leaving, applications from students without a grade 4 in maths and English may be considered in exceptional circumstances, depending on overall results.
The following table lists the choices on offer and the additional specific entry requirements for each one.
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
Does the /3GB switch increase the user address space per process or total processes?
It was my understanding that it's per process, not the total processes. But according to Large memory support is available in Windows Server 2003 and in Windows 2000 (KB283037):
Typically, a process running under Windows 2000 or Windows Server 2003 can access up to 2 GB of memory address space (assuming the /3GB switch was not used) with some of the memory being physical memory and some being virtual memory. The more programs (and, therefore, more processes) that run, the more memory you commit up to the full 2 GB of address space.
That to me says the more programs you run, the more chance you have of hitting the 2GB address space limit, i.e. Program A uses 500MB, Program B uses 1GB, so you've only got 500MB of address space left for the rest of your programs.
However, an MSDN article http://msdn.microsoft.com/en-us/library/ms189334.aspx refers to this as Process Address Space and to me implies that each application gets its own address space, be it 2GB or 3GB, depending on what switch is being used in the boot.ini.
So is it per process or total process? And is the knowledge base article wrong (or badly worded)?
(Please note I'm talking about 32-bit systems only)
A:
It's virtual address space per process, as per the MSDN article, and the superb series of articles on this written by Raymond Chen and archived at his blog.
Here is his index page for this series of articles - very well worth a read if you're dealing with large memory support as a senior system admin or a developer.
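For reference, the switch is applied per boot entry in boot.ini on 32-bit Windows. A minimal sketch of such an entry (the disk/partition values are illustrative and machine-specific):

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB

With /3GB set, the user/kernel split moves from 2GB/2GB to 3GB/1GB, and each process whose executable is linked with /LARGEADDRESSAWARE can then use up to 3GB of private virtual address space.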
|
{
"pile_set_name": "StackExchange"
}
|
Rebound thymic hyperplasia after chemotherapy in a patient treated for pulmonary metastases.
A 38-year-old patient presented with an anterior mediastinal mass after chemotherapeutic and surgical treatment for lung metastases from a malignant histiocytoma. Because of the risk of tumour recurrence, the thymic mass was resected. Thymic hyperplasia was found on pathological examination. In this case thymic hyperplasia is a rebound phenomenon after chemotherapy. The thymus appears to atrophy during the administration of chemotherapy and regrow afterwards. Surgical resection provides the definitive diagnosis and treatment.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
In what is turning out to be a massive controversy in the state of Andhra Pradesh, the Andhra Pradesh Road Transport Corporation reportedly issued tickets to travellers from Tirupati to the temple town of Tirumala with advertisements of Haj and Jerusalem pilgrimage on the back. Travellers seem to have been agitated by this development and they reportedly raised the issue before the regional manager who stated that a bundle with printed material about non-Hindu pilgrimage had wrongly come to Tirupati. It must be kept in mind that Tirupati is a major Hindu shrine.
As per NDTV, the state-owned road transport corporation’s executive director for operations confirmed that this has been brought to their notice and they are probing into this incident. The officer said, “It is an advertisement of the government issued by the minorities department.” Andhra Pradesh endowments minister Vellampalli Srinivas, also spoke about this development and said that the ticket bundles were published during the days of the TDP government, before the elections. As per him, the bundles were meant for Nellore and Kadapa. The minister said, “No propaganda is allowed in a holy place like Tirumala. We are taking a very serious view of this and will punish the guilty.”
However, there does not seem to be much clarity on how these tickets ended up reaching Tirumala-Tirupati which will now be probed. BJP MLA Raja Singh from Hyderabad has put out videos raising this issue. It must be noted that the Andhra Pradesh CM had faced intense criticism recently when he refused to light lamp at a US event. The Andhra Pradesh BJP had tweeted, “Jagan Mohan Reddy refused to lamp light before inaugurating a program in the US. He just fooled AP Hindus for votes, by visiting temples. He was a Hindu for votes, like Rahul Gandhi.” It had added, “Script was well written by Prashant Kishor. Bengal can learn now.”
In fact, Jaganmohan Reddy's tenure has been riddled with controversies over minority appeasement till now. The allocation of a huge amount of public money for Imams and Christian Pastors in the state government's budget had also raised eyebrows. "The honorarium to Imams is proposed to be enhanced to Rs 10,000 per month and of Mouzzans to Rs. 5,000 per month. Similarly, it is proposed to provide Pastors with an honorarium of Rs. 5,000 per month," said the finance minister of Andhra Pradesh in the budget speech.
The Jagan Reddy government also came under fire for extravagant state expenditure on Jaganmohan Reddy and his family's private visit to Jerusalem in Israel. BJP leader Lanka Dinakar had come down heavily on the Andhra Pradesh government for releasing funds to the tune of Rs 22.52 lakh on the chief minister's security during the four-day visit. His visit was seen as a thanksgiving visit after YSRCP's unprecedented success at the Lok Sabha and Assembly polls in Andhra Pradesh held earlier this year.
|
{
"pile_set_name": "OpenWebText2"
}
|
Prognosis for multiple myeloma
The prognosis for multiple myeloma depends on the stage at the time of diagnosis, plasma cell cytogenetic data (the study of structural abnormalities in, or irregular numbers of, the chromosomes of the malignant plasma cells) and the disease's response to treatment. In any case, just because one treatment does not work, it does not necessarily mean that another treatment will not work perfectly and resolve the disease.
Multiple myeloma staging
Staging indicates the spread of myeloma and the degree of possible complications, which helps determine how the disease will progress in each patient.
The most traditional classification is the Durie-Salmon system which categorises myelomas into three stages.
Stage I. Includes patients with a haemoglobin level of at least 10 g/dL or normal values, a normal bone survey and a low level of monoclonal protein.
Stage II. Covers patients who do not meet the criteria for either stage I or stage III multiple myeloma; i.e., they may have bone lesions but they are not advanced.
Each of the three stages is subdivided into A or B, depending on whether or not kidney function is affected.
However, the international staging system (ISS) is currently the most used prognostic system. It only considers the blood levels of just two substances: beta-2 microglobulin (produced by malignant myeloma cells) and albumin (a protein normally found in the blood). Patients with normal albumin levels and low beta-2 microglobulin (B2m) production are classified as stage I; if B2m levels are greater than 5.5 mg/dL then they are stage III (regardless of albumin levels). These results are associated with a better or worse prognosis.
Complications of Multiple Myeloma
Acute. Unlike several other types of cancer, multiple myeloma can affect the body in several different ways. It is very important to remember that not all patients will experience all of the complications. Furthermore, effective treatments are available to mitigate them. The acute complications stem from the fact that several organs may be affected; consequently, myeloma can produce a wide variety of symptoms: low blood cell count, infections, reduced kidney function, bone fractures, high calcium levels and spinal cord compression.
Chronic. A common chronic complication, caused by certain treatments, is numbness in the hands and feet (peripheral neuropathy). Any patients who present these symptoms should discuss them with their medical team as early treatment can provide relief and reduce the intensity.
|
{
"pile_set_name": "Pile-CC"
}
|
IT Support
Convey Tech Labs improves productivity and reduces downtime by providing best-in-class IT support. Highlights: multi-language support, support projects, virtual labs and more…
We offer a wide range of end-to-end IT solutions, including IT infrastructure management and IT support services. Our focus is to deliver exceptional customer service and high-performing business IT solutions through the latest technology. At Convey Tech Labs, we are committed to delivering innovative solutions that fully meet our clients' business objectives and their technology and cost-effectiveness needs.
|
{
"pile_set_name": "OpenWebText2"
}
|
|
{
"pile_set_name": "PubMed Central"
}
|
/*=============================================================================
Copyright (c) 2001-2011 Joel de Guzman
Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
==============================================================================*/
#if !defined(BOOST_SPIRIT_DIRECTIVE_FEBRUARY_05_2007_0313PM)
#define BOOST_SPIRIT_DIRECTIVE_FEBRUARY_05_2007_0313PM
#if defined(_MSC_VER)
#pragma once
#endif
#include <boost/spirit/home/qi/directive/as.hpp>
#include <boost/spirit/home/qi/directive/encoding.hpp>
#include <boost/spirit/home/qi/directive/hold.hpp>
#include <boost/spirit/home/qi/directive/lexeme.hpp>
#include <boost/spirit/home/qi/directive/no_skip.hpp>
#include <boost/spirit/home/qi/directive/matches.hpp>
#include <boost/spirit/home/qi/directive/no_case.hpp>
#include <boost/spirit/home/qi/directive/omit.hpp>
#include <boost/spirit/home/qi/directive/raw.hpp>
#include <boost/spirit/home/qi/directive/repeat.hpp>
#include <boost/spirit/home/qi/directive/skip.hpp>
#endif
|
{
"pile_set_name": "Github"
}
|
The covering of substrates such as conductors or cores for use in communications with plastic insulating or jacketing materials is generally accomplished with pressure or tubing extrusion tooling. In pressure extrusion, a substrate is moved through a core tube having an opening that is only slightly larger than the substrate. The end of the core tube is positioned within a die cavity and spaced from a land of a die through which the substrate and the plastic extrudate are moved. Pressure extrusion results in a well defined insulative cover which is disposed tightly about the substrate.
In normal pressure extrusion tooling, the moving substrate is exposed to a relatively high melt pressure of the plastic material in a so-called "gum space" between the end of the core tube and the die land. Flow of the plastic material is comprised of two components--differential pressure flow and drag flow. The pressure flow is caused by the difference in pressure between the entrance to the land and the exit orifice of the die. Drag flow is defined as the volumetric forward displacement of a viscous material between a stationary and a moving surface, such as between the land and the substrate. See E. I. Bernhardt, Processing of Thermoplastic Materials, published in 1974 by Krieger Publishing Company.
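To make the two flow components concrete, a rough lubrication-theory sketch may help; this is a textbook Newtonian approximation, not part of the patent's disclosure, and all symbols here are illustrative. For a thin annular gap of mean diameter $D$, gap height $h$ and land length $L$, the total volumetric output is approximately

$$Q \;\approx\; \underbrace{\frac{\pi D h^{3}}{12\,\mu L}\,\Delta P}_{\text{pressure flow}} \;+\; \underbrace{\frac{\pi D h}{2}\,V}_{\text{drag flow}},$$

where $\mu$ is the melt viscosity, $\Delta P$ the pressure drop across the land, and $V$ the speed of the advancing substrate; the drag term is simply the gap's cross-sectional area times half the speed of the moving surface.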
This relatively high melt pressure requires that the inside surface of the core tube be only slightly larger than the outer dimension of the substrate. This avoids any problems in concentricity of the insulation cover and creates a seal which prevents the extrudate from flowing in a direction opposite to the direction of advance of the substrate and into the core tube. Typically, the clearance between the substrate's outer surface and the inner surface of the core tube is 0.001 to 0.002 inch for product sizes in the 0.015 to 0.075 inch range.
Unfortunately, this relatively small clearance prevents any substrate irregularities such as intermittent oversize sections from passing through the tooling. Consequently, particular substrates having a non-uniform cross-section or any spliced, relatively smooth cores cannot be insulated using conventional pressure extrusion techniques. If the core tube is not oversized to accommodate these irregularities, the substrate will break, requiring down time for operator string-up. On the other hand, if the core tube is oversized, the pressure in a conventional pressure extrusion process will cause a backflow of the plastic material into the core tube.
One such substrate having irregularities is that of a conductor of a telephone cord which is used with customer station equipment. A telephone cord conductor generally comprises a polymeric core having a plurality of tinsel ribbons wrapped helically thereabout. Telephone cords are well disclosed in the prior art such as, for example, U.S. Pat. No. 3,037,068 issued May 29, 1962 in the name of H. L. Wessel, and in U.S. Pat. Nos. 2,920,351 and 3,024,497 issued on Jan. 12, 1960 and Mar. 13, 1962 respectively in the names of E. C. Hardesty and D. L. Myers. Because a tinsel conductor is made with something less than a constant cross-section, the core tube must be oversized.
For these kinds of products, the art has resorted to tubing processes in which the leading end of the core tube generally is flush with or extends beyond the die opening. See U.S. Pat. No. 3,554,042 which issued on Jan. 5, 1971, in the name of E. R. Cocco. But in commonly assigned, copending application Ser. No. 229,434 which was filed on Feb. 29, 1981 now U.S. Pat. No. 4,339,298, the downstream end of the core tube is positioned within the die land. In a tubing operation, the clearance between the inner surface of the core tube and the outer dimension of the substrate, such as an array of tinsel conductors, is large enough to permit oversize substrate sections to be passed through the core tube without jamming. Unlike pressure extrusion, tubing relies solely on differential pressure flow and the extrudate is drawn down about the substrate externally of the die.
A tubing process does not always result in the most acceptable product since tubed covers generally have more size variations and irregular surfaces and are not disposed as tightly about the substrate as in a pressure extrusion process. It should be clear that irregular or intermittently oversized substrates which are necessarily tube-insulated or jacketed are done so at some expense to the overall product configuration and/or performance.
This disadvantage of a tubing process has been aggravated because of recent changes in the materials which are used for insulation and jacketing. These changes in materials, at least for cords, have come about because of a somewhat recently introduced cord connection arrangement, which is referred to as modularity. Miniature plugs are connected to each end of a cord to facilitate attachment to jacks in telephone instruments and in wall outlets. For example, see U.S. Pat. Nos. 3,699,498 and 3,761,869 issued Oct. 17, 1972 and Sept. 25, 1973 respectively in the names of E. C. Hardesty, C. L. Krumreich, A. E. Mulbarger, Jr. and S. W. Walden and U.S. Pat. No. 4,148,359 issued Apr. 10, 1979 in the name of E. C. Hardesty. With the introduction of modularity, it became necessary to use a different cord construction because of a need for a smaller cross-section to be compatible with the plugs. In order to reduce the size of the insulated conductor, the tinsel is insulated with a crystalline, relatively high molecular weight plastic material as disclosed and claimed in U.S. Pat. No. 4,090,763 which was issued on May 23, 1978 in the names of W. I. Congdon, J. J. Mottine and W. C. Vesperman and which is incorporated by reference hereinto. A material such as that disclosed in the above-identified Congdon et al application is available commercially from E. I. duPont Company under the trade name Hytrel.RTM. polyester elastomer.
Extrusion of the above-identified plastic material is characterized by rapid changes in melt viscosity and melt strength with slight variations of polymer temperature. For relatively high molecular weight and/or branched polymers such as Hytrel.RTM. polyester elastomer material, the melt viscosity increases significantly as the pressure increases. These characteristics could cause non-uniform wall thickness and polymer flow pulsations unless suitable control is exercised.
The prior art also shows techniques for controlling the engagement of a tubed Hytrel.RTM. plastic extrudate with the core being enclosed. In U.S. Pat. No. 4,206,611, which issued on June 3, 1980 in the names of W. M. Kanotz et al, an extruded tubular covering is held out of contact with an advancing conductor until the extrudate becomes sufficiently form-sustaining by suitable crystallization. Then, when the crystallized insulation is drawn down on the conductor, any tinsel burrs which protrude outwardly are compressed. This results in a conductor having a continuously concentric insulation and a uniform wall thickness.
There are insulating operations other than these which are used to cover tinsel conductors in which problems have developed because of the plastic which is extruded. For example, a low resistance cord may include a plurality of conductors each comprising wires which are stranded together and insulated. Relatively high pressures are required to extrude some plastic materials such as the hereinbefore-mentioned Hytrel.RTM. plastic material. Such plastic materials have a high molecular weight and are polymerically branched, and normal pressure extrusion techniques may cause dramatic melt viscosity and shear stress increases thereby causing melt fracture.
Melt fracture of particular plastic materials during extrusion is a structural breakdown by fracture within a polymer melt where the critical shear stress becomes abnormally independent of extrusion die orifice size. The result is an insulation cover which is extremely irregular and totally unacceptable.
And yet, these plastic materials such as Hytrel.RTM. elastomers have much to offer. They generally are tough and mechanically resistant to many of the conditions encountered by insulated substrates in the field. It is highly desirable to be able to take advantage of these benefits; but to do so, the problem of melt fracture must be overcome.
It should be clear, that there are several problems in the extrusion of particular plastic materials which must be addressed. Moreover, extrusion techniques need to be reexamined to find solutions to problems caused during the covering of non-uniform substrates. Prior art extrusion technology seemingly lacks tooling which is capable of extruding a substantially uniform, substantially concentric wall about a substrate which is irregular or which includes intermittent oversized portions.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Q:
how to add occupants/users to a MUC room?
I have created a persistent MUC room using the ejabberd API "create_room_with_opts". I am now adding a user to the room by subscribing the user to the room using the "subscribe_room" API, with the following request and response.
Req:
{
"user": "vishesh@dub/dummy",
"nick": "vish",
"room": "[email protected]",
"nodes": "urn:xmpp:mucsub:nodes:messages,urn:xmpp:mucsub:nodes:affiliations,urn:xmpp:mucsub:nodes:subject,urn:xmpp:mucsub:nodes:presence"
}
Res:
[
"urn:xmpp:mucsub:nodes:messages",
"urn:xmpp:mucsub:nodes:affiliations",
"urn:xmpp:mucsub:nodes:subject",
"urn:xmpp:mucsub:nodes:presence"
]
But when I list the number of occupants, it is reported as 0. I used the "get_room_occupants_number" API, which had the following request and response.
Request:
{
"name": "roomdub",
"service": "conference.dub"
}
Response:
{
"occupants": 0
}
I am unable to understand why I don't see the user I added? Did I miss any step?
A:
An account can be a room "subscriber", and receive notifications, and can also sends messages to the room. As described in https://docs.ejabberd.im/developer/xmpp-clients-bots/proposed-extensions/muc-sub/
Alternatively (or simultaneously), the account can be a room "occupant", and can see other room occupants' presence, how they join and leave, receives messages, private messages and can also send them. As described in https://xmpp.org/extensions/xep-0045.html
So, this sentence is wrong:
I am now adding a user to the room by subscribing the user to the room
You are not "adding" the user to the room, because after all that concept is not define in any of the protocols I mentioned. You are "subscribing" it to some room events. And doesn't make him an "occupant".
|
{
"pile_set_name": "StackExchange"
}
|
[Photo: Artist and gallery owner Loren Naji at his gallery. (Courtesy Loren Naji)]
Loren Naji, owner of the Naji Studio Gallery on West 25th Street, was shut down Friday night by an inspector for the Cleveland Fire Department for the lack of an occupancy certificate.
This was the second time in three weeks Naji's gallery has been shut down and cited for building violations.
Friday night, Naji had an opening at his studio gallery with about 50 people in attendance. No alcohol was served during the event. The attendees left peacefully.
Naji was cited May 2, when another opening was shut down by state liquor board officials for serving beer and wine without a permit, prompting an outcry in the arts world over whether law enforcement overstepped its bounds busting an art show, where wine and beer are commonly served.
Friday night, Cleveland Fire Department inspector James Ruffin was at the gallery to shut down the event.
"The bottom line is that Mr. Naji does not have a certificate of occupancy," Ruffin said. "I'm just doing my job."
After the crowd left the gallery, Ruffin further explained the violation.
"If you have a building in the city of Cleveland that is open to the public, there are certain safety guidelines you have to follow. That's why you need a certificate of occupancy. It requires you to adhere to building department codes relating to fire protection systems, electrical codes, storage spaces and exit signs. Mr. Naji had the time to be aware of these requirements."
Naji said he was in the process of obtaining the certificate of occupancy and was assured by a city official he could proceed with his event.
|
{
"pile_set_name": "OpenWebText2"
}
|
Q:
Regular expression for replacement operation
Is there any regular expression that will replace everything except alphanumeric?
My attempt (not working)
string str = "This is a string;;;; having;;; and It also 5555 777has dot (.) Many dots(.....)";
Regex rgx2 = new Regex("^[a-zA-Z0-9]+");
string result1 = rgx2.Replace(str, "");
A:
Use [^a-zA-Z0-9]+ instead of ^[a-zA-Z0-9]+. Inside a character class, a leading ^ negates the set, so [^a-zA-Z0-9]+ matches one or more characters that are not letters or digits; outside a class, ^ merely anchors the match to the start of the string, which is why your original pattern was deleting the leading alphanumeric run instead.
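Applied to the snippet from the question, only the pattern changes (note this also removes spaces; if you want to keep them, add a space inside the class: [^a-zA-Z0-9 ]+):

string str = "This is a string;;;; having;;; and It also 5555 777has dot (.) Many dots(.....)";
Regex rgx2 = new Regex("[^a-zA-Z0-9]+");
string result1 = rgx2.Replace(str, "");
// result1 == "ThisisastringhavingandItalso5555777hasdotManydots"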
|
{
"pile_set_name": "StackExchange"
}
|
Vincent van Gogh sold just one painting in his whole life, to Anna Boch, the sister of a friend, for about 400 francs, but that didn't stop him painting 800 more :)
|
{
"pile_set_name": "OpenWebText2"
}
|
Pluto gets a call from Earth telling him he isn’t a planet anymore, so he sets out on a journey through the solar system to find out why in this funny and fact-filled romp that’s perfect for fans of The Scrambled States of America.
|
{
"pile_set_name": "Pile-CC"
}
|
Design and evaluation of multi-indicator profiles for targeted-selective treatment against gastrointestinal nematodes at housing in adult dairy cows.
Targeted-selective treatments against gastrointestinal nematode (GIN) in adult dairy cows require the identification of "cows to treat", i.e. cows whose milk production (MP) would increase after treatment. This study aimed at quantifying the ability of multi-indicator profiles to identify such cows. A randomized controlled clinical trial was conducted at housing in 25 French pasturing dairy herds. In each herd, treated cows received fenbendazole orally, control cows remained untreated. Daily MP was recorded and the MP variation between the pre- and post-visit periods was calculated (ΔMP) for each cow. ΔMP was modelled with control cows data (n=412) (piecewise linear mixed model). Estimated parameters were applied to treated cows data (n=414) to predict the expected ΔMP in treated cows if they had not been treated. Treated cows with an observed ΔMP (with treatment) higher than the expected ΔMP (without treatment) were labelled as "cows to treat". Herds where at least 50% of the young cows were "cows to treat" were qualified as "herds to target". To characterize such cows and herds, the available candidate indicators were (i) at the cow-level: parity, stage of lactation and production level, faecal egg count (FEC), serum pepsinogen level and anti-Ostertagia antibody level (expressed as ODR); (ii) at the herd-level: bulk tank milk (BTM) Ostertagia ODR, Time of Effective Contact (TEC, in months) with GIN infective larvae before the first calving, and percentage of positive FEC. These indicators were tested one-by-one or in combination to assess their ability to characterize "herds to target" and "cows to treat" (Chi-square tests). 115 out of 414 treated cows (27.8%) were considered as "cows to treat", and 9 out of 22 herds were qualified as "herds to target". The indicators retained to profile such cows and herds were the parity, the production level, the BTM Ostertagia ODR and the TEC. Multi-indicator profiles were much more specific than single indicator profiles, induced lower treatment rates, thereby minimizing the selection pressure on parasite populations. Particularly, to target a herd, the specificity was better with the profile "high BTM Ostertagia ODR and low-TEC" than with the BTM ODR value taken into account alone. The targeted-selective treatment of "young cows, belonging to herds with a high BTM ODR at housing and a low TEC" appeared as a pertinent solution, enabling a global approach for the control of GIN infection in which GIN control in heifers is connected to GIN control in adult cows.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Emails Show Vitriol Toward Sanford Police Chief
SANFORD, Fla. (AP) — In his waning days as Sanford police chief, Bill Lee received blistering emails with every curse word imaginable, criticizing him for not immediately arresting George Zimmerman for fatally shooting 17-year-old Trayvon Martin.
Emails obtained by The Associated Press show he also got requests for media interviews from as far away as Qatar's Al Jazeera network and letters of support from other law enforcement officers nationwide. Emails from the scholarly website, racismreview.com, and the media monitoring site, TVEyes, started showing up in his inbox even though he hadn't subscribed to them.
"The truth will come out later," Lee wrote in an email to a supporter.
Martin's Feb. 26 death in a gated community in the Orlando suburb of Sanford first drew national attention on March 8, the day his relatives held their first news conference to draw attention to the fact that Zimmerman hadn't been arrested. Zimmerman wouldn't be charged with second-degree murder until 44 days after the shooting. During that time, protesters around the nation demanded Zimmerman's arrest, and the Sanford Police Department was accused of racism and incompetence.
Zimmerman, 28, pleaded not guilty and was released on a $1 million bond while he awaits trial. He is claiming self-defense under Florida's "stand your ground" law, which allows individuals to use deadly force if they are doing nothing illegal. It relieves them of a duty to retreat if they believe their lives are in jeopardy.
The emails to Lee on that first day of national attention started off forceful but polite.
"When are you going to make an arrest for this crime of murder of a 17-year-old who was armed with Skittles and a can of iced tea?" Mark Anderson wrote from Chicago.
Lee responded with a form letter he used repeatedly, saying his department was conducting a thorough investigation.
The chief was aware that interest in the case was growing by the minute so he sent an email to Sanford's mayor and city manager soon afterward, stating that some of the evidence corroborated Zimmerman's claim of self-defense. He had assured Martin's family that a thorough investigation was being conducted and that the State Attorney's Office supported the decision not to arrest Zimmerman immediately, he wrote in the email to his bosses. The case would later be reassigned to a state attorney in Jacksonville after the local prosecutor recused himself.
By that evening, the emails from around the nation were getting heated and vulgar.
"WTF are you waiting for?" said one email from a person only identified as Jerry. "C'mon man, grow a (expletive) pair and do the right thing!"
The AP obtained the emails through a public records request. The emails cover five nonconsecutive days from March 8 to when Lee took a leave of absence from his $102,000-a-year job on March 22, a day after he had received a vote of no confidence from Sanford city commissioners. He was fired in June after less than a year on the job. Before taking the position, he had been a sheriff's deputy for 27 years.
Lee, 52, refused a request for an interview. His spokeswoman, Sara Brady, said he has been enjoying time with his family.
The emails to Lee streamed in from around the country, portraying Sanford as a small, racist Southern town, akin to Maycomb, Ala., from Harper Lee's "To Kill a Mockingbird," with a redneck Barney Fife-like chief running its police department. In fact, Sanford is an Orlando suburb with a gentrified downtown of coffee shops, art galleries and cafes. Lee has a master's degree in public administration and has received training from the FBI.
The emails urged Lee to resign and asked how he could sleep at night. A few recommended he commit suicide and one called Lee the worst person on Earth.
Tammi Cubilette, assistant director of instructional support services at the Columbia University School of Social Work in New York wrote "Where's your buddy George Zimmerman? Maybe the two of you can play Stand Your Ground with each other and the world will be a little less scummier!"
Cubilette, in a telephone interview on Wednesday, said she stood by her email but realizes information that has since been released shows there were Sanford police detectives who doubted Zimmerman's story and that the investigation was more thorough than initially thought.
Lee feels validated by evidence that has been released since Zimmerman's arrest showing detectives took investigative steps that they had been accused of ignoring, Brady said.
Lee also had his supporters, particularly among other law enforcement officers.
"Hang in there!!! Trial by fire makes for good police chiefs!" Richard Beary, the chief of the University of Central Florida's police force wrote. Lee responded, "I love my dog."
By mid-March, as the fury was building, Lee sent out an email to his officers, warning them to take precautions when responding to calls given that emotions were running high. He also defended his department's actions.
"Be courteous and professional when members of the community may voice their position and disdain for the police department," he wrote. "The investigation of this incident was complete and unbiased. There is no reason for anyone in the police department to feel we have done anything but a professional job in providing police service and trying to rebuild the trust of the community."
|
{
"pile_set_name": "Pile-CC"
}
|
Rescue groups hope next president will adopt nation's first dog from a shelter
Four New York shelters nominating six candidates in campaign to publicize plight of homeless animals
As Americans get closer to selecting the next president — and with an election season that has been far from ordinary — a new advocacy campaign is hoping to make way for the first ever rescue dog in the White House.
The Top Dog 2016 campaign — launched by agencies Thunder11 and Buzzhunter — brings together animal rescue organizations from across the country with the mission to encourage the incoming commander in chief to adopt, instead of buy, his or her four-legged best friend.
Among the nine partnering organizations, which each nominate their non-partisan pooches from their shelters, are New York City’s Mr. Bones & Co., LIC Feral & Friends, Sean Casey Animal Rescue, and The Animal Project.
Manhattan’s Mr. Bones & Co., which is located at 1123 Broadway, has nominated not one, but six candidates, including a mom named Lucy and her pups – the Peanuts – named Charlie Brown, Sally, Marcie, Peppermint Patty and Snoopy.
According to Elizabeth Frank, owner of Mr. Bones & Co., bringing a rescue dog into the White House would help teach Americans to not pass up mixed breed animals or which have been abused or abandoned and need time to trust people.
“What better advocate for adopting and rescue than the White House?” Frank said. “If it’s good enough for the president of the United States of America, then it’s good enough for your family.”
LIC Ferals & Friends from Queens decided to select Kramer, a pitbull named after the “Seinfeld” character, as its nominee.
Rescuer Gina Lori said the first family selecting a pitbull would help dispel the bad rap the breed gets for being aggressive toward humans and other animals. The maligned breed, she said, is very loyal and gentle.
“Every single presidential election, the presidential family buys a dog from a breeder and it’s totally a blow to all the rescue efforts that are done across the country,” Lori said. “But if they do decide to adopt, it would be very historic and would be a great victory for us.”
Presidents in the past, including current White House occupant Barack Obama, have opted for pure breed dogs, such as golden and Labrador retrievers, Portugese water dogs, and King Charles spaniels.
However, along with welcoming the first rescue into the Oval Office, the campaign also hopes to raise awareness on the importance of adoption.
“This is truly a labor of love to raise awareness about the fact that millions of animals are looking for homes and the benefits they provide for owners is immeasurable,” said Marco Greenberg, co-creator of the campaign and president of Thunder11. “We want to send a message that it is really important to at least consider adopting a dog.”
Greenberg – who has two rescue dogs and two rescue cats – added that along with sharing a message of both care and compassion to all Americans, getting a rescue to join the First Family would also help address a crisis that sees about 7.6 million animals enter shelters nationwide.
Along with helping millions of animals find homes, advocating for adopting rescues would also help lift an economic burden on Americans as a whole.
According to the ASPCA, taxpayers pay close to $2 billion each year to help round up, house and in most cases euthanize shelter dogs.
Voters in the campaign can get to know the candidates and choose their favorites via the Top Dog website and, although the pooches are up for the presidential seat, they are all also up for adoption by other families as well.
The other organizations involved include The Brittany Foundation, Dachshund Rescue of Los Angeles and spcaLA in California; the Chicago Canine Rescue Foundation; and PetConnect Rescue in Maryland.
Greenberg said that other organizations are also encouraged to join the campaign.
And although advocacy for shelter animals has dramatically grown in recent years, he believes that getting the next president to opt for adoption, would give the movement a large boost.
“The good news is that we’re heading in the right direction and the bad news is that we still have a really long way to go,” he said. “This is not just about counting votes in a popularity contest, the real mission is to get these dogs into a home.”
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
Leave minimal tip
If you need to pay a certain amount of money with different coins, calculating the exact number of coins to pay the amount is simple; you can use something like:
for i, j in zip(coins, needed):
    if amount >= 2*i:
        j = amount // i
        amount = amount - j * i
        print(i, " : ", j)
This works if I need to pay $98 and I have 50, 10, 5 and 1 coins.
But what if I need to pay 98 and I only have 100, 50 and 20 coins?
(The optimal solution would be to give a 100, and have 2 as loss.)
Is there a simple platonic way to solve it? Or do I need to compute all the different variations and search for the minimal loss?
A:
@h4z3 Thanks.
One solution which works is below; if anyone knows a shorter/better version, thanks in advance:
d_amount = amount
x = 0
while True:
    served_coins = []
    served_units = []
    for i, j in zip(coins, needed):
        if d_amount >= i:
            j = d_amount // i
            d_amount = d_amount - j * i
            served_coins.append(i)
            served_units.append(j)
    # print(d_amount)
    if d_amount == 0:
        break
    else:
        x = x + 1
        d_amount = amount + x
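A compact sketch of the same idea (the helper name is hypothetical, not from the original post): keep raising the target until a greedy decomposition over the descending-sorted coins pays it exactly, then report the overpayment as the loss. Like the loop above, greedy can overshoot the true optimum for pathological coin systems, but it handles the 98-with-100/50/20 example:

def pay_with_overpayment(amount, coins):
    # coins must be sorted in descending order, e.g. [100, 50, 20]
    target = amount
    while True:
        remaining = target
        used = {}
        for c in coins:
            used[c], remaining = divmod(remaining, c)
        if remaining == 0:
            return target, used  # loss is target - amount
        target += 1

print(pay_with_overpayment(98, [100, 50, 20]))  # -> (100, {100: 1, 50: 0, 20: 0})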
|
{
"pile_set_name": "StackExchange"
}
|
This book constitutes the refereed proceedings of the Third TC3 IAPR Workshop on Artificial Neural Networks in Pattern Recognition, ANNPR 2008, held in Paris, France, in July 2008.
The 18 revised full papers and 11 revised poster papers presented were carefully reviewed and selected from 57 submissions. The papers combine many ideas from machine learning, advanced statistics, and signal and image processing for solving complex real-world pattern recognition problems. They are organized in topical sections on unsupervised learning, supervised learning, multiple classifiers, applications, and feature selection.
WiMAX is bringing about a worldwide revolution in broadband wireless access, including both fixed and mobile handsets. The IEEE 802.16 working group standardized most aspects of WiMAX signaling messages. However, several algorithms were left unspecified, opening the door for innovations in protocol engineering for 802.16-based systems.
This book constitutes the proceedings of the First International Conference on e-Technologies and Networks for Development, ICeND 2011, held in Dar-es-Salaam, Tanzania, in August 2011. The 29 revised full papers presented were carefully reviewed and selected from 90 initial submissions. The papers address new advances in internet technologies, networking, e-learning, software applications, computer systems, and digital information and data communications technologies, covering technical as well as practical aspects.
This book is part of a three-volume set that constitutes the refereed proceedings of the 4th International Symposium on Neural Networks, ISNN 2007, held in Nanjing, China in June 2007. The 262 revised long papers and 192 revised short papers presented were carefully reviewed and selected from a total of 1,975 submissions.
Consequently, given n predefined prototypes, the embedding of one particular graph is established by means of n distance computations in polynomial time. Clearly, the graph embedding procedure described above provides a foundation for a novel class of graph kernels. Based on the mapping ϕ_n^P, one can define a valid graph kernel κ by computing the standard scalar product of two graph maps in the resulting vector space: κ(g_i, g_j) = ⟨ϕ_n^P(g_i), ϕ_n^P(g_j)⟩. Note that, in contrast to some other kernel methods, the approach proposed in this paper results in an explicit embedding of the considered graphs in a vector space.
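A minimal sketch of the prototype-based kernel described in this excerpt; the function and parameter names (and the graph distance dist) are illustrative assumptions, not from the text:

def graph_kernel(g1, g2, prototypes, dist):
    # Embed each graph as its vector of n distances to the prototypes,
    # then return the standard scalar product of the two embeddings.
    phi1 = [dist(g1, p) for p in prototypes]
    phi2 = [dist(g2, p) for p in prototypes]
    return sum(a * b for a, b in zip(phi1, phi2))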
[Fig. 2: The modular diagram of the CRBM system with two visible and four hidden neurons; Table 1 omitted.]
... the slope of ϕi. On the other hand, multi-channel, uncorrelated noise {ni} are injected into the neurons to make the outputs, {vi} and {hi}, probabilistic. Parameters {wij} and {ai} are stored as voltages across capacitors, and are adaptable by on-chip learning circuits. The learning circuits can also refresh {wij} and {ai} to specific values after training.
|
{
"pile_set_name": "Pile-CC"
}
|
[Eurotransplant--new possibility for the Hungarian transplantation].
The year 2010 was a milestone in the history of transplantation in Hungary. The State Secretary for Health Issues announced a program in order to solve the serious problems of organ transplantation: 1) to increase waiting lists, 2) to raise donor numbers, 3) to establish a lung transplant program in the country, 4) to promote education and increase the knowledge base regarding transplantation for the public and the medical profession, and finally, 5) to begin negotiations for Hungary to join Eurotransplant. Joining Eurotransplant has been a priority of the transplant community. Notably, this year the Budapest Transplant Center performed 20% of its kidney transplants from living donors, up from a historical frequency of 5%; this operation is available in all four centers from this year.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Phantom’s Hood
The Phantom’s Hood is back by popular demand! Shroud your face with this seasonal hood.
Outfits
Dress up as the king of Halloween himself with the Mad King’s Outfit, or if you’re feeling more rebellious, grab the Bloody Prince’s Outfit. If neither will do for you, then don the Witch’s Outfit this year!
Miniature Spooky Trio
This set contains the Mini Spooky Ghost, Mini Spooky Spider, and Mini Spooky Skeleton. Combine all three together in the Mystic Forge to create the terrifying Chainsaw the Skeleton miniature!
|
{
"pile_set_name": "OpenWebText2"
}
|
Prediction of progression of radiographic knee osteoarthritis using tibial trabecular bone texture.
To develop a system for predicting the progression of radiographic knee osteoarthritis (OA) using tibial trabecular bone texture. We studied 203 knees with (n = 68) or without (n = 135) radiographic tibiofemoral OA in 105 subjects (90 men and 15 women with a mean age of 54 years) in whom 2 sets of knee radiographs were obtained 4 years apart. We determined medial and lateral compartment tibial trabecular bone texture using an automated region selection method. Three texture parameters were calculated: roughness, degree of anisotropy, and direction of anisotropy based on a signature dissimilarity measure method. We evaluated tibiofemoral OA progression using a radiographic semiquantitative outcome: an increase in the medial joint space narrowing (JSN) grade. We examined the predictive ability of trabecular bone texture in knees with and those without preexisting radiographic OA, with adjustment for age, sex, and body mass index, using logistic regression (generalized estimating equations) and receiver operating characteristic curves. The prediction of increased medial JSN in knees with or without preexisting radiographic OA was the most accurate for medial trabecular bone texture; the area under the curve (AUC) was 0.77 and 0.75, respectively. For lateral trabecular bone texture, the AUC was 0.71 in knees with preexisting OA and 0.72 in knees without preexisting OA. We have developed a system, based on analyzing tibial trabecular bone texture, which yields good prediction of loss of tibiofemoral joint space. The predictive ability of the system needs to be further validated.
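As a rough illustration of the evaluation step (logistic regression on the texture parameters plus covariates, with the AUC read off the ROC curve), here is a minimal sketch on placeholder data; the GEE adjustment for knees nested within subjects is omitted, and all names and numbers are illustrative:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# One row per knee: [roughness, anisotropy_degree, anisotropy_direction, age, sex, bmi]
X = rng.normal(size=(203, 6))       # placeholder features for 203 knees
y = rng.integers(0, 2, size=203)    # placeholder outcome: medial JSN grade increased

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(round(auc, 2))                # the study reports AUCs of 0.71-0.77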
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Q:
Declare a different android:name in Manifest
I'm trying to use a global variable. I got this working by doing the following:
1/ First I created this class:
package ar.ncantarini.mapa;
import android.app.Application;
public class MyApplication extends Application {
int id_mascota;
public int getId_mascota() {
return id_mascota;
}
public void setId_mascota(int id_mascota) {
this.id_mascota = id_mascota;
}
}
2/ In some Activity I put:
// Calling Application class (see application tag in AndroidManifest.xml)
final MyApplication globalVariable = (MyApplication) getApplicationContext();
// Set the pet id in the global/application context
globalVariable.setId_mascota(1);
3/ In other Activity I do:
// Calling Application class (see application tag in AndroidManifest.xml)
final MyApplication globalVariable = (MyApplication) getApplicationContext();
// Get the pet id from the global/application context
final int id_mascota = globalVariable.getId_mascota();
//inside a method
4/ Finally, in the manifest I add the android:name attribute:
<application
android:name=".MyApplication"
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@android:style/Theme.Holo.Light.NoActionBar" >
<activity
Everything works well. But my problem is that now I need to move the MyApplication class to another package called "Utilities", which is not the "main" package of the Android app.
After moving this class, I tested changing the manifest like this:
1_android:name="Utilities.MyApplication"
2_android:name="Utilities$MyApplication"
3_android:name=".Utilities.MyApplication"
I always get the "no source find" error. The same problem occurs in another project when I use a "BroadcastReceiver" class and put it in a package that isn't the main package. In my case, my main package is ar.ncantarini.mapa.
EDIT: here I add a photo of the error I always get when I move the MyApplication class to the Utilities package.
A:
Considering @MarcosVasconcelos's answers, I combined them and arrived at a solution. The key is that package names in Android should be lowercase. That's all.
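For reference, assuming the class ends up in a lowercase sub-package such as ar.ncantarini.mapa.utilities (this package name is inferred from the question, not confirmed), the manifest entry would presumably be either the fully qualified class name:

android:name="ar.ncantarini.mapa.utilities.MyApplication"

or the package-relative form, which resolves against the manifest's package attribute:

android:name=".utilities.MyApplication"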
|
{
"pile_set_name": "StackExchange"
}
|
Alabama GOP Senate candidate Judge Roy Moore went on an education tweet storm Saturday in response to Breitbart News’s article highlighting Democrat candidate Doug Jones’s support for Common Core-like national standards.
Moore, who would like to see federal involvement in education ended, says parents must take the lead in deciding how best to educate their children.
https://twitter.com/MooreSenate/status/937051733817163783
“As the federal government has increased their role in education — spending TRILLIONS of taxpayer dollars along the way — our education system has slipped further and further behind the rest of the world,” Moore tweeted.
https://twitter.com/MooreSenate/status/937051761491283968
“And now, with Democrats like Doug Jones continuing to push failed indoctrination programs like Common Core, we risk allowing education in America to reach a point in which it’s beyond repair,” he added, continuing:
Our Founders knew education worked best when left to parents & local communities, not a massive government bureaucracy that forces one-size-fits-all standards on students robbing them of the educational freedom that’s proven to produce significantly greater results. It’s beyond time we get back to what made America’s education system the envy of the world, and that starts with reject big government liberals like Doug Jones, who’s ideas and policies are proven failures!
https://twitter.com/MooreSenate/status/937051787282079746
https://twitter.com/MooreSenate/status/937051821863993345
Jones told Alabama Political Reporter in July, “[T]here is a role for the federal government in many ways” in education.
The Democrat says the federal government should set national education standards – such as Common Core – that would enforce states’ accountability for any federal taxpayer education dollars received.
Jones also showed that he trusts the federal government’s oversight more than state-run management when he was asked about the prospect of the federal government’s block-granting taxpayer funds to states.
“In Alabama quite frankly people ought to be very jaundiced about letting state officials decide how to spend a block of money,” he said.
Moore and Jones are vying for the U.S. Senate seat previously held by current Attorney General Jeff Sessions. The special election is December 12.
|
{
"pile_set_name": "OpenWebText2"
}
|
Pavel Čermák
Pavel Čermák (born 14 May 1989) is a professional Czech football player.
References
Guardian Football
Category:Czech footballers
Category:Czech expatriate footballers
Category:1989 births
Category:Living people
Category:Czech First League players
Category:FK Baník Most players
Category:FK Viktoria Žižkov players
Category:FC Hradec Králové players
Category:FK Senica players
Category:Slovak Super Liga players
Category:Expatriate footballers in Slovakia
Category:Czech expatriate sportspeople in Slovakia
Category:Association football defenders
|
{
"pile_set_name": "Wikipedia (en)"
}
|
Q:
CSS floating div taking remaining width
I can't seem to make this work and am considering using a table for this now.
I have a page with 3 main divs that are all on the same line (floats).
I want the div in the middle (#pages) to take the remaining width, since I can toggle() both of the side divs.
It looks like this:
jsfiddle here : http://jsfiddle.net/5n3rz/
Here's my code:
<div id="project">
<ul>
<li>z</li>
<li>z</li>
<li>z</li>
<li>z</li>
<li>z</li>
<li>z</li>
</ul>
</div>
<div class="minus">
<a href="#" class="close close_project">
<i class="fa fa-caret-left"></i>
</a>
</div>
<div id="pages">
<textarea name="text" id="texta-pages" placeholder="your page"></textarea>
</div>
<div class="minus">
<a href="#" class="close close_notes">
<i class="fa fa-caret-right"></i>
</a>
</div>
<div id="notes">
<textarea name="notes" id="texta-notes" placeholder="your notes"></textarea>
</div>
and here's my current CSS (everything is height:100%):
#project, .minus, #pages, #notes{
height:100%;
float:left;
}
#project{
width:150px;
}
.minus{
background-color:#CCC;
width:20px;
}
#pages{
min-width:calc((100% - 2*20px - 150px)/2);
}
#notes{
width:calc((100% - 2*20px - 150px)/2);
}
I use jQuery to toggle() the project on the left, and the notes on the right.
I want the #pages part to take all the remaining width when I remove one or both divs on its side.
A:
You can use CSS tables to do this.
EDIT:
So this is what you need to modify to get it to work:
FIDDLE
main{
height:90%;
display: table; /* added */
table-layout: fixed; /* added */
width: 100%; /* added */
}
#project, .minus, #pages, #notes{
height:100%;
display: table-cell; /* added; removed float:left */
}
#project{
width:50%; /* modified width */
}
#pages{
width: 100%; /* modified width */
}
#notes{
width:50%; /* modified width */
}
Here is a (simplified) example:
FIDDLE
In this example you can verify that when you delete column one or three (or both), the middle column fills up the remaining space (right-click, choose inspect element, and then select 'delete node' to remove col1 or col3).
The trick here is to give col2 a value of width:100%.
Markup:
<div class="container">
<div class="col col1">div1</div>
<div class="col col2">div2</div>
<div class="col col3">div3</div>
</div>
CSS
.container
{
display: table;
table-layout: fixed;
width: 100%;
}
.col
{
display: table-cell;
}
.col1
{
background: pink;
width: 20%;
}
.col2
{
background: orange;
width: 100%;
}
.col3
{
background: brown;
width: 30%;
}
|
{
"pile_set_name": "StackExchange"
}
|
320 F.2d 218
138 U.S.P.Q. 541
THOMSON MACHINERY COMPANY, Byron C. Thomson, Estival Aysen, Roland Clement, Ruby Thibodaux and Victor L. Wintz, Appellants, v. Royal J. LaROSE, Edward P. Clause and LaRose-Clause Company, Inc., Appellees. Royal J. LaROSE, Edward P. Clause and LaRose-Clause Company, Inc., Appellees, v. THOMSON MACHINERY COMPANY, Byron C. Thomson, Estival Aysen, Roland Clement, Ruby Thibodaux and Victor L. Wintz, Appellants.
No. 19495.
United States Court of Appeals Fifth Circuit.
Aug. 8, 1963.
Donald L. Peltier, Robert D. Morvant, Thibodaux, La., Raymond J. Mawhinney, Washington, D.C., W. D. Keith, New York City, Paul G. Borron, Jr., Plaquemine, La., Peltier & Peltier, Thibodaux, La., Borron, Owen, Borron & Delahaye, Plaquemine, La., Wilkinson, Mawhinney & Theibault, Washington, D.C., Keith, Bolger, Isner & Byrne, New York City, A. Robert Theibault, Washington, D.C., for appellants.
B. F. Garvey, Bartholomew Diggins, Washington, D.C., Edmond L. Deramee, Deramee & Deramee, Thibodaux, La., Diggins & LeBlanc, Garvey & Garvey, Washington, D.C., for defendants-appellees and cross-appellants, LaRose and Clause and others.
Before RIVES and WISDOM, Circuit Judges, and BOOTLE, District Judge.
PER CURIAM.
1
This appeal involves the validity and infringement of two patents,1 the first on a method, and the second on an apparatus for mechanically handling sugar cane stalks during harvesting. The learned district judge, Honorable J. Skelly Wright, now a Judge of the United States Court of Appeals for the District of Columbia Circuit, in a brilliantly worded opinion reported at 197 F.Supp. 636, et seq., held both claims of the method patent invalid and the first three claims of the apparatus patent invalid, but the fourth claim of the apparatus patent valid and infringed.
2
After careful study and consideration, we agree with the result reached by the district court and substantially for the reasons stated in its opinion. The judgment is therefore
3
Affirmed.
1
Both issued to Royal J. LaRose and Edward P. Clause, the first numbered 2,799,984 and dated July 23, 1957, and the second numbered 2,871,645, dated February 3, 1959
|
{
"pile_set_name": "FreeLaw"
}
|