Division of Power
All my guys need FOUR outlets to power the Christmas light display across our entire front yard. That would be one outlet for each of them. A couple of weeks ago, they worked for days, long, 6 hour days to put up the best display in our neighborhood. I am talking about a lit blue stream with blue deer drinking from it, polar bears, igloos, big lit presents, candy canes, and 12 giant lighted Christmas balls hanging in the trees, each bigger than basketballs.
As you drive by, in our living room window, you can see the 14 foot Christmas tree behind the outdoor light display. We also just got a new large chandelier over the summer.
Turns out we can’t have all three on at once. We have to dim the chandelier to have the Christmas tree on, and we can never turn on the porch light if the outdoor Christmas lights are on. You always have to be thinking in this house, if you want to actually see something at night. The piano is in the Christmas tree room, so it really encourages the boys to memorize their music as they can’t see a note of it anyway. It’s such a good thing none of our immediate neighbors put up lights, I am certain we are pulling power from a square block. It’s probably obvious why we have the best light display, as we are the only ones who have one up, or is it that can have one up?
Relationship thought #11 Draw on all of God’s power. Meditate on the things of this world growing strangely dim. God will always shine brighter than all, and He will always and forever more, have the best Christmas light display.
Amen, | https://audrakrell.com/2007/12/14/division-of-pow/ |
When applying for a job or promotion, it’s easy to default to promoting what you do – increase revenues, build teams, manage complex problems or supervise logistics.
But your value as an employee, team member or leader isn’t only what you do, but the benefits derived from that work and service. Consider, instead, the problems you solve and those who care about that solution.
What problem do you solve?
To say you are a military veteran and know how to solve problems is a great understatement. Your entire time in uniform was spent navigating solutions to complex challenges.
As you transition, step back from the “what” of your service and focus on the “how” and “why.” For instance, were you tasked with leading troops through dangerous landscapes? What problem did that work solve? In this example, you would have been responsible for managing their physical, emotional and psychological well-being by safely moving them from one geographic location to another. The problem you solved, then, was to ensure the safety of your troops in completing high-stakes, complex missions.
Who cares about that solution?
Using the example above, obviously, the troops you are responsible for would care about your ability to lead them to safety. But who else? Would the leaders directing the mission be vested in the outcome? What about the families of those military service members you were responsible for – I’m sure they cared that you did your job well.
What is the benefit they derive?
Driving the mission and solving the challenge you were tasked with creates many benefits. For instance, you are advancing a larger initiative and mission. You are ensuring that the men and women you lead are able to continue their military service in a meaningful way. In other words, the benefit of your ability to solve that problem is more than just the impact on the troops directly involved.
Translating
Now consider how you can “translate” this solution to a civilian narrative. In this example, you could share how your ability to think on your feet, work well under pressure, and enlist the support and endorsement of those who need to follow you, created the opportunity for the bigger mission to be completed. The impact and benefit of the greater mission is that lives are saved and stability is introduced to an unstable community.
When you can confidently and clearly articulate the problem you were tasked to solve, who cared about that problem, and the direct benefits of solving it, you can relate your military experience to a civilian narrative. Odds are fairly low that you will be asked to help your co-workers navigate treacherous landscapes in the corporate world (literally), but you may be called upon to influence and inspire them to challenge the norms and be innovative. Draw the parallels between the two problems to be solved, and your experience, to get the attention of employers. | https://www.military.com/veteran-jobs/career-advice/military-transition/job-seeking-vets-should-highlight-problem-solving-skills.html |
My mother was born in rural Sweden in 1948. She lived with her parents and maternal grandmother in a little red one-bedroom cottage. There was no bathroom, only an outside toilet and the lake nearby.
“I remember how we used to wash our clothes and rugs in the stream when I was a child. I thought it was so much fun! But looking back on it now it can’t have been easy for my mum and grandma; sometimes they had to break the ice to get to the water.”
My grandfather was, amongst other things, a butcher. Meat was hung from the ceiling in the barn to tenderize, and stored in large barrels of salt to keep it from spoiling. They kept chickens and pigs and fished for perch and pike in the lake. My grandmother was – also amongst other things – a seamstress, and wove and sewed to meet the family’s needs.
When my mother was 6, her family built a new house on the same land. It had bathrooms, heating and a fridge/freezer that my great grandmother refused to accept. She would not eat anything that had been stored in the fridge, claiming it was dangerous and spoilt the food.
Considering her background, it is not strange that my mum is so tenacious when it comes to foraging and making the most of what nature has to offer. I remember hours of childhood boredom in blueberry & lingonberry woods, picking and picking until we could hardly carry the buckets back. She generally picks far more than her & dad will eat, and thus gives her conserves away left right and centre. Countless jars of lingonberry jam have found their way to the UK in my suitcase, along with bags of dried mushrooms and herbs.
Mum’s staples are: | https://foodand.co.uk/articles/mum/ |
On 17 September 2016, relatives of Manfred von Richthofen, the ‘Red Baron’, and his first victims were brought together for the first time, at Remembering 1916 – Life on the Western Front.
100 years ago to the day, the Red Baron, the ‘ace of aces’ of aerial warfare in the Great War, shot down a British plane in the skies above Northern France, piloted by former Whitgift student, Lionel Morris, with Tom Rees as Observer.
To mark the Centenary of the aerial dogfight, relatives of the three airmen came together to remember and respect this historic event at a special commemorative dinner: Jill Bush, cousin of the 19-year-old pilot, Lieutenant Lionel Morris; Dr Meriel Jones, great-niece of 21-year-old Observer, Tom Rees; and Baron Donat von Richthofen, great-nephew of the Red Baron (who was just 24 years old in 1916).
Von Richthofen had a personal tradition of ordering a small, engraved silver cup to commemorate each of his victories. A replica of the No 1 Victory Cup, commissioned by von Richthofen from his silversmith in Berlin to mark this, the first of what would be 80 ‘kills’, has been produced by silversmith, Mark Fenn, for the Exhibition, and was used to toast the fallen comrades.
A painting was also specially-commissioned for Remembering 1916, a large canvas in oils by the aviation artist, Alex Hamilton, depicting the ‘dogfight’.
The poignant meeting of the relatives of these historic figures was covered by ITV London Tonight, BBC Radio 4’s Today programme, BBC Radio Wales, The Telegraph, The Times and the Daily Mail. | http://www.remembering1916.co.uk/news.aspx?SubCatID=256&PageID=569 |
Archie Sonic Super Special Issue 9 (also known as Sonic Kids 2) is the ninth issue of the Sonic Super Specials series published by Archie Comics.
Featured stories
Zoneward Bound
- Writer: Mike Gallagher
- Pencils: Sam Maxwell
- Inks: Harvey Mercadoocasio
- Colors: Joshua D. Ray and Aimee R. Ray
- Letters: Vickie Williams
- Editor: J. F. Gabrie
- Managing editor: Victor Gorelick
- Editor-in-chief: Richard Goldwater
Part One
Several years in the past, Sonic, Sally, Tails, Antoine and Rotor enjoy a winter morning in Knothole by having a snowball fight. They all gang up on Antoine and Sonic tries to land the finishing blow, but he slips on the ice and chucks a snowball at Rosie Woodchuck. Rosie fusses at Sally, causing the young princess to think about her missing father. Rosie apologizes while Sally's friends try to cheer her up. Hoping to lift Sally's spirits, Rosie suggests that the kids go ice skating at the frozen pond by the wishing well while she makes them snacks.
The kids head over and play ice hockey, but Sonic hogs all the glory. He skates so fast that the ice on the pond's surface melts, forcing the group to abandon their game and wait for Rosie. Thirsty, Sonic pulls up the well's bucket with Rotor's help, but finds a Power Ring frozen inside. The children notice that the pond has frozen over again and start skiing. The abandoned ring glows mysteriously and the group ends up skiing into three different portals that take them to other Zones.
Part Two
On the other side of the portal, Sally lands in a Zone with nothing but a rollercoaster. The ride ends in a free fall, where a ghostly image of King Acorn covered in crystal stands in her path. Sally falls through the image, shattering it, and finds herself encouraged to lead the Kingdom of Acorn and the Freedom Fighters despite her personal distress.
Meanwhile, Antoine, Rotor and Tails fall through a strange machine that causes their limbs to swap bodies. After coming back together in their proper bodies, the group falls down an endless pit. Tails realizes that they've been able to keep themselves together because they work as a team and the Ancient Walkers are impressed by the young Chosen One's wisdom.
Part Three
Sonic arrives in a green Zone that reminds him of his adventures with Mighty and Ray. Sonic runs ahead and finds King Acorn's former Warlord, Julian Kintobor. Julian demands to be called Dr. Robotnik and asks Sonic how he arrived in this Zone. Sonic asks how Julian arrived in the Zone, but Robotnik just attacks him with his Egg Mobile's wrecking ball. Sonic uses his "Sonic Spin" to destroy the hovercraft and decides he just wants to go home.
Sonic is spit out of a portal back to the lake where the other Freedom Fighters are waiting. At Sally's request, Sonic puts the bucket back in the well, not wanting to deal with more Zone hopping. Rosie arrives and offers the group her baked pecan rings. Sonic is apprehensive about dealing with more rings.
My Secret Guardian
- Writer: Mike Gallagher
- Pencils: Manny Galan
- Inks: Jim Amash
- Colors: Joshua D. and Aimee R. Ray
- Letters: Vickie Williams
- Editor: J. F. Gabrie
- Managing editor: Victor Gorelick
- Editor-in-chief: Richard Goldwater
Several years in the past, Rosie and Julayla help Sally and King Acorn pack for their trip to the Floating Island. Sally is excited to spend some time alone with her father, but the king wrestles with his conscience: the true meaning of their trip is to try and find Elias and Queen Alicia, but he refuses to put the burden of a missing mother and brother on Sally's mind. They land their plane on the island, unaware that Knuckles is watching from the bushes nearby.
Sally is enamored by the island and wants to know all about it. Max asks her to explore on her own while he speaks to the Guardian of the island. Despite being upset about her father ignoring her to attend to business, Sally notices someone watching her in the bushes and demands that they show themselves. Knuckles introduces himself and they bond over their training for future duties. They start to argue about whose job will be harder—the leader of the Kingdom of Acorn or the Guardian of the Floating Island—and fight.
They trade hits and Sally accidentally knocks Knuckles over the edge of the island. She tries to lend him a hand, but he glides back to the island and tries to headbutt Sally. She dodges, causing Knuckles to knock over a tree. Hearing the noise, Locke calls for Knuckles. Knuckles fears that talking to Sally counts as breaking the rules of his Guardian training. Sally promises to hide and keep their meeting a secret. Knuckles agrees and both children are reunited with their parents. Sally and Max head off to explore the island while Knuckles watches from above, feeling that they will meet again someday.
Eve of Destruction
- Writer: Mike Gallagher
- Pencils: Art Mawhinney
- Inks: Rich Koslowski
- Colors: Barry Grossman
- Letters: Vickie Williams
- Editor: J. F. Gabrie
- Managing editor: Victor Gorelick
- Editor-in-chief: Richard Goldwater
Several years ago, a young Sonic, Rotor, Antoine, and Sally roughhouse just outside of the Prower family's home in Mobotropolis. Amadeus tells the children to take their game elsewhere and asks his wife, Rosemary, to rest while he's on duty. Rosemary warns that their baby could be born any minute, but Amadeus insists that his duty to oversee the dismantling of the kingdom's military comes first. He goes on to explain that Julian Kintobor has been appointed as the new Minister of Science, filling the vacancy from Sir Charles' resignation after the Great War.
Amadeus bids his wife farewell and heads off to discuss the dismantling with Julian. Sonic spots him and rushes over, asking if he knows if their child will be a boy or a girl. He explains that he doesn't, but asks Sonic to keep an eye on the house in case Rosemary needs help. Meanwhile, Julian gloats about obtaining Sir Charles' lab and his plan to throw King Acorn into the Zone of Silence. He tests the Roboticizer with Snively, placing a citizen inside and roboticizing them. Amadeus happens into the lab at that very moment and confronts them both. Julian tells Amadeus about his plans and reveals his newest creation, the SWATbot, which grabs Amadeus.
Meanwhile, Rosemary goes into labor. Sonic and his friends load her onto Sally's wagon and take her to Dr. Quack. At Quack's office, the children and King Acorn meet the newborn Miles Prower, who they're surprised to find has two tails. King Acorn asks Quack where Amadeus is, when Julian, Snively and a disguised Amadeus enter the room. Rosemary introduces Amadeus to their son, but he doesn't say anything and walks away. Rosemary begins sobbing while Dr. Quack and King Acorn try to comfort her. Julian and Snively leave with the secretly roboticized Amadeus and plan their takeover for later that night. Rosemary falls asleep and Miles begins crying. Sonic takes the baby and gives him a hug, cheering him up and promising to become fast friends.
Off Panel
- Writer: Mike Gallagher
- Pencils: Dave Manak
Sonic and J. F. Gabrie talk about the positive reception that Off Panel has been getting and how everybody wanted to make a cameo in it. Gabrie then goes to visit Pam Eklund who shrieks, beats him up and gives him the boot. Gabrie realizes that not everyone may want a cameo and while bandaging him, Sonic suggests that he should have waited until Eklund was out of the shower.
Trivia
- According to Tails, his IQ as a kid was slightly higher than Sonic's IQ: Tails' was 300, while Sonic's was 299.
- Rotor claims they are turned into teenagers in this issue. Their ages are as follows:
- Sonic: 15
- Sally: 17
- Tails: 8
- Antoine: 13/14
- Bunnie: 16
- Knuckles: 18
- The game that Rotor is playing on the cover is Sonic Adventure. | https://sonic.fandom.com/wiki/Archie_Sonic_Super_Special_Issue_9 |
When Shmoop was little, we begged Santa to bring us a pony. Even though we never got one, we still dream about ponies sometimes. Many people who don't own, or even ride, horses are still fascinated by these beautiful animals. How about you? Would you pull off a busy highway just to observe some ponies in a pasture? Even if you didn't know the biographical backstory of "A Blessing" (see the "Speaker" section), you could probably tell that the writer of this poem had some first-hand knowledge of horses, based on the realistic details in the poem. But these ponies are still poetic ponies, and they also play a symbolic role in the poem.
- Lines 4-6: Sometimes horses aren't all that happy to see humans entering their pasture. And who can blame them? Horseback riding is probably more fun for the rider than the horse. Still, many folks will tell you that their horses are kind and loving. The ponies in Wright's poem certainly display "kindness," and they "welcome" the humans "gladly." In this sense, the ponies seem to symbolize the generous beauty of nature and the joy it offers humans who take the time to appreciate it.
- Lines 7-8: To fully appreciate the natural world, humans must first step out of their artificial, man-made environment. As a symbol of this boundary, the barbed wire in line 7 separates the humans from the inviting natural world inhabited by the ponies. The barbed wire also confines the ponies, leaving them "alone" and blocking their desire to join the human visitors.
- Lines 11-12: The ponies demonstrate affection for each other as well as for the human visitors. In fact, the image of swans bowing "shyly" even has symbolic associations of romantic love. Line 12 links the idea of "love" to the idea of "loneliness" (for more on this, see the "Themes" section).
- Lines 13-14: In line 13, the image of the ponies' pasture as a kind of "home" suggests that the humans have been fully accepted into this mysterious world of nature that may represent their true home.
- Lines 15-17: In these lines, the poet develops the relationship between the speaker and the pony through the use of tactile imagery—images related to the sense of touch. As readers, we can almost feel the pony's velvety muzzle nudging us. Notice the specific detail "left hand," which intensifies the image by concentrating our attention in a particular area. Even though the speaker doesn't actually wrap his arms around the pony, he imagines doing it, so we imagine it, too. Can't you just feel the warmth of the pony's body?
- Lines 20-21: The tactile imagery continues in these lines. Touched by the "light breeze," the speaker passes along the "caress" by stroking the pony's ear. Line 21 further develops the sensation by comparing the "delicate" skin of the pony's ear to the "skin over a girl's wrist." | http://www.shmoop.com/a-blessing-poem/ponies-symbol.html |
William & Mary Students Participate in Holocaust Remembrance Concert
On Tuesday, January 27th, twenty-six singers from the Botetourt Chamber Singers, Women's Chorus, and W&M Choir joined with a chorus of over 200 singers in the Carpenter Center in Richmond to perform a concert to commemorate International Holocaust Remembrance Day with the Richmond Symphony Orchestra. They were invited to participate in the massed collegiate choir for this sold out event which featured music of many Jewish composers, as well as moving video footage of several area Holocaust survivors, many of whom attended the concert. January 27th marked the 70th anniversary of the liberation of Auschwitz concentration camp.
The students gathered with singers from Virginia Commonwealth University, Old Dominion University, Longwood University, Sweet Briar College, and several other VA institutions as well as with members of the Richmond Symphony Chorus for two days of intensive rehearsals before the performance. The concert included choral music by Leonard Bernstein, Samuel Adler, and a movement of Verdi's Requiem. The Verdi was included in the program because the work was performed many times by prisoners at Terezin. The Richmond Symphony Orchestra played pieces by several Jewish composers including Erwin Schulhoff (1894-1942), Pavel Haas (1899-1944), and Hans Krása (1899-1944) all of whom perished during their internment in a concentration camp.
Sarah VanKirk '15, president of the W&M Women's Chorus said, "It was such a memorable opportunity to be able to perform with so many other singers. The stories of the survivors were tremendously inspiring."
Grayson Kilgo '17, a tenor in the Botetourt Chamber Singers, was equally impressed by the concert. "I was so happy we had a chance to participate. It was a very moving experience."
As witnessed by the personal stories of the survivors, the focus of the performance was on the triumph of the human spirit and the ability of music to make people feel human even in the most degrading and inhuman circumstances.
The William & Mary students were able to participate in this event thanks to the generous support from President Reveley's office, Dean of Arts and Sciences Kate Conley, and Board of Visitor member Lisa Roday.
The full performance will be broadcast on Richmond Public Television later this spring. For more information, please see the Richmond Times-Dispatch articles on the International Holocaust Remembrance Day and the "Voices of Survival" concert. | https://www.wm.edu/as/news/as-news-archive/2015/musc-holocaust-memorial.php |
TECHNICAL FIELD
BACKGROUND ART
CITATION LIST
Patent Literature
SUMMARY OF INVENTION
Technical Problem
Solution to Problem
Advantageous Effects of the Invention
DESCRIPTION OF EMBODIMENTS
[Principle of Impurity Determination Process]
[Alternative to Spectrum of Target Component]
[Impurity Separation Process in the Case Where a Plurality of Impurities Are Present]
[Configuration and Operation of Embodiment for Carrying Out Impurity Determination Process According to Previously Described Principle]
REFERENCE SIGNS LIST
The present invention relates to a chromatogram data processing system for processing three-dimensional chromatogram data collected by repeatedly performing a spectroscopic analysis or mass-spectrometric analysis of a sample containing a component separated by a chromatograph (e.g. liquid chromatograph) or a sample introduced by a flow injection method. More specifically, it relates to a data processing system for determining the presence or absence of an impurity or other similar components superposed on a peak originating from a target component appearing on a chromatogram.
With a liquid chromatograph in which a multichannel detector, such as a photo diode array (PDA) detector, is used as the detector, three-dimensional chromatogram data having the three dimensions of time, wavelength and absorbance can be obtained by repeatedly acquiring an absorption spectrum for an eluate from the exit port of a column, with the point in time of the injection of the sample into the mobile phase as the base point. Similarly, with a liquid chromatograph (LC) or gas chromatograph (GC) in which a mass spectrometer is used as the detector, three-dimensional chromatogram data having the three dimensions of time, mass-to-charge ratio and signal intensity can be obtained by repeatedly performing a scan measurement over a predetermined mass-to-charge-ratio range in the mass spectrometer. The following description deals with the case of a liquid chromatograph using a PDA detector, although the case is the same with a chromatograph using a mass spectrometer as the detector.
FIG. 8A is a model diagram showing three-dimensional chromatogram data obtained with the aforementioned liquid chromatograph. By extracting absorbance data at a specific wavelength (e.g. λ0) from the three-dimensional chromatogram data, a wavelength chromatogram showing the relationship between the measurement time (i.e. retention time) and the absorbance at that specific wavelength, as shown in FIG. 8B, can be created. Furthermore, by extracting data which show the absorbance at a specific point in time (measurement time) from the three-dimensional chromatogram data, an absorption spectrum showing the relationship between the wavelength and the absorbance at that point in time can be created.
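As a concrete illustration, both extractions amount to slicing a time-by-wavelength matrix along one axis. The sketch below uses a synthetic stand-in for the detector output; the axes, values, and variable names are illustrative assumptions, not real PDA data:

```python
# Synthetic stand-in for three-dimensional chromatogram data:
# data[t][w] = absorbance at time index t and wavelength index w.
times = [0.1 * t for t in range(100)]            # retention times (min)
wavelengths = [200 + 2 * w for w in range(101)]  # wavelengths (nm), 200-400
data = [[0.001 * (t + 1) * (w + 1) for w in range(101)]
        for t in range(100)]

# Wavelength chromatogram: absorbance vs. time at a fixed wavelength (254 nm)
w_idx = wavelengths.index(254)
chromatogram = [row[w_idx] for row in data]

# Absorption spectrum: absorbance vs. wavelength at a fixed measurement time
t_idx = 50
spectrum = data[t_idx]

print(len(chromatogram), len(spectrum))  # 100 101
```

The chromatogram is a column of the matrix and the spectrum is a row; the full three-dimensional data set is simply the collection of all such rows over time.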
In such a liquid chromatograph, a quantitative analysis of a known target component is normally performed as follows: A wavelength chromatogram at an absorption wavelength corresponding to that target component is created. On this wavelength chromatogram, the beginning point Ts and ending point Te of a peak originating from the target component are located. The area value of that peak is calculated, and the quantitative value is computed by comparing that area value with a previously obtained calibration curve.
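The quantitation step just described can be sketched as follows (function names, the trapezoidal integration, and the linear calibration form are illustrative assumptions, not details from the patent):

```python
# Integrate the wavelength chromatogram between the peak's beginning point Ts
# and ending point Te with the trapezoidal rule, then convert the area to a
# quantitative value via an assumed linear calibration curve:
#     area = slope * concentration + intercept
def peak_area(times, signal, ts, te):
    area = 0.0
    for i in range(len(times) - 1):
        if ts <= times[i] and times[i + 1] <= te:
            dt = times[i + 1] - times[i]
            area += 0.5 * (signal[i] + signal[i + 1]) * dt
    return area

def quantify(area, slope, intercept):
    return (area - intercept) / slope

# Toy triangular peak located between Ts = 0 and Te = 4
t = [0.0, 1.0, 2.0, 3.0, 4.0]
s = [0.0, 1.0, 2.0, 1.0, 0.0]
a = peak_area(t, s, ts=0.0, te=4.0)
print(a, quantify(a, slope=2.0, intercept=0.0))  # 4.0 2.0
```

In practice the calibration curve is obtained beforehand from standards of known concentration, as the passage above notes.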
There is no problem with such a quantitative determination of a target component if the peak which has appeared on the extracted wavelength chromatogram originates from only that target component. However, a peak is not always composed of only a single component (target component); it is often the case that a signal originating from an impurity unintended by the analysis operator (or more broadly, any component other than the intended one) is superposed on the peak. If the analysis operator performs the quantitative calculation without noticing such a situation, the result of the quantitative determination will be inaccurate. Accordingly, an impurity determination process (or peak purity determination process) for examining whether a peak located on a chromatogram has originated from only the target component or additionally contains an impurity is often performed in advance of the quantitative calculation.
To date, various methods have been proposed and practically used as the impurity determination process for a peak on a chromatogram. However, the actual situation is such that none of the conventional methods is a decisive solution since each method has both advantages and disadvantages.
For example, in the impurity determination method described in Patent Literature 1, the absorption spectrum obtained at each point in time of the measurement is differentiated with respect to wavelength at a maximum (or minimum) absorption wavelength of the target component to calculate a wavelength differential coefficient, and a differential chromatogram showing the temporal change of the wavelength differential coefficient is created. Whether or not a peak originating from the target component on the wavelength chromatogram contains an impurity is judged by determining whether or not a peak waveform similar to the one which appears on a normal chromatogram is observed on the differential chromatogram. This method is excellent in that whether or not an impurity exists can be determined with a high level of reliability by comparatively simple computations. However, in principle, there is the case where an impurity cannot be detected, as will be hereinafter described.
FIGS. 9A-9C show examples of the relationship between the absorption spectrum originating from a target component (solid line) and the absorption spectrum originating from an impurity (broken line).
In the previously described conventional impurity determination method, as shown in FIG. 9A, the wavelength differential coefficient of the absorption spectrum curve of the impurity at wavelength λ0, where the extreme point of the absorption spectrum originating from the target component is located (i.e. the wavelength at which the wavelength differential coefficient is zero), is used for the impurity determination. As shown in FIG. 9A, if the wavelength at which the absorption spectrum of the impurity is maximized does not coincide with wavelength λ0, and the spectrum curve therefore has a certain slope at wavelength λ0, the impurity can be detected. However, as shown in FIG. 9B, if both the extreme point of the absorption spectrum originating from the target component and that of the absorption spectrum originating from the impurity appear at the same wavelength, the wavelength differential coefficient of the absorption spectrum curve of the impurity becomes almost zero, so that the impurity cannot be detected.
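The conventional check can be sketched numerically (a simplified reading of the method in Patent Literature 1, assuming an evenly spaced wavelength grid; the data and names are illustrative):

```python
# At each time point, take the wavelength derivative of the spectrum at the
# target's extreme-point column w0 (central difference, even spacing dw).
# A pure target spectrum is symmetric about w0, so its derivative there is
# near zero; an impurity whose spectrum slopes at w0 leaves a nonzero,
# peak-shaped trace on the resulting differential chromatogram.
def differential_chromatogram(data, w0, dw):
    return [(row[w0 + 1] - row[w0 - 1]) / (2.0 * dw) for row in data]

pure = [[0.0, 1.0, 2.0, 1.0, 0.0]] * 3    # symmetric about column 2
mixed = [[0.0, 1.0, 2.0, 2.0, 2.0]] * 3   # added impurity slopes at column 2

print(differential_chromatogram(pure, w0=2, dw=1.0))   # [0.0, 0.0, 0.0]
print(differential_chromatogram(mixed, w0=2, dw=1.0))  # [0.5, 0.5, 0.5]
```

The failure mode described above is visible here: if the impurity's spectrum also happens to be flat or symmetric at w0, the mixed case collapses to zeros as well, and the impurity goes undetected.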
Furthermore, as shown in FIG. 9C, if the curve of the absorption spectrum originating from the impurity has a low slope (which can be horizontal in an extreme case) in the vicinity of the extreme point of the absorption spectrum originating from the target component, the impurity-originated peak which appears when the differential chromatogram is created may become extremely low and obscured by noise components, so that it will be ultimately impossible to detect the impurity.
In the case where the sample is introduced by a flow injection analysis (FIA) method without using the column and detected with a PDA detector or similar device, the obtained data will also be three-dimensional data having the three dimensions of time, wavelength and absorbance. Such data are practically equivalent to the three-dimensional chromatogram data collected with a liquid chromatograph. Therefore, three-dimensional data collected by the FIA method should also be included with the “three-dimensional chromatogram data” in the present description.
Patent Literature 1: WO 2013/035639 A
The present invention has been developed to solve the previously described problem. Its objective is to provide a chromatogram data processing system capable of correctly and stably determining the presence or absence of the superposition of an impurity on a target peak on a chromatogram even in such a case where the presence or absence of the superposition of the impurity cannot be easily and correctly determined by the previously described conventional impurity determination method.
The present invention developed for solving the previously described problem is a chromatogram data processing system for processing three-dimensional chromatogram data having time, signal intensity and another third dimension collected for a sample to be analyzed, the system including:
a) a filter creator for calculating one auxiliary vector orthogonal to a principal vector which is a multidimensional vector expressing a spectrum which shows or can be regarded as the relationship between the third dimension and the signal intensity for the target component to be observed, and for designating the auxiliary vector as a filter for impurity extraction; and
b) an impurity presence information acquirer for calculating the inner product of a process-target multidimensional vector and the auxiliary vector designated as the filter, the process-target multidimensional vector expressing a process-target spectrum obtained or derived from the three-dimensional chromatogram data obtained for the sample to be analyzed, and for determining the presence or absence of an impurity other than the target component in the process-target spectrum based on a result of the calculation.
For example, the “third dimension” in the present context is the wavelength or mass-to-charge ratio, while the “three-dimensional chromatogram data” are a set of data obtained by repeatedly acquiring an absorption spectrum with a multichannel detector or similar detector, or a set of data obtained by repeatedly acquiring a mass spectrum with a mass spectrometer, for a sample containing various components temporally separated by a column of a chromatograph (LC or GC). The “three-dimensional chromatogram data” may also be a set of data obtained with a multichannel detector or mass spectrometer for a sample introduced by the FIA method without being separated into components, instead of the sample which has passed through the column of a chromatograph.
In the chromatogram data processing system according to the present invention, a spectrum which shows the relationship between the third dimension and the signal intensity (e.g. an absorption spectrum or mass spectrum) is expressed by a vector and handled as the multidimensional vector. For example, consider the case of an absorption spectrum. An absorption spectrum is a set of values showing the absorbance at discrete wavelengths. Therefore, the absorption spectrum can be expressed as (a(λ1), a(λ2), a(λ3), . . . , a(λn)), and a multidimensional vector with a(λm) as its elements can be defined, where a(λm) represents the absorbance at wavelength λm (m = 1 . . . n).
With I denoting the process-target multidimensional vector which expresses the process-target spectrum at a specific point in time of the measurement, A denoting the multidimensional vector which expresses the spectrum of the target component, and B denoting the multidimensional vector which expresses the spectrum of the impurity, the process-target multidimensional vector can be expressed as a vector operation by the following equation (1):
I = A + B    (1)
Suppose that vector B expressing the spectrum of the impurity is decomposed into vector Ba which is parallel to vector A expressing the spectrum of the target component and vector Bo which is orthogonal to vector A. Suppose there is also another multidimensional vector F orthogonal to vector A. Since any two mutually orthogonal vectors have an inner product of zero, the inner product of the vectors F and Ba equals zero. Accordingly, the inner product of the process-target multidimensional vector I and vector F equals that of the vectors Bo and F. That is to say, the following equation (2) holds true:
I·F = Bo·F    (2)
Since the length of vector Bo is proportional to that of vector B expressing the spectrum of the impurity, the right-hand side of equation (2), Bo·F, is also proportional to the length of vector B. Accordingly, the inner product on the left-hand side of equation (2), I·F, is also proportional to the length of vector B expressing the spectrum of the impurity. This means that the inner product I·F can be used as an index value u which represents the amount of impurity. Accordingly, in the chromatogram data processing system according to the present invention, the filter creator calculates an auxiliary vector F orthogonal to the principal vector A expressing the spectrum of the target component, and designates it as the filter for impurity extraction. The impurity presence information acquirer calculates the inner product of vector I expressing the process-target spectrum obtained or derived from the three-dimensional chromatogram data and vector F designated as the filter, and determines whether or not an impurity exists based on the result of the calculation.
As one typical mode, the impurity presence information acquirer may be configured so that, for each of the process-target spectra obtained at the respective points in time of the measurement with the passage of time, it calculates the inner product of vector I expressing the spectrum and vector F designated as the filter, observes the change in the value of the inner product along the time series, and determines that an impurity other than the target component exists when, for example, a waveform similar to a chromatogram peak has appeared.
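As a hedged illustration of this time-series observation, the following Python sketch builds a synthetic three-dimensional data set (Gaussian elution profiles and random spectra are assumptions made for the example, not values from the system described here), applies a filter orthogonal to the target spectrum, and flags an impurity when the index value u(t) shows a peak-like excursion:

```python
import numpy as np

t = np.linspace(0, 10, 200)                     # points in time of the measurement

def gauss(c, w):
    """Illustrative Gaussian elution profile centered at c with width w."""
    return np.exp(-((t - c) / w) ** 2)

n = 32                                          # number of wavelength channels
rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=n))                  # assumed spectrum of the target component
B = np.abs(rng.normal(size=n))                  # assumed spectrum of an impurity

# Synthetic three-dimensional data: target elutes at t = 5, impurity at t = 6.
I_t = np.outer(gauss(5.0, 0.8), A) + np.outer(gauss(6.0, 0.3), B)

# Filter orthogonal to A, obtained by removing the A-parallel component.
proj = np.eye(n) - np.outer(A, A) / (A @ A)
F = proj @ B
F /= np.linalg.norm(F)

u = I_t @ F                                     # index value u at each point in time

# A chromatogram-peak-like waveform in u(t) indicates an impurity.
has_impurity = u.max() > 5 * np.median(np.abs(u))
assert has_impurity
assert abs(t[np.argmax(u)] - 6.0) < 0.2         # the peak sits at the impurity's elution time
```

The simple threshold rule on u(t) is only a stand-in for whatever peak-detection logic an actual implementation would use.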
In the chromatogram data processing system according to the present invention, the filter creator calculates, as the filter for impurity extraction, the auxiliary vector orthogonal to the multidimensional principal vector. There are many vectors orthogonal to a given vector in a multidimensional vector space. Accordingly, the filter creator should preferably determine the direction of the auxiliary vector F so that the cosine similarity index between vector Bo originating from the spectrum of the impurity and the auxiliary vector F designated as the filter will be maximized, i.e. as close to “1” as possible. By this operation, the SN ratio of the index value u of the amount of impurity expressed by equation (2) becomes maximized or close to that level, which improves the correctness of the determination of the presence or absence of non-target components.
To calculate the cosine similarity index, vector Bo needs to be calculated. This can be analytically determined as follows:
Bo = I − αA    (3)
α = (I·A)/(A·A)
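The relationships in equations (1)-(3) can be checked numerically. The following Python sketch uses synthetic spectra (the vectors A, B, I, Bo, F and the scalar α follow the notation of the text; the random spectra themselves are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
A = np.abs(rng.normal(size=n))     # spectrum of the target component
B = np.abs(rng.normal(size=n))     # spectrum of the impurity
I = A + B                          # eq. (1): process-target spectrum

alpha = (I @ A) / (A @ A)          # eq. (3)
Bo = I - alpha * A                 # residual orthogonal to A

F = Bo / np.linalg.norm(Bo)        # auxiliary vector used as the filter

assert abs(Bo @ A) < 1e-9          # Bo is orthogonal to A
assert np.isclose(I @ F, Bo @ F)   # eq. (2): I·F = Bo·F

# I·F is proportional to the amount of impurity: doubling B doubles it.
u1 = I @ F
u2 = (A + 2 * B) @ F
assert np.isclose(u2, 2 * u1)
```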
When the inner product of the vectors I and F is calculated in the previously described manner for each of the process-target spectra obtained at the respective points in time of the measurement, the vector F obtained at each point in time of the measurement may be used, or alternatively, one or more representative vectors F may be used.
For example, as one embodiment, the filter creator may determine an average vector of a plurality of vectors which are the filters for impurity extraction created at the respective points in time of the measurement, and the impurity presence information acquirer may use the average vector in calculating the inner product for each vector which expresses the process-target spectrum obtained at each point in time of the measurement.
By this configuration, a vector which is robust against noise can be used as the filter for impurity extraction, so that the presence or absence of an impurity can be correctly determined even when a noise component is superposed on the data.
As another embodiment, the filter creator may select a vector having the largest norm from among a plurality of vectors which are the filters for impurity extraction created at the respective points in time of the measurement, and the impurity presence information acquirer may use the selected vector in calculating the inner product for each vector which expresses the process-target spectrum at each point in time of the measurement.
If there are a plurality of impurities, the auxiliary vector which is the filter for impurity extraction at each point in time of the measurement will be a mixture of the signals originating from a plurality of spectra. In such a case, a vector obtained by a simple averaging operation may not correctly show the presence of the impurities. Accordingly, as still another embodiment, the filter creator may compute a cluster mean for a plurality of vectors which are the filters for impurity extraction created at the respective points in time of the measurement, and the impurity presence information acquirer may use the vector of the cluster mean in calculating the inner product for each vector which expresses the process-target spectrum at each point in time of the measurement.
For obtaining the cluster mean, the k-means clustering, mean shift or similar methods can be used. It is also possible to use a smoothing filter in which time-series fluctuations are taken into account, such as the moving average, bilateral filter, Kalman filter or particle filter.
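As a rough illustration of the cluster-mean approach, the sketch below groups synthetic per-time filter vectors with a minimal Lloyd's-algorithm k-means (k = 2, deterministic initialization). The two underlying filter directions and the noise level are assumptions made for the example; a production implementation would use one of the methods named above:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16
B1 = rng.normal(size=n)
B1 /= np.linalg.norm(B1)            # assumed filter direction of impurity 1
B2 = rng.normal(size=n)
B2 /= np.linalg.norm(B2)            # assumed filter direction of impurity 2

# F(t): 30 noisy samples of each direction, as if taken at successive time points.
F_t = np.vstack([B1 + 0.05 * rng.normal(size=(30, n)),
                 B2 + 0.05 * rng.normal(size=(30, n))])

def kmeans2(X, iters=20):
    """Minimal Lloyd's algorithm with k = 2 and a deterministic initialization."""
    centers = np.stack([X[0], X[-1]])
    for _ in range(iters):
        d0 = ((X - centers[0]) ** 2).sum(axis=1)
        d1 = ((X - centers[1]) ** 2).sum(axis=1)
        labels = (d1 < d0).astype(int)
        centers = np.stack([X[labels == 0].mean(axis=0),
                            X[labels == 1].mean(axis=0)])
    return centers

centers = kmeans2(F_t)
# Each cluster mean should be close to one of the underlying filter directions.
sims = np.abs(centers @ np.vstack([B1, B2]).T)
assert sims.max(axis=1).min() > 0.9
```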
As still another embodiment of the chromatogram data processing system according to the present invention, the filter creator may designate, as the filter for impurity extraction, a vector obtained by multiplying the vector expressing the spectrum of the target component by a predetermined constant and subtracting the multiplied vector from the vector expressing the process-target spectrum. In other words, from equation (3), the vector which expresses the filter in this case is Bo itself.
In this case, the impurity presence information acquirer may calculate the secondary norm of the vector created as the filter for impurity extraction by the filter creator and use the secondary norm in place of the inner product to determine the presence or absence of an impurity in the process-target spectrum. This enables an easy and fast calculation of the index value of the amount of impurity. This is particularly advantageous in the previously described case of calculating the index value of the amount of impurity for each of the process-target spectra obtained at the respective points in time of the measurement with the passage of time.
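A minimal sketch of this norm-based variant, assuming synthetic spectra (the function name impurity_index is illustrative, not from the system described here):

```python
import numpy as np

def impurity_index(I, A):
    """Euclidean (secondary) norm of the residual after removing the A-parallel part of I."""
    alpha = (I @ A) / (A @ A)          # as in eq. (3)
    return np.linalg.norm(I - alpha * A)

rng = np.random.default_rng(3)
A = np.abs(rng.normal(size=32))        # assumed target-component spectrum
B = np.abs(rng.normal(size=32))        # assumed impurity spectrum

assert impurity_index(A * 2.5, A) < 1e-9   # pure target: index is ~0 at any concentration
u1 = impurity_index(A + B, A)
u2 = impurity_index(A + 2 * B, A)
assert np.isclose(u2, 2 * u1)              # index scales with the amount of impurity
```

Because no separate filter vector needs to be stored or multiplied, this form is convenient when the index is recomputed at every point in time of the measurement.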
The chromatogram data processing system according to the present invention may preferably be configured so that, if it is determined by the impurity presence information acquirer that an impurity is present, a spectrum expressed by the vector created as the filter for impurity extraction by the filter creator (e.g. vector Bo) is designated as a residual spectrum, and the processes performed by the filter creator and the impurity presence information acquirer are repeated using the residual spectrum as the process-target spectrum.
By this configuration, when there are a plurality of impurities mixed in the sample, even if all of them cannot be detected by a single processing operation by the filter creator and the impurity presence information acquirer, the remaining impurities can be detected in a stepwise manner while the process is repeated a plurality of times.
In the chromatogram data processing system according to the present invention, it is basically preferable to use, as vector A, a vector which truly expresses the spectrum of the target component. However, in general, the exact spectrum of the target component is often unknown. Accordingly, it is common to use a spectrum which can be regarded as the target component, and not the true spectrum of the target component.
As one preferable mode, the filter creator may regard, as the spectrum of the target component, a spectrum based on data obtained within a period of time which is estimated to include the target component free of impurities among the three-dimensional chromatogram data obtained for the sample to be analyzed, and create a vector expressing this spectrum as the principal vector, i.e. vector A. The position where a target component free of impurities is estimated to be present may be located by an analysis operator, although the position may be determined automatically by examining the shape of the chromatogram peak.
As another mode, the filter creator may designate, as the principal vector (i.e. vector A), a spectrum having the largest norm when expressed in the form of a vector among the spectra based on the three-dimensional chromatogram data obtained for the sample to be analyzed.
This makes it possible to perform the impurity determination process without previously determining the spectrum of the target component.
Basically, vector A should be a vector which expresses an impurity-free spectrum. However, in some cases, the spectrum contains an impurity, and the consequently created filter for impurity extraction also contains the impurity. In such a case, plotting the inner product I·F in time-series order results in a peak appearing before and after the point in time of the measurement at which the spectrum selected as vector A is obtained. This is due to the fact that the influence of the additional deduction of the impurity from the spectrum selected as vector A appears before and after the point in time of the measurement. This fact can also be used to determine whether or not an impurity is present at a certain point in time of the measurement or within a specific range of time.
Thus, the chromatogram data processing system according to the present invention may be configured so that:
the filter creator designates, as the spectrum of the target component, a spectrum based on data obtained within a specific period of time among the three-dimensional chromatogram data obtained for the sample to be analyzed, multiplies a vector expressing the spectrum of the target component by a predetermined constant, and designates, as the filter for impurity extraction, a vector obtained by subtracting the multiplied vector from the vector expressing the process-target spectrum; and
the impurity presence information acquirer designates, as a residual spectrum, a spectrum expressed by the vector created as the filter for impurity extraction by the filter creator for each of the spectra obtained within a predetermined range of time including the specific period of time, and determines whether or not an impurity is present within the specific period of time by determining whether or not a peak appears before and after the specific period of time on a chromatogram created for the predetermined range of time based on the residual spectrum.
The chromatogram data processing system according to the present invention can correctly and stably determine whether or not an impurity is contained in a target peak on a chromatogram created based on three-dimensional chromatogram data collected with a chromatograph in which a multichannel detector (e.g. PDA detector) or mass spectrometer is used as the detector. In particular, the presence or absence of a superposition of an impurity can be correctly and stably determined even in the case where it is difficult to appropriately determine the presence or absence of the superposition of the impurity by the previously described impurity determination method which uses the differential chromatogram.
One embodiment of the chromatogram data processing system according to the present invention is described with reference to the attached drawings.
As described earlier, the present chromatogram data processing system has the function of determining whether or not an impurity is contained in a peak on a chromatogram (see FIG. 8B) created based on three-dimensional chromatogram data (see FIG. 8A) which have been collected, for example, by using a liquid chromatograph having a PDA detector. Initially, the principle of the impurity determination process in the chromatogram data processing system according to the present invention is described.
In the present impurity determination process, both a set of process-target spectra sequentially obtained with the passage of time (in the following description, a “spectrum” means an absorption spectrum with the horizontal axis indicating wavelength and the vertical axis indicating absorbance; however, as already noted, the description similarly holds true for a mass spectrum or other types of spectra) and a spectrum of the target component are used to create a graph with a high SN ratio which shows the temporal change in an index value of the amount of impurity other than the target component. Whether or not an impurity is contained in a peak on the chromatogram is determined by examining whether or not a chromatogram-peak-like signal exists on the graph.
Suppose that vector I expresses a process-target spectrum at a certain point in time of the measurement, and vector A expresses a spectrum of the target component (or a spectrum which can be regarded as a spectrum of the target component). Typically, the process-target spectrum is a spectrum which shows the absorbance at a certain point in time extracted from three-dimensional chromatogram data (such as shown in FIG. 8A). However, as will be described later, when the impurity separation process is repeated, the spectrum which has undergone the separation process is handled as the process-target spectrum.
In the present description, the spectrum as shown in FIG. 3 is regarded as a set of absorbance data at discrete wavelengths within a predetermined wavelength range. The absorption spectrum is expressed by (a(λ1), a(λ2), a(λ3), . . . , a(λn)), where a(λm) represents the absorbance at wavelength λm (m = 1 . . . n). A spectrum in this notation can be expressed as a vector in an n-dimensional space. In other words, this spectrum is a multidimensional vector with a(λ1), a(λ2), a(λ3), . . . as its elements. Similarly, the spectrum of the impurity is expressed by vector B. As already noted, vector I which expresses the process-target spectrum can be expressed by equation (1):
I = A + B    (1)
FIG. 4 shows the two-dimensional vector space, which is a simple version of the n-dimensional vector space. The relationship of I, A and B given by equation (1) is as shown in FIG. 4.
Suppose that vector B expressing the spectrum of the impurity is decomposed into vector Ba which is parallel to vector A expressing the spectrum of the target component and vector Bo which is orthogonal to vector A. Suppose also there is another multidimensional vector F orthogonal to vector A. Since vector Ba is parallel to vector A while vector F is orthogonal to vector A, vectors F and Ba are orthogonal to each other. Since any two mutually orthogonal vectors have an inner product of zero, the inner product of vectors F and Ba equals zero. Accordingly, the inner product of the multidimensional vector I to be processed and vector F equals that of the vectors Bo and F. That is to say, the already mentioned equation (2) holds true:
I·F = Bo·F    (2)
Since the length of vector Bo is naturally proportional to that of vector B expressing the spectrum of the impurity, the right-hand side of equation (2), Bo·F, is proportional to the length of vector B, i.e. the amount of impurity. Accordingly, the inner product on the left-hand side of equation (2), I·F, can be used as an index value u which represents the amount of impurity. In this operation, vector F is used for extracting impurity components from vector I representing the process-target spectrum. Therefore, vector F is designated as the filter for impurity extraction. For example, in the case of determining whether or not an impurity is superposed on a peak originating from a target component which appears on an appropriate type of chromatogram (such as a waveform chromatogram), it is possible to conclude that an impurity is present if a chromatogram-peak-like waveform has appeared on a graph which shows the temporal change in the index value u (= inner product I·F) over the range from the beginning point to the ending point of the peak.
In an n-dimensional vector space having an extremely large value of n, there is a virtually infinite number of vectors orthogonal to vector A expressing the spectrum of the target component. In the case of using the inner product I·F as the index value u of the amount of impurity, it is preferable to determine the direction of vector F expressing the filter for impurity extraction as follows:
Consider the case where white noise is superposed on vector I expressing the process-target spectrum. The signal component which is included in the inner product due to this white noise is independent of the deflection angle of vector F and is proportional to the length of this vector. The closer to a right angle the angle made by vector Bo originating from the impurity and vector F becomes, the greater influence the signal component included in the inner product due to the white noise has on the extraction of the impurity component. This conversely means that vectors F and Bo should be as parallel to each other as possible in order to increase the SN ratio of the signal originating from the impurity in the inner product I·F. In other words, it is preferable to determine the direction of vector F relative to Bo so that their cosine similarity index becomes the maximum or as close to the maximum as possible. To this end, it is naturally necessary to determine vector Bo, which can be analytically calculated by the already mentioned equation (3):
Bo = I − αA    (3)
α = (I·A)/(A·A)
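The SN-ratio argument above can be illustrated numerically: for unit-length filters orthogonal to A, the white-noise contribution to the inner product has the same variance in every direction, while the signal term Bo·F is largest when F is parallel to Bo (cosine similarity of 1). All quantities in this Python sketch are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
A = rng.normal(size=n)                     # assumed target spectrum
B = rng.normal(size=n)                     # assumed impurity spectrum

P = np.eye(n) - np.outer(A, A) / (A @ A)   # projector onto the complement of A
Bo = P @ B                                 # impurity component orthogonal to A

F_best = Bo / np.linalg.norm(Bo)           # filter parallel to Bo (cosine similarity 1)
F_other = P @ rng.normal(size=n)           # some other unit filter orthogonal to A
F_other /= np.linalg.norm(F_other)

# Signal term: by the Cauchy-Schwarz inequality the cosine-similar filter wins.
assert Bo @ F_best >= abs(Bo @ F_other)

# Noise term: for unit-variance white noise, noise·F has variance ~1 for any unit F.
noise = rng.normal(size=(20000, n))
v_best = np.var(noise @ F_best)
v_other = np.var(noise @ F_other)
assert abs(v_best - v_other) < 0.1
```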
In some cases, a variation occurs in the spectrum of the impurity due to a pH change of the sample liquid (in the case of a liquid chromatograph), non-linearity of the detector or other factors, which may cause a variation in the index value u of the amount of impurity expressed as the inner product I·F and the consequent occurrence of a peak-like false waveform on the graph of the index value u. However, the variation of the spectrum due to the aforementioned factors shows a certain definite pattern of change, so that the change in the waveform which occurs in the index value u can be discriminated from the change in the waveform due to the mixture of an impurity. Accordingly, when displaying the result of the impurity determination process, it is preferable to show both the index value u of the amount of impurity expressed by the inner product I·F (or a graph showing the temporal change in the index value u) and the spectrum expressed as vector Bo, so that an analysis operator can visually examine the result and determine whether or not an impurity is truly superposed and what characteristics the spectrum of the detected impurity has.
The spectrum expressed by the thereby displayed vector Bo is not the intact spectrum of the impurity; it is the spectrum from which the vector component Ba parallel to vector A has been removed. Accordingly, attention must be paid to the fact that, when a spectrum of a pure substance recorded in a database is additionally displayed or a database search is performed in order to identify an impurity based on the spectrum concerned or compare this spectrum with another one, it is necessary to previously remove the component parallel to vector A from the spectrum of the pure substance.
In the case where the graph showing the temporal change in the index value u expressed by the inner product over a certain period of time is created in the previously described manner, the process-target spectrum exists at each point in time of the measurement within that period of time, and the inner product is calculated for each of those spectra. The vectors I and F with the time element taken into account are hereinafter denoted by I(t) and F(t), respectively, to show that these vectors I and F include time as one element. Vector I(t) which expresses the process-target spectrum exists at every point in time of the measurement, whereas vector F(t) which expresses the filter is not always necessary for each point in time of the measurement. There are the following two major forms of F(t) which can be used in calculating the inner product I(t)·F(t) at each point in time of the measurement:
(1) Vector F(t) calculated at each point in time of the measurement is directly used; i.e. the inner product I(t)·F(t) is calculated by multiplying vector I(t) which expresses the process-target spectrum obtained at each point in time of the measurement by F(t).
(2) Instead of directly using vector F(t) calculated at each point in time of the measurement, a vector F(t)′ for the calculation of the inner product is computed from the values of F(t) obtained at the respective points in time of the measurement. For example, an average of the values of vector F(t) obtained at a plurality of points in time of the measurement within a predetermined period of time is calculated as vector F, and vector I(t) which expresses the process-target spectrum obtained at each point in time of the measurement is multiplied by vector F to calculate the inner product I(t)·F. By this method, vector F which expresses an average filter having a high level of robustness against noise can be obtained.
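The benefit of the averaged filter in form (2) can be sketched as follows, assuming synthetic per-time filter vectors whose noise is kept orthogonal to A (the quantity Bo_true and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 32
A = np.abs(rng.normal(size=n))          # assumed target spectrum
B = np.abs(rng.normal(size=n))          # assumed impurity spectrum

P = np.eye(n) - np.outer(A, A) / (A @ A)
Bo_true = P @ B                         # noise-free filter direction

# F(t) at 50 points in time, each corrupted by noise projected orthogonal to A.
F_t = Bo_true + 0.3 * rng.normal(size=(50, n)) @ P
F_avg = F_t.mean(axis=0)                # averaged filter, robust against noise

def angle_error(F):
    """1 − cosine similarity with the true direction (0 when perfectly aligned)."""
    c = (F @ Bo_true) / (np.linalg.norm(F) * np.linalg.norm(Bo_true))
    return 1 - c

# The averaged filter is closer to the true direction than a typical single F(t).
assert angle_error(F_avg) < np.mean([angle_error(f) for f in F_t])
```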
If a plurality of impurities are contained, vector F(t) at each point in time of the measurement will be a complex mixture of signals originating from the spectra of the plurality of impurities, since those impurities do not appear at the same timing. In such a case, the previously described simple averaging of the vectors does not provide a vector F which expresses a proper filter. Therefore, it is preferable to use the so-called “mean clustering” or a similar method instead of the simple averaging. For obtaining the cluster mean, commonly known techniques can be used, such as the k-means clustering or mean shift methods, as well as various kinds of smoothing filters in which time-series fluctuations are taken into account, such as the moving average, bilateral filter, Kalman filter or particle filter (sequential Monte Carlo method).
In the previous description, vector A which expresses the spectrum of the target component is used to calculate the index value u of the amount of impurity. However, in many cases, the exact spectrum of the target component is unknown. Furthermore, acquiring this spectrum requires a considerable amount of time and labor. Accordingly, in practice, it is preferable to create a pseudo spectrum of the target component from the signals obtained by the analysis on the sample (i.e. from the spectra obtained at the respective points in time of the measurement). One example is as follows:
In general, the concentration of an impurity is lower than that of the target component. Therefore, as shown in FIG. 7, the peak width of an impurity on a chromatogram is narrower than that of the target component. From this fact, it is highly likely that the spectra obtained at the respective points in time of the measurement by the analysis include both a spectrum which is composed of the spectrum of the target component with the spectrum of an impurity mixed, and a spectrum which is purely composed of the target component. Accordingly, for example, it is possible to extract, from the chromatogram data obtained by the analysis, a piece of data included within a specific range of time which is most likely to include the target component with no mixture of impurities, and to regard a spectrum obtained from the extracted data as the spectrum of the target component. It is also possible to smooth the data obtained by the analysis along the temporal direction before extracting the data from a specific range of time, or average the data obtained by the analysis within a specific range of time, and regard the thereby obtained spectrum as the spectrum of the target component. The range of time for the extraction of the data may be specified by the analysis operator. Alternatively, it may be selected in such a manner that the period of time which includes the peak of an impurity is located by a determination process (which will be described later) and a range of time within which impurities are least likely to be present is automatically selected based on the result of the determination.
If the analysis is merely aimed at determining the presence or absence of the superposition of an impurity and it is unnecessary to accurately determine the content of the impurity, it is of no consequence that the peak which occurs in the graph showing the temporal change in the index value u of the amount of impurity is split into two (the reason for this splitting will be described later). In such a case, it is possible to allow for the mixture of impurities or the fluctuation of the spectrum, and simply select, as the spectrum of the target component, the spectrum having the highest SN ratio from among the spectra obtained by the analysis, which is normally a spectrum having the largest norm when expressed in the form of a vector.
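A hedged sketch of this largest-norm selection, using a synthetic chromatogram (the peak positions, widths, heights and noise level are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 100)             # points in time of the measurement
n = 24                                  # number of wavelength channels
target = np.abs(rng.normal(size=n))     # assumed target spectrum
impurity = np.abs(rng.normal(size=n))   # assumed impurity spectrum

def peak(c, w, h):
    """Illustrative Gaussian elution profile: center c, width w, height h."""
    return h * np.exp(-((t - c) / w) ** 2)

data = (np.outer(peak(5.0, 0.8, 1.0), target)       # major (target) component
        + np.outer(peak(6.2, 0.3, 0.2), impurity)   # minor impurity
        + 0.01 * rng.normal(size=(len(t), n)))      # measurement noise

norms = np.linalg.norm(data, axis=1)
A = data[np.argmax(norms)]              # pseudo target spectrum: largest-norm spectrum

# The selected spectrum sits near the apex of the major peak.
assert abs(t[np.argmax(norms)] - 5.0) < 0.3
```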
Hereinafter described with reference to FIGS. 5A and 5B is a situation which occurs in the case where the filter for impurity extraction created from a spectrum obtained at a certain point in time of the measurement, or from a plurality of spectra obtained at the points in time of the measurement within a certain range of time, is a filter created from a spectrum which contains an impurity and is not a single-component spectrum. FIGS. 5A and 5B show one example of the chromatogram waveform and a waveform which shows the temporal change in an index value of the amount of impurity based on a residual spectrum.
The index value denoted by P1 in FIG. 5A is a curve showing the inner product I(t)·F plotted against time, with the filter-expressing vector F calculated under the condition that the spectrum obtained at the measurement point in time of 42 (this spectrum contains an impurity mixed in the target component) is regarded as the spectrum of the target component. By comparison, the index value denoted by P2 in FIG. 5B is a curve showing the inner product I(t)·F plotted against time, with the filter-expressing vector F calculated under the condition that the spectrum obtained at the measurement point in time of 42 (this spectrum is purely composed of the target component with no impurity contained) is regarded as the spectrum of the target component. In FIG. 5A, two peaks are located before and after the measurement point in time at which the spectrum of the target component is selected. This is the aforementioned splitting of the peak. In this case, it is difficult to determine the amount of impurity, since the shape of the peaks on the graph showing the temporal change in the inner product I(t)·F does not correctly represent the amount of impurity. However, this situation is also useful; i.e. when two peaks are located before and after the target component on the graph of the inner product I(t)·F, it is possible to consider that a spectrum which contains an impurity has been designated as the spectrum of the target component. On the other hand, as shown in FIG. 5B, when a spectrum which contains no impurity is selected as the spectrum of the target component, a peak with a Gaussian waveform appears on the graph showing the temporal change in the inner product I(t)·F. This peak can be considered to be correctly representing the amount of impurity.
In the example shown in FIGS. 5A and 5B, the number of impurities is one. As already noted, the number of impurities mixed in the sample is not always one; there may be a plurality of impurities. Consider the case where two impurities b and c are present in addition to the target component a, with the amount of impurity c being extremely small compared with the amount of the target component a or impurity b. In the case where the average of the values of F(t) at the respective points in time of the measurement within a predetermined range of time is used as vector F which expresses the filter for impurity extraction, the average vector is approximately identical to vector F which is calculated for a spectrum expressed by vector I(t) composed of vector A which expresses the spectrum of the target component a and vector B which expresses the spectrum of the impurity b. In that case, vector F is parallel to vector Bo. In this situation, if vector Bo is orthogonal to vector C which expresses the spectrum of the impurity c, it is absolutely impossible to detect vector C on the graph showing the temporal change in the inner product I(t)·F. Even in the case where vector F(t) which expresses the filter for impurity extraction is calculated for each point in time of the measurement, the extremely low peak originating from the impurity c is difficult to detect; for example, if the peak originating from the impurity c is superposed on the base portion of the peak originating from the impurity b or a similar portion where the signal significantly fluctuates, the peak will be extremely difficult to detect. Accordingly, in such a case, i.e. when the mixture of a plurality of impurities is expected and each of them needs to be detected, it is preferable to follow the hereinafter described procedure:
As can be understood from the aforementioned equation (3), I−αA represents the amount of impurity. Therefore, the process expressed by equation (2), i.e. the process of multiplying the process-target spectrum by the filter can be considered to be an impurity separation process. The spectrum expressed by vector I−αA or I(t)−αA can be considered to be a residual spectrum which remains after the removal of the target component or one or more impurities. If the sample contains a plurality of impurities, it is preferable to perform the impurity separation process in such a manner that I(t)−αA (the vector expressing the residual spectrum) calculated in the nth process is used as vector I(t) expressing the process-target spectrum for the (n+1)th process. Such a method is hereinafter called the “multistage spectrum residue method”.
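The multistage spectrum residue method can be sketched as an iterative projection. In this illustrative Python version (synthetic, noise-free data; the stage-wise choice of the reference spectrum by largest norm is an assumption consistent with the selection rule described earlier), three components are fully removed in three stages:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 120)              # points in time of the measurement
n = 40                                   # number of wavelength channels

def peak(c, w, h):
    """Illustrative Gaussian elution profile: center c, width w, height h."""
    return h * np.exp(-((t - c) / w) ** 2)

specs = [np.abs(rng.normal(size=n)) for _ in range(3)]   # target + two impurities
params = [(4.0, 0.8, 1.0), (6.0, 0.4, 0.3), (7.5, 0.3, 0.1)]
data = sum(np.outer(peak(c, w, h), s) for (c, w, h), s in zip(params, specs))

R = data.copy()                          # residual spectra, one row per time point
for stage in range(3):
    A = R[np.argmax(np.linalg.norm(R, axis=1))]   # largest-norm residual as reference
    alpha = (R @ A) / (A @ A)                     # per-time α, as in eq. (3)
    R = R - np.outer(alpha, A)                    # next-stage residuals I(t) − αA

# After three stages every component has been separated out.
assert np.linalg.norm(R, axis=1).max() < 1e-6
```

Because each stage's reference spectrum is itself orthogonal to the references of the earlier stages, repeating the process m+1 times on m components drives the residual to (numerical) zero.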
FIG. 6 shows signal waveforms based on the residual spectrum obtained when the multistage spectrum residue method is performed. In this figure, “O” denotes the original chromatogram waveform, while Q1–Q4 denote |I(t)| for n = 1–4, respectively. Q1 should have a small peak similar to the one observed in Q3. However, in the waveform of Q1, it is difficult to visually locate the peak observed in Q3, which should also be contained in Q1. Such a peak of the impurity which cannot be initially located can be detected by using the previously mentioned multistage spectrum residue method.
In the multistage spectrum residue method, it is preferable to determine the presence or absence of an impurity at each stage by examining whether or not a peak is present in the difference between |I(t)| obtained in the nth process and |I(t)| obtained in the (n+1)th process (“spectrum residue difference”).
For example, in FIG. 6, for the impurity which is detected as a convex portion in the left part of the curve denoted by Q2, it is difficult to determine, from Q2, whether the peak is a true peak or a noise fluctuation, since the signal of Q3 is mixed in Q2. However, in the waveform denoted by Q4 obtained by removing Q3, it is possible to recognize the presence of a component which is evidently located on only the left side. It should be noted that, in this removing operation, although the peak origin is the same, the peak height is multiplied by a certain constant, since there is a difference between vector F expressing the Q2-based filter and vector F expressing the Q3-based filter. Accordingly, in the actual removing operation, it is preferable to determine the most suitable constant by the least square method focused on only the peak portion, and perform the removing process after multiplying each intensity value by that constant. Needless to say, instead of the simple least square method, a commonly known peak-height deduction method which can deal with baseline fluctuations may be used; for example, the least square method may be applied on a waveform obtained by calculating the second derivative of F(t), or the peak height may be deduced using a matched filter with the kernel created by normalizing the extracted peak.
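The "most suitable constant" determined by the least square method has a closed form: the value c minimizing the sum of squared differences between the observed peak and c times the reference peak is their inner product divided by the reference's squared norm. A minimal sketch (names are ours; restricting both inputs to the peak portion is assumed to be done by the caller):

```python
def peak_scale(reference, observed):
    """Least-squares constant c minimizing sum((observed - c*reference)^2).
    In practice both inputs would be restricted to the peak portion."""
    num = sum(r * o for r, o in zip(reference, observed))
    den = sum(r * r for r in reference)
    return num / den
```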
By repeating the previously described process until a residual signal waveform which has no noticeable peak, as in the waveform denoted by Q4 in FIG. 6, is obtained, the target component and the impurities can be completely separated even when there is a plurality of impurities. In the case where a measurement signal obtained for a sample containing m kinds of substances is processed by the multistage spectrum residue method, the impurity separation process only needs to be repeated m+1 times to separate the m kinds of substances, exclusive of the occurrence of false impurity peaks due to the pH fluctuation, low-linearity of the detector or other factors.
Next, one embodiment of the liquid chromatograph provided with a chromatogram data processing system according to the present invention is described with reference to FIGS. 1 and 2. FIG. 1 is a schematic configuration diagram of the liquid chromatograph in the present embodiment.
In an LC unit 1 for collecting three-dimensional chromatogram data, a liquid-sending pump 12 suctions a mobile phase from a mobile-phase container 11 and sends it to an injector 13 at a constant flow rate. The injector 13 injects a sample liquid into the mobile phase at a predetermined timing. The sample liquid is transferred by the mobile phase to a column 14. While the sample liquid is passing through the column 14, the components in the sample liquid are temporally separated and eluted from the column 14. A PDA detector 15 is provided at the exit end of the column 14. In the PDA detector 15, light is cast from a light source (not shown) into the eluate. The light which has passed through the eluate is dispersed into component wavelengths, and the intensities of those wavelengths of light are almost simultaneously detected with a linear sensor. The detection signals repeatedly produced by the PDA detector 15 are converted into digital data by an analogue-to-digital (A/D) converter 16 and sent to a data processing unit 2 as three-dimensional chromatogram data.
The data processing unit 2 includes: a chromatogram data storage section 21 for storing three-dimensional chromatogram data; a chromatogram creator 22 for creating, from three-dimensional chromatogram data, a wavelength chromatogram which shows the temporal change in the absorbance at a specific wavelength; a peak detector 23 for detecting a peak in the wavelength chromatogram; and an impurity determination processor 24 for determining whether or not an impurity is present in a target peak specified by an analysis operator among the detected peaks. This impurity determination processor 24 is the functional block which performs the previously described characteristic process. Additionally, an input unit 3 and display unit 4 are connected to the data processing unit 2. The input unit 3 is operated by the analysis operator to enter and set items of necessary information for the data processing, such as the absorption wavelength of the target component. The display unit 4 is used for displaying various items of information, such as a chromatogram, absorption spectrum and the result of impurity determination.
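To make the division of labour concrete, the chromatogram creator's basic operation, slicing a single-wavelength trace out of the time-by-wavelength data, could look roughly like this (a sketch only; the data layout and names are our assumptions, not taken from this specification):

```python
def wavelength_chromatogram(data, wavelengths, target_wl):
    """data: one absorption spectrum (list of absorbances) per time point.
    Returns the temporal change in absorbance at target_wl, i.e. the
    wavelength chromatogram."""
    j = wavelengths.index(target_wl)
    return [spectrum[j] for spectrum in data]
```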
A portion or the entirety of the functions of the data processor 2 and control unit (not shown) can be realized by running a dedicated controlling and processing software program installed on a personal computer or workstation. In this case, the input unit 3 includes the keyboard, pointing device (e.g. mouse) and other devices which are standard equipment of personal computers or workstations, while the display unit 4 is a commonly used liquid crystal display or similar device.
Next, the characteristic data processing operation in the liquid chromatograph of the present embodiment is described with reference to the flowchart shown in FIG. 2.
A chromatographic analysis for a target sample is performed in the LC unit 1. Three-dimensional chromatogram data (see FIG. 8A) showing the temporal change in the absorption spectrum within a predetermined wavelength range are sent from the PDA detector 15 to the data processing unit 2 and stored in the chromatogram data storage section 21. In the chromatogram data creator 22, a wavelength chromatogram at the specific wavelength or within the specific wavelength range is created based on the stored three-dimensional chromatogram data. The peak detector 23 performs a process for detecting a peak on the chromatogram. Using the input unit 3, the analysis operator designates one of the detected peaks and issues a command for executing the impurity determination process, whereupon a process which will be hereinafter described is performed:
Initially, for each point in time of the measurement within the range between the beginning point ts and the ending point te of the designated peak, the impurity determination processor 24 reads the chromatogram data (spectrum data) from the chromatogram data storage section 21 (Step S1), whereby vector I(t) which expresses the process-target spectrum is prepared (where t is within a range from ts to te).
Next, the impurity determination processor 24 sets the spectrum of the target component for calculating vector A (Step S2). As stated earlier, there are several methods for setting the spectrum of the target component. If the spectrum of the target component is already stored in a database or other data sources, that spectrum can be simply retrieved. In the present example, to deal with the situation where the spectrum of the target component is unknown and the automatic, repetitive setting of the spectrum is necessary, the technique of selecting the spectrum having the largest norm is used, since this technique requires no manual operation or judgment by the analysis operator and is capable of high-speed processing. According to this technique, the absorption spectrum obtained at the point in time of the measurement at which the largest index value of the amount of impurity u=I(t)·F has been obtained as a result of the previously performed process is directly set as the spectrum of the target component for the next process. In this manner, vector A which expresses the spectrum of the target component is also prepared.
In the first processing, i.e. when the process of Step S2 is performed for the first time, the secondary norm of vector I(t) prepared in Step S1 is calculated, and the spectrum of the target component at the point in time of the measurement at which the secondary norm is maximized is selected. Naturally, it is possible to allow the analysis operator to manually specify the spectrum of the target component. Furthermore, as described earlier, it is also possible to search for spectra which do not contain impurities, and to set as the spectrum of the target component the one having the largest index value of the amount of impurity or the largest value of the secondary norm among the spectra which do not contain impurities.
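The first-pass selection rule, picking the measurement time whose spectrum has the largest secondary norm, amounts to a single argmax; the sketch below uses our own names and plain lists:

```python
def pick_target_index(spectra):
    """Return the index of the spectrum with the largest secondary
    (Euclidean) norm; comparing squared norms selects the same winner,
    so no square root is needed."""
    return max(range(len(spectra)),
               key=lambda i: sum(x * x for x in spectra[i]))
```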
After the process-target spectrum (vector I(t)) and the spectrum of the target component (vector A) have been determined, the filter for impurity extraction is determined in the previously described manner, and the inner product I(t)·F is calculated to remove the spectrum of the target component from the process-target spectrum and thereby determine the residual spectrum which reflects the amount of impurity (Step S3). In the present example, with the importance attached to the speed of computation, the method in which I(t)−αA at each point in time of the measurement is directly used as vector F(t) is adopted. In this case, the computing formulae can be transformed into simple forms; the calculation of the inner product of the vectors I(t)·F, i.e. the index value u of the amount of impurity, can be substituted by the simple calculation of the secondary norm of I(t)−αA. Naturally, various modified methods mentioned earlier may also be used, such as the average value or moving average of vector F(t), instead of determining vector F(t) at each point in time of the measurement.
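The substitution claimed here can be verified directly: when α is chosen by least squares, the residue I−αA is orthogonal to A, so I·(I−αA) equals the squared secondary norm of I−αA (taking the square root at the end recovers the norm itself). A small numerical check, in our own code rather than the patent's:

```python
def impurity_index(spectrum, target):
    """Compute u two ways: as the inner product I·F with F = I - alpha*A,
    and as the squared secondary norm of I - alpha*A. With least-squares
    alpha the two values coincide."""
    alpha = (sum(i * a for i, a in zip(spectrum, target))
             / sum(a * a for a in target))
    f = [i - alpha * a for i, a in zip(spectrum, target)]
    inner = sum(i * x for i, x in zip(spectrum, f))   # I·F
    sq_norm = sum(x * x for x in f)                   # ||I - alpha*A||^2
    return inner, sq_norm
```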
Whether or not a peak originating from an impurity is present is judged by determining whether or not a peak is present in the difference between the residual spectrum determined in the previously described manner and the residual spectrum obtained in the preceding process cycle, i.e. in either the secondary norm of the spectrum residue difference or the square root of the index value of the amount of impurity (√(I(t)·F)) obtained by the calculation in each cycle (Step S4). For white noise, the square root of the amount of impurity or the secondary norm of the spectrum residue difference shows a constant distribution. Therefore, the presence or absence of a peak can be confirmed by examining whether or not there is any value deviating from a certain range based on the average and standard deviation of those values. Needless to say, other methods which employ commonly known algorithms for detecting a chromatogram peak may also be used to confirm the presence or absence of the peak. If it is determined that an impurity peak is present, the process returns from Step S5 to Step S2 to repeat the setting of the spectrum of the target component and the removal of the spectrum of the target component. That is to say, the previously mentioned multistage spectrum residue method is carried out.
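The white-noise criterion described above, flagging any value outside a band around the mean, can be prototyped in a few lines; the three-standard-deviation threshold below is our choice for illustration, not a value fixed by the specification:

```python
def detect_outlier_points(values, k=3.0):
    """Indices of points deviating more than k standard deviations from
    the mean; under the white-noise assumption, an empty result means
    no impurity peak is present."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [i for i, v in enumerate(values) if abs(v - mean) > k * sd]
```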
On the other hand, if it is determined in Step S5 that no impurity peak is present, the ultimate result of the impurity determination process is shown on the display unit 4 based on the already obtained determination results, and if the presence of an impurity has been confirmed, the residue difference of each spectrum is also shown on the display unit 4 (Step S6). Therefore, the analysis operator can not only determine whether or not an impurity is superposed on the target peak but also comprehend the amount of impurity.
It should be noted that the previous embodiment is a mere example of the present invention, and any change, addition or modification appropriately made within the spirit of the present invention will evidently fall within the scope of claims of the present application.
For example, the detector used in the chromatograph for obtaining three-dimensional chromatogram data to be processed by the chromatogram data processing system of the present invention does not need to be a PDA detector or similar multichannel detector; it may alternatively be an ultraviolet visible spectrophotometer, infrared spectrophotometer, near-infrared spectrophotometer, fluorescence spectrophotometer or similar device capable of high-speed wavelength scanning. A liquid chromatograph mass spectrometer using a mass spectrometer as the detector is also available.
The chromatograph may be a gas chromatograph instead of the liquid chromatograph. As already noted, the present invention can also be evidently applied in a system which processes the data obtained by detecting the components in a sample introduced by the FIA method without being separated into components, using a PDA detector, mass spectrometer or other detectors, instead of the data obtained by detecting the sample components separated by the column of the chromatograph.
1 . . . LC Unit
11 . . . Mobile-Phase Container
12 . . . Liquid-Sending Pump
13 . . . Injector
14 . . . Column
15 . . . PDA Detector
16 . . . Analogue-to-Digital (A/D) Converter
2 . . . Data Processor
21 . . . Chromatogram Data Storage Section
22 . . . Chromatogram Creator
23 . . . Peak Detector
24 . . . Impurity Determination Processor
3 . . . Input Unit
4 . . . Display Unit
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a schematic configuration diagram of one embodiment of the liquid chromatograph provided with a chromatogram data processing system according to the present invention.
FIG. 2 is a flowchart showing the operation process for the impurity determination in the liquid chromatograph of the present embodiment.
FIG. 3 shows one example of the absorption spectrum obtained at a certain point in time of the measurement.
FIG. 4 illustrates the principle of the impurity determination process in the present invention.
FIGS. 5A and 5B show one example of the chromatogram waveform and a waveform which shows the temporal change in an index value of the amount of impurity based on a residual spectrum in the liquid chromatograph of the present embodiment.
FIG. 6 shows one example of the change in the waveform showing the temporal change in the index value of the amount of impurity in the case where the impurity separation process is repeated a plurality of times in the liquid chromatograph of the present embodiment.
FIG. 7 illustrates the impurity determination process in the liquid chromatograph of the present embodiment.
FIG. 8A is a model diagram of the three-dimensional chromatogram data obtained with a liquid chromatograph, and FIG. 8B is a waveform chromatogram.
FIGS. 9A and 9B show examples of the absorption spectrum for explaining a problem concerning a conventional impurity determination process which uses the differential chromatogram.
We investigate the power of the propositional proof system R(lin) (Resolution over Linear Equations) introduced by Ran Raz and Iddo Tzameret. R(lin) is a generalization of R (the Resolution system), and it is known that many tautologies which require proofs of exponential complexity in R have polynomially bounded proofs in R(lin). We show that there are sequences of unsatisfiable collections of disjunctions of linear equations which require exponential lower bounds on refutation size in R(lin). After adding the renaming rule, the mentioned collections have polynomially bounded refutations.
Keywords
Resolution Systems, Resolution over Linear Equations, Refutation, Proof Complexity, Hard-Determinable Formula
To cite this article
Anahit Chubaryan, Armine Chubaryan, Arman Tshitoyan, Refutation Of Hard-Determinable Formulas In The System “Resolution Over Linear Equations” And Its Generalization, Pure and Applied Mathematics Journal. Vol. 2, No. 3, 2013, pp. 128-133. doi: 10.11648/j.pamj.20130203.13
References
S. R. Aleksanyan, A. A. Chubaryan, "The polynomial bounds of proof complexity in Frege systems", Siberian Mathematical Journal, vol. 50, № 2, pp. 243-249, 2009.
S. R. Buss, "Some remarks on lengths of propositional proofs", Archive for Mathematical Logic, vol. 34, pp. 377-394, 1995.
An. Chubaryan, "Relative efficiency of proof systems in classical propositional logic", Izv. NAN Armenii, Matematika, vol. 37, № 5, pp. 71-84, 2002.
An. Chubaryan, Arm. Chubaryan, H. Nalbandyan, S. Sayadyan, "A Hierarchy of Resolution Systems with Restricted Substitution Rules", Computer Technology and Application, David Publishing, USA, vol. 3, № 4, pp. 330-336, 2012.
S. A. Cook, R. A. Reckhow, "The relative efficiency of propositional proof systems", Journal of Symbolic Logic, vol. 44, pp. 36-50, 1979.
J. Krajicek, "On the weak pigeonhole principle", Fundamenta Mathematicae, vol. 170, pp. 123-140, 2001.
P. Pudlak, "Lengths of proofs", Handbook of Proof Theory, North-Holland, pp. 547-637, 1998.
Ran Raz, Iddo Tzameret, "Resolution over linear equations and multilinear proofs", Annals of Pure and Applied Logic, vol. 155, no. 3, pp. 194-224, 2008.
G. S. Tseitin, "On the complexity of derivation in the propositional calculus" (in Russian), Zap. Nauchn. Semin. LOMI, Leningrad: Nauka, vol. 8, pp. 234-259, 1968.
Sweet Potato Chips. Drizzle the sweet potatoes with oil, toss, and spread them in a single layer on baking sheets. Add the sweet potatoes to the spice mixture and toss until evenly coated. Place the sweet potatoes on wire racks in a single layer.

In a large bowl, toss the potatoes with oil, salt and pepper. Toss the slices in a touch of olive oil to lightly coat, then sprinkle with salt, such as Diamond Crystal® Kosher Salt. You can cook sweet potato chips using 6 ingredients and 4 steps. Here is how you cook it.
Ingredients of Sweet Potato chips
- 4 medium-sized sweet potatoes.
- 1 tsp salt.
- 1 tsp red chilli powder.
- 2 tbsp singhara atta (water chestnut flour, used in fasting recipes).
- Oil, as required, for frying.
- 1 tsp chaat masala.
Then move the chips to a bowl or plastic bag to store. Sweet potato chips are a great healthy side dish that can go with just about anything! They are so easy, and so much better than store-bought (deep-fried) chips. This sweet potato chips recipe is a go-to for a fun, delicious, yet healthy side dish.
Sweet Potato chips instructions
- First peel the sweet potatoes, cut them into rounds, and wash the slices.
- After washing, pat the slices dry, then add the salt, red chilli powder and singhara atta. Mix well and set aside for 10 minutes.
- Heat oil in a pan, add the chips to the hot oil, and fry over a medium flame until golden brown.
- The chips are now ready; serve them sprinkled with chaat masala.
These sweet potato chips are coated with a delicious spice blend. Homemade sweet potato chips are so good, and made with only a handful of ingredients: sweet potatoes, oil, and spices. You can serve them with your favorite homemade dipping sauce. I like to serve them with homemade smoked paprika aioli; that sauce is amazing and such a perfect combination with the sweet potatoes. Sweet potato chips sound like a nutritious snack, but store-bought versions can still pack significant amounts of fat and sodium.
Introduction {#sec1-2041669519863077}
============
Passport officers at airports and national borders are widely required to verify the identity of travellers by comparing their faces to passport photographs. People seeking to avoid detection at such security controls may attempt to do so by acting as impostors, using valid identity documents that belong to other persons who are of sufficiently similar facial appearance. In psychology, this task has been studied extensively as unfamiliar face matching (for reviews, see [@bibr17-2041669519863077]; [@bibr24-2041669519863077]; [@bibr41-2041669519863077]). In experiments in this field, observers are typically required to match pairs of face photographs, which are presented in isolation on blank backgrounds, and have to decide whether these depict the same person or two different people.
This general approach has been successful for isolating and understanding a range of important factors, such as *observer* characteristics. For example, pairwise face-matching experiments have been used to assess individual differences in performance (e.g., [@bibr3-2041669519863077]; [@bibr10-2041669519863077]; [@bibr11-2041669519863077]; [@bibr28-2041669519863077]), to compare untrained observers with passport officers ([@bibr45-2041669519863077]; [@bibr48-2041669519863077]) and different groups of professionals, such as facial review staff and facial examiners ([@bibr44-2041669519863077]; see also [@bibr37-2041669519863077]; [@bibr46-2041669519863077]), and to assess observers familiar and unfamiliar with the target identities ([@bibr12-2041669519863077]; [@bibr40-2041669519863077]), as well as those with impairments in face matching ([@bibr47-2041669519863077]). Similarly, such controlled laboratory experiments have been employed to study how the characteristics of *stimuli* affect face matching, by exploring factors such as image quality (e.g., [@bibr1-2041669519863077]; [@bibr42-2041669519863077]), the addition of paraphernalia and disguise ([@bibr21-2041669519863077]; [@bibr27-2041669519863077]; [@bibr48-2041669519863077]), and variation in viewpoint ([@bibr16-2041669519863077]), camera distance ([@bibr33-2041669519863077]), and facial appearance across photographs (e.g., Bindemann & [@bibr8-2041669519863077]; Megreya, [@bibr32-2041669519863077]).
While this research has advanced understanding of face matching considerably, these paradigms provide a limited proxy for studying how the environment and social interaction might affect this task. In real-life environments, passport officers may, for example, resort to nonfacial cues, such as body language, to support identification decisions ([@bibr38-2041669519863077]; [@bibr39-2041669519863077]). Similarly, environmental factors, such as the presence of passenger queues, might impair identification by exerting time pressure on passport officers (see, e.g., Bindemann, [@bibr6-2041669519863077]; [@bibr18-2041669519863077]; [@bibr48-2041669519863077]) or competition for attention (see, e.g., [@bibr4-2041669519863077]; Bindemann, [@bibr9-2041669519863077]; [@bibr29-2041669519863077]). The impact of such factors is likely to be huge but not captured by current laboratory paradigms and practically impossible to study in real life owing to the importance of person identification at passport control.
As a compromise, a few studies have moved beyond highly controlled laboratory paradigms to study this task in simplified field settings (e.g., [@bibr26-2041669519863077]; [@bibr31-2041669519863077]; [@bibr45-2041669519863077]). [@bibr45-2041669519863077], for example, examined passport officers\' matching accuracy under live conditions, in which target identities were presented in person and compared with a face photograph on a computer screen. Such paradigms are valuable for assessing whether limitations in face-matching accuracy are also observed in interpersonal interaction but are logistically challenging. Moreover, such setups do not adequately capture the complexity of real-life passport control environments and cannot provide the control that experimenters might desire to manipulate environment and social interaction factors accurately for psychological experimentation.
In this project, we propose a potential solution to these problems, by examining face matching in virtual reality (VR). In recent years, this technology has developed rapidly to provide affordable high-capability VR equipment. With VR, viewers can be immersed in detailed, interactive, and highly controllable three-dimensional (3D) environments that conventional laboratory experiments cannot provide. However, this approach is completely new to face matching. In this article, we therefore report an exploratory series of experiments to investigate the potential of VR for increasing our understanding of face matching. Our overall aim is to provide a foundation for further face-matching research with VR, by demonstrating that this approach can capture the face processes that are currently studied with more simplistic laboratory approaches.
In VR, people are represented by animated 3D avatars, on which we superimpose the two-dimensional (2D) faces of real persons. In the first phase of experimentation, we assess the quality of the resulting person avatars in a tightly controlled laboratory task, in which these 3D avatars are presented back as isolated 2D images, to establish that these capture the identities from which they were derived (Experiments 1 to 3). In the second phase of the study, we compare identity matching of these avatars with two established laboratory tests of face matching (Experiments 4 and 5). In the final phase, identification of avatars is then assessed in an immersive 3D VR airport environment (Experiments 6 and 7).
Phase 1: Avatar Face Construction and Validation {#sec2-2041669519863077}
================================================
We begin with a description of the construction of the person avatars for our experimentation. The initial stimulus sets consisted of 129 male and 88 female professional German sportspeople. As these identities were required to be unfamiliar to our participants, a pretest was carried out to ensure these people were not generally recognisable to U.K. residents. A list of the identities was presented to 20 students, who were asked to cross out the names of anyone they would recognise. Identities familiar to two or more people were excluded. From those who remained, 50 male and 50 female identities were selected for avatar creation. We employed two full-face portrait photographs for each of these sportsmen and women, which were obtained via Google searches.
The person avatars for this study were created by combining these face photographs with an existing database of person avatars (see [www.kent.ac.uk/psychology/downloads/](http://www.kent.ac.uk/psychology/downloads/)avatars.pdf) with graphics software (Artweaver Free 5). The internal features of the face were cut as a selection from the photograph and overlaid onto the base avatar's graphics file. The size of the selection was altered to best map the features onto the positions of the base avatar's features. This was then smoothed around the edges and skin colour adjusted to blend in with the base avatar. Note that the 3D structure of the avatar faces could not be adapted to that of the face photographs, as extraction of such shape information is limited from 2D images. This may be suboptimal for modelling face recognition, to which both texture and shape information contribute (e.g., [@bibr34-2041669519863077]). However, face recognition is also tolerant to dramatic manipulations of shape (see Bindemann, Burton, Leuthold, & [@bibr5-2041669519863077]; [@bibr22-2041669519863077]), and texture appears to be more diagnostic for face identification and face matching (see, e.g., [@bibr15-2041669519863077]; [@bibr20-2041669519863077]; Itz, Golle, Luttmann, [@bibr23-2041669519863077]). Therefore, our method for combining the 2D photographs with animated 3D avatars captures the most diagnostic information for identification. In addition, to compensate for the fact that we could not incorporate original shape information, the same base avatar was employed for both face photographs of each identity. However, avatar elements such as clothing were changed to create two unique appearances for each instance of a person. Therefore, for each of the 100 identities retained, two avatars were created.
For the experiments reported here, this pool of avatars provided sufficient stimuli to create identity-match pairs consisting of two avatars of the same person and identity mismatch pairs consisting of two avatars from different people.
As an initial step, we sought to confirm that the resulting avatars adequately capture the identities of the face set. For this purpose, we recorded a 2D face portrait of each finished identity avatar. These images were constrained to reveal the internal facial features only (i.e., not hairstyle) and sized to 438 (w) × 563 (h) pixels at a resolution of 150 ppi. In addition, a 2D full-body image, which showed a frontal view of the avatar with arms outstretched, was also recorded and sized to 751 (w) × 809 (h) pixels at a resolution of 150 ppi. The procedure for avatar construction is illustrated in [Figure 1](#fig1-2041669519863077){ref-type="fig"}.
{#fig1-2041669519863077}
Experiment 1 {#sec3-2041669519863077}
============
The aim of Experiment 1 was to assess whether the production process of the avatar faces sufficiently captures the images and identities on which these are based. If so, then observers should be able to match these identities in a pairwise comparison. This was assessed with a face photograph-to-avatar matching test with three conditions. These comprised trials on which an avatar face portrait was paired with the original source face photograph (same-image identity match), trials on which an avatar face portrait was paired with a different face photograph of the same person (different-image identity match), and trials on which the avatar face portrait was paired with a face photograph of a different person (identity mismatch). Participants were asked to match these stimulus pairs according to whether they depicted the same person or two different people.
Method {#sec4-2041669519863077}
------
### Participants {#sec5-2041669519863077}
Thirty Caucasian participants (12 men, 18 women) with a mean age of 21.6 years (*SD* = 3.7 years), who reported normal or corrected-to-normal vision, were recruited at the University of Kent for course credit or a small payment. This sample size is directly comparable to face-matching studies using a broad range of paradigms (e.g., [@bibr1-2041669519863077]; [@bibr30-2041669519863077]; [@bibr47-2041669519863077]).
### Stimuli and Procedure {#sec6-2041669519863077}
Each participant was presented with 80 trials across 2 blocks, with each block comprising the following image-type trials. First, 10 same-image identity-match pairs were produced, which consisted of a 2D avatar face portrait and the high-quality face photograph used to create that avatar. Second, 10 different-image identity-match trials were included, in which the 2D avatar face portrait was combined with a different photograph of the same person. These trials did not consist of any of the identities shown in the same-image identity-match trials. Finally, 20 mismatch trials were created. In these, the 2D avatar face portrait was paired with a photograph of a different person, which was chosen by the experimenter (H. M. T.) for its general visual similarity.
The stimuli of the second block consisted of the same identity pairings as the first block (i.e., 10 same-image identity matches, 10 different-image identity matches, 20 mismatches) but with the reverse image-type pairings, as demonstrated in [Figure 1](#fig1-2041669519863077){ref-type="fig"}. For example, if an observer saw Avatar Face Portrait A paired with Photograph B for an identity in Block 1, then in Block 2 for the same identity, Avatar Face Portrait B was paired with Photograph A. Thus, all participants saw each identity twice during the course of the experiment but each image (avatar face portrait or face photograph) only once. All of these images were presented on a white background, with the avatar face portrait to the left and the face photograph to the right of centre. Both images were sized to 70 mm (w) × 90 mm (h) and were presented 50 mm apart.
In the experiment, each trial began with a 1-second fixation cross, followed by a stimulus pair, which remained on screen until a matching decision had been made. Participants were asked to decide as accurately as possible whether a stimulus pair depicted the same person or two different people, by pressing one of two corresponding buttons on a standard computer keyboard. The experiment was presented using PsychoPy ([@bibr36-2041669519863077]), and stimulus identities were rotated around the conditions across observers. Block order was counterbalanced.
Results {#sec7-2041669519863077}
-------
To assess performance, the percentage of accurate responses was calculated for all conditions. This is shown in [Figure 2](#fig2-2041669519863077){ref-type="fig"}, which also illustrates individual performance. A one-factor analysis of variance (ANOVA) of these data showed an effect of trial type, *F*(2,58) = 37.83, *p* \< .001, η~p~^2^ = .57, with paired-samples *t* tests (with alpha corrected to .017 \[.05/3\] for three comparisons) indicating higher accuracy on same-image identity-match trials (*M* = 92.3%, *SD* = 9.4) than different-image identity-match trials (*M* = 53.3%, *SD* = 18.3) and mismatch trials (*M* = 64.9%, *SD* = 18.7), *t*(29) = 13.73, *p* \< .001, *d* = 2.65 and *t*(29) = 6.58, *p* \< .001, *d* = 1.83, respectively. The difference in accuracy between different-image identity-match trials and mismatch trials was not reliable, *t*(29) = 1.87, *p* = .07, *d* = 0.62.
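For readers who wish to reproduce this kind of analysis, the paired-samples t statistic reported above reduces to a few lines. This is a generic illustration in plain Python (with df = n − 1), not the authors' analysis script:

```python
import math

def paired_t(x, y):
    """Paired-samples t statistic: mean of the pairwise differences
    divided by the standard error of those differences (df = n - 1)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n)
```

With alpha Bonferroni-corrected to .05/3 for three comparisons, the resulting statistic would be compared against the corresponding critical value for 29 degrees of freedom.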
{#fig2-2041669519863077}
Considering the low accuracy for different-image identity-match trials and mismatch trials, a series of one-sample *t* tests was also conducted to determine whether accuracy was above chance (i.e., 50%) for the conditions. This was the case for same-image identity matches, *t*(29) = 24.79, *p* \< .001, *d* = 6.32, and identity mismatches, *t*(29) = 4.38, *p* \< .001, *d* = 1.12, but not for different-image identity matches, *t*(29) = 1.00, *p* = .33, *d* = 0.25. The data sets for all experiments reported here are available online as supplemental material.
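A one-sample *t* test against the 50% chance benchmark can likewise be sketched with the standard library (the authors presumably used a statistics package; here *d* is taken as (*M* − 50)/*SD*):

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(scores, chance=50.0):
    """One-sample t statistic against a chance level, plus Cohen's d."""
    n = len(scores)
    m, s = mean(scores), stdev(scores)
    t = (m - chance) / (s / sqrt(n))
    return t, (m - chance) / s
```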
Discussion {#sec8-2041669519863077}
----------
This experiment shows that matching of avatar faces to their source face photographs is highly accurate, which indicates that image-specific identity information from these source images is captured well. By contrast, matching of avatar faces to a different photograph of the same person was difficult and did not reliably exceed the chance benchmark of 50%. Accuracy was also fairly low for identity mismatches, comprising pairings of avatar faces with face photographs of a different person. The low accuracy in these conditions is potentially problematic for adopting VR to study unfamiliar face matching, but it is possible that this is caused by the inclusion of same-image identity matches. While this condition was included here to assess the production process of the stimuli, it is typically not included in face-matching experiments (see, e.g., [@bibr19-2041669519863077]). Considering that these same-image stimulus pairs inevitably display much greater similarity than different-image identity matches and mismatches, the inclusion of this condition may have served to attenuate the perceived differences between these critical identity conditions, resulting in a reduction in accuracy. To address this possibility, only different-image identity matches and mismatches were employed in Experiment 2.
Experiment 2 {#sec9-2041669519863077}
============
This experiment further assesses whether the production process of the avatars captures the identities on which these are based. In contrast to Experiment 1, this was assessed with only two conditions, comprising different-image identity matches and identity mismatches, to minimise the influence that same-image identity matches might exert on the classification of these conditions.
Method {#sec10-2041669519863077}
------
### Participants {#sec11-2041669519863077}
Thirty Caucasian participants from the University of Kent (10 men, 20 women), with a mean age of 19.6 years (*SD* = 1.5 years), participated in exchange for a small fee or course credit. None of these participants had participated in Experiment 1.
### Stimuli and Procedure {#sec12-2041669519863077}
Stimuli and procedure were identical to Experiment 1, except that same-image identity matches were excluded. All observers completed 2 blocks of 40 trials, comprising 20 different-image identity-match pairs and 20 mismatch pairs in each block. As was the case in Experiment 1, Block 2 consisted of the reverse image-type stimulus pairings for the identities in Block 1. Once again, all trials began with a 1-second fixation cross and were presented in a randomised order, block order was counterbalanced, and accuracy of response was emphasised.
Results {#sec13-2041669519863077}
-------
The percentage accuracy data for Experiment 2 are illustrated in [Figure 3](#fig3-2041669519863077){ref-type="fig"}. A paired-samples *t* test of these data showed that accuracy was comparable for different-image identity-match trials (*M* = 57.9%, *SD* = 16.4) and mismatch trials (*M* = 59.3%, *SD* = 15.4), *t*(29) = 0.25, *p* = .80, *d* = 0.08. In addition, one-sample *t* tests revealed that performance in both conditions was above the chance level of 50%, with *t*(29) = 2.64, *p* = .01, *d* = 0.67 and *t*(29) = 3.28, *p* = .003, *d* = 0.84 for match and mismatch trials, respectively.
{#fig3-2041669519863077}
Discussion {#sec14-2041669519863077}
----------
Experiment 1 revealed that the avatars capture the face source photographs sufficiently for accuracy on same-image identity-match trials to be high. Experiment 2 complements these findings by showing that accuracy for different-image identity matches and mismatches exceeds chance when these same-image trials are excluded. Different-image identity matches are a fundamental requirement for studying the identification of unfamiliar faces, to ensure that this task is not solved by using simple image-matching strategies (see, e.g., [@bibr13-2041669519863077]; [@bibr25-2041669519863077]). The data from Experiment 2 therefore provide initial evidence that avatar stimuli have the potential to provide a suitable substrate to study face identification processes in VR.
Experiment 3 {#sec15-2041669519863077}
============
The two preceding experiments in this initial avatar validation phase have compared avatar face portraits with source photographs. These demonstrate that such avatar portraits capture the facial characteristics of their respective source photographs and can also be matched, at an above-chance level, to a different photograph of the person from whose image they were created. This final validation experiment separates these two image types to investigate whether performance of avatar-to-avatar facial comparisons is consistent with performance of photograph-to-photograph comparisons.
Method {#sec16-2041669519863077}
------
### Participants {#sec17-2041669519863077}
Thirty Caucasian participants from the University of Kent (1 man, 29 women), with a mean age of 19.2 years (*SD* = 2.0 years), participated in exchange for course credit. None of these participants had participated in any of the preceding experiments.
### Stimuli and Procedure {#sec18-2041669519863077}
The stimuli for this experiment consisted of the same 20 match and 20 mismatch identity pairings of Experiment 2, presented in 2 blocks (80 trials in total). However, rather than combining an avatar face portrait with a source photograph, Avatar Face Portraits A and B were paired together in one block of trials, while source Photographs A and B were paired together in a second block. As with the previous experiments, all trials began with a 1-second fixation cross and were presented in a randomised order. Block order was counterbalanced across participants, and accuracy of response was emphasised.
Results {#sec19-2041669519863077}
-------
To compare performance across image type, the mean percentage accuracy of correct match and mismatch responses was calculated for all conditions. These data are illustrated in [Figure 4](#fig4-2041669519863077){ref-type="fig"}. For avatar-to-avatar comparisons, accuracy was higher for match trials (*M* = 66.2%, *SD* = 19.1) than mismatch trials (*M* = 56.0%, *SD* = 15.4). The opposite pattern was observed for photograph-to-photograph comparison trials, with higher accuracy for mismatch trials (*M* = 87.0%, *SD* = 10.3) than for match trials (*M* = 83.2%, *SD* = 13.7). A 2 (image type: source photograph, avatar) × 2 (trial type: match, mismatch) within-subjects ANOVA of these data did not show a main effect of trial type, *F*(1, 29) = 0.55, *p* = .47, η~p~^2^ = .02, but revealed a main effect of image type, *F*(1, 29) = 219.55, *p* \< .001, η~p~^2^ = .88, and an interaction between factors, *F*(1, 29) = 13.67, *p* \< .001, η~p~^2^ = .32. A simple main effect of image type was found for match, *F*(1, 29) = 54.31, *p* \< .001, η~p~^2^ = .65, and mismatch trials, *F*(1, 29) = 135.51, *p* \< .001, η~p~^2^ = .82, due to higher accuracy for photograph than avatar matching. No simple main effects of trial type were found within avatar matching, *F*(1, 29) = 3.29, *p* = .08, η~p~^2^ = .10, and photograph matching, *F*(1, 29) = 1.17, *p* = .29, η~p~^2^ = .04.
{#fig4-2041669519863077}
One-sample *t* tests showed that match and mismatch accuracy for photographs exceeded chance (50%), *t*(29) = 13.22, *p* \< .001, *d* = 3.37, and *t*(29) = 19.67, *p* \< .001, *d* = 5.01, respectively. Importantly, this was also the case for match and mismatch trials with avatar portraits, *t*(29) = 4.62, *p* \< .001, *d* = 1.18 and *t*(29) = 2.13, *p* = .04, *d* = 0.54.
Finally, accuracy for source photographs and avatar faces correlated on both match trials, *r* = .752, *p* \< .001, and mismatch trials, *r* = .415, *p* \< .05, indicating that matching of both stimulus types reflects the same underlying cognitive processes.
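The correlations reported here are standard Pearson product-moment coefficients over participants' accuracy scores; a minimal standard-library implementation (a hypothetical helper, equivalent to what any statistics package computes) looks like this:

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of accuracy scores."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs)) * sqrt(sum((y - my) ** 2 for y in ys))
    return num / den
```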
Discussion {#sec20-2041669519863077}
----------
In contrast to Experiments 1 and 2, which examined photograph-to-avatar matching, the current validation experiment demonstrates that avatar faces can also be successfully matched to each other. Avatar matching was more difficult than matching pairs of face photographs, but this is unsurprising considering that the photographs reflect the original identity images. In addition, identities for mismatches were paired up based on avatar similarity, which should also increase the difficulty of this task relative to the matching of photographs. Despite this, performance for avatar-to-avatar and photograph-to-photograph matching correlated well, indicating that both reflect the same underlying processes. The next phase of this study will explore this further, by comparing avatar matching with two established tests of face matching.
Phase 2: Matching Avatars Versus Matching Face Photographs {#sec21-2041669519863077}
==========================================================
The experiments of Phase 1 demonstrate that avatar identification is a difficult task and also indicate that avatar matching reflects similar processes to matching of face photographs. To examine this further prior to implementation in a VR environment, we sought to correlate matching of avatar face pairs with two tests of unfamiliar face matching in Phase 2, comprising the widely used Glasgow Face Matching Test (GFMT; Burton, White, & McNeill, 2010) and the newer Kent Face Matching Test (KFMT; [@bibr19-2041669519863077]). Of these tests, the GFMT represents a best-case scenario to assess face-matching accuracy, by providing highly controlled, same-day photographic pairs of faces. The KFMT, on the other hand, provides a more challenging matching test, in which face pairs consist of a controlled face portrait and an uncontrolled image. Despite these differences, performance on the GFMT and KFMT correlates well. Here, we investigate whether such correlations also exist between these tests and the matching of avatar face pairs.
Experiment 4 {#sec22-2041669519863077}
============
This experiment compared performance on the GFMT and KFMT, which required matching of photographs of faces, with the matching of pairs of avatar faces. Overall, performance should be better with the optimised stimuli of the GFMT than with the more challenging KFMT. In addition, accuracy for the KFMT should be similar to avatar-to-avatar face matching, considering that both tests are based on different-day face images. The main aim here, however, was to correlate performance on these tasks to explore whether these capture the same identification processes.
Method {#sec23-2041669519863077}
------
### Participants {#sec24-2041669519863077}
The participants consisted of 30 Caucasian individuals (8 men, 22 women), with a mean age of 21.2 years (*SD* = 3.3 years), who were paid a small fee or given course credit. None of these participants had participated in the preceding experiments.
### Stimuli and Procedure {#sec25-2041669519863077}
### *The GFMT* {#sec26-2041669519863077}
The GFMT face pairs consist of images of faces taken from a frontal view displaying a neutral expression. Both images in a face pair are taken with different cameras and, in the case of identity matches, approximately 15 minutes apart. Each face image is cropped to show the head only and converted to greyscale with a resolution of 72 ppi. The dimensions of the faces range in width from 70 mm to 90 mm and in height from 85 mm to 125 mm and are spaced between 40 mm and 55 mm apart on screen. This study employed 20 identity match and 20 mismatch trials from the GFMT (for more information, see [@bibr14-2041669519863077]). Example stimuli are shown in the top row of [Figure 5](#fig5-2041669519863077){ref-type="fig"}.
{#fig5-2041669519863077}
#### The KFMT {#sec27-2041669519863077}
Face pairs in the KFMT consist of an image from a student ID card, presented at a maximal size of 35 mm (w) × 47 mm (h), and a portrait photo, sized at 70 mm (w) × 82 mm (h) at a resolution of 72 ppi, spaced 75 mm apart. The student ID photos were taken at least 3 months prior to the face portraits and were not constrained by pose, facial expression, or image-capture device. The portrait photos depict the target's head and shoulders from a frontal view while bearing a neutral facial expression and were captured with a high-quality digital camera. In this study, 20 identity match and 20 mismatch trials from the KFMT were employed (for more information, see [@bibr19-2041669519863077]). Example stimuli are shown in the second row of [Figure 5](#fig5-2041669519863077){ref-type="fig"}.
#### Avatar face pairs {#sec28-2041669519863077}
These stimuli are the same as those shown in Block 1 of Experiment 3 and consisted of 40 face pairs (20 identity matches, 20 mismatches), each depicting two avatar face portraits. For identity-match trials, the avatar faces in a pair were based on different source photographs, whereas two different identities were shown in identity-mismatch pairs. These faces were cropped to remove external features, such as hairstyle, and shown at a size of 70 mm (w) × 90 mm (h) and spaced 50 mm apart. Example stimuli are shown in the third row of [Figure 5](#fig5-2041669519863077){ref-type="fig"}.
These three face-matching tasks (GFMT, KFMT, avatar pairs) were administered in separate blocks of 40 trials, which were presented in a counterbalanced order across participants. The procedure for all tasks was identical and presented using PsychoPy ([@bibr36-2041669519863077]). Thus, each trial began with a 1-second fixation cross presented on a computer screen and was followed by a face pair, which participants were asked to classify as an identity match or mismatch as accurately as possible. Trial order was randomised within the blocks.
Results {#sec29-2041669519863077}
-------
To compare performance across the three face-matching tasks, the mean percentage of correct match and mismatch responses was calculated for each participant. These data are illustrated in [Figure 6](#fig6-2041669519863077){ref-type="fig"}. For match trials, the cross-subject mean accuracy was higher for the GFMT (*M* = 78.7%, *SD* = 13.2) than the KFMT (*M* = 67.8%, *SD* = 14.6) and the avatar face pairs (*M* = 68.7%, *SD* = 13.3). The same pattern was observed for mismatch trials, with higher accuracy for the GFMT (*M* = 71.8%, *SD* = 18.4) than the KFMT (*M* = 59.0%, *SD* = 14.4) and the avatar face pairs (*M* = 52.5%, *SD* = 16.6).
{#fig6-2041669519863077}
A 3 (task: GFMT, KFMT, avatar pairs) × 2 (trial type: match, mismatch) within-subjects ANOVA of these data confirmed a main effect of trial type, *F*(1, 29) = 8.83, *p* = .006, η~p~^2^ = .23, due to higher accuracy on match than mismatch trials. A main effect of task was also found, *F*(2, 58) = 34.70, *p* \< .001, η~p~^2^ = .55. Paired-samples *t* tests (with alpha corrected to .017 \[.05/3\] for three comparisons) showed that accuracy was higher on the GFMT than both the KFMT, *t*(29) = 6.09, *p* \< .001, *d* = 1.24, and the avatar pairs, *t*(29) = 7.87, *p* \< .001, *d* = 1.57. There was no difference in accuracy between the KFMT and avatar pairs, *t*(29) = 1.58, *p* = .13, *d* = 0.32. The interaction of task and trial type was not significant, *F*(2, 58) = 2.35, *p* = .11, η~p~^2^ = .08.
A series of one-sample *t* tests was also conducted to determine whether accuracy was above chance (i.e., 50%) for the conditions. This was the case for match and mismatch trials on the KFMT, *t*(29) = 6.69, *p* \< .001, *d* = 1.70 and *t*(29) = 3.42, *p* = .002, *d* = 0.87, and on the GFMT, *t*(29) = 11.90, *p* \< .001, *d* = 3.03 and *t*(29) = 6.51, *p* \< .001, *d* = 1.66. For avatar face pairs, accuracy was also above chance for match trials, *t*(29) = 7.68, *p* \< .001, *d* = 1.96, but not for mismatch trials, *t*(29) = 0.83, *p* = .42, *d* = 0.21. A by-item inspection of these data shows a very broad range in accuracy for avatar mismatch face pairs, which suggests that chance-level mean performance masks both items that are consistently classified correctly and items that are consistently classified incorrectly. We return to further analysis of these data after Experiment 7, to demonstrate that these by-item differences for avatar stimuli are stable.
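The by-item inspection described above amounts to computing accuracy per stimulus pair across observers and examining the spread. A minimal sketch, assuming a hypothetical data layout in which each item identifier maps to one 0/1 correctness score per participant:

```python
def by_item_accuracy(responses):
    """Per-item percentage accuracy across participants, sorted ascending.

    `responses` maps an item identifier to a list of 0/1 correctness
    scores, one per observer (a hypothetical layout for illustration)."""
    acc = {item: 100.0 * sum(r) / len(r) for item, r in responses.items()}
    return sorted(acc.items(), key=lambda kv: kv[1])
```

A wide gap between the top and bottom of the resulting list is what indicates that a chance-level mean conceals consistently correct and consistently incorrect items.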
Overall, the mean percentage accuracy data show that accuracy on the GFMT is higher than for the KFMT and the avatar faces, which appear to be more evenly matched. While such general differences between these tasks were expected, the question of main interest in this experiment was whether performance on these tests is correlated. For match trials, Pearson's correlations were obtained for the GFMT and KFMT, *r* = .580, *p* \< .001, the GFMT and the avatar faces, *r* = .406, *p* = .03, and the KFMT and the avatar faces, *r* = .336, *p* = .05. Similarly, mismatch accuracy correlated for the GFMT and avatar faces, *r* = .550, *p* = .002, and the KFMT and the avatar faces, *r* = .407, *p* = .03. The correlation for mismatch trials on the GFMT and the KFMT did not reach significance, *r* = .333, *p* = .07.
Discussion {#sec30-2041669519863077}
----------
This experiment correlated matching of avatar faces directly with two laboratory tests of face matching to determine whether identification of the avatars taps into the same processes as identification of real faces. Overall, accuracy was best with the highly optimised face pairs of the GFMT and comparable for the KFMT and the avatar faces. This finding makes good sense considering that the stimuli of the KFMT and those that were used to create the avatar face pairs captured identities across different days and more variable ambient conditions. Moreover, the similarity in performance across these tests suggests that low accuracy with the avatars reflects a difficulty in face matching that is comparable to the matching of challenging different-day face pairs (see [@bibr19-2041669519863077]; see also [@bibr32-2041669519863077]). Despite these differences in accuracy between the GFMT, KFMT, and the avatar faces, performance correlated well across the three tasks. This indicates that such avatar face pairs can provide a substitute for the matching of real faces for experimentation in VR.
Experiment 5 {#sec31-2041669519863077}
============
The preceding experiments examine the matching of isolated face pairs. In contrast, identity matching in the VR environment requires comparison of a *person* with a face photograph. The inclusion of such body information reduces face size. This may affect identification, though it is unclear whether this would attenuate (see, e.g., [@bibr7-2041669519863077]) or improve accuracy (see [@bibr1-2041669519863077]). To explore this question under strictly controlled conditions, we conducted a further experiment in which the avatar matching stimuli comprised a whole person and a face photograph. As in Experiment 4, performance on this task was also compared with the GFMT and KFMT.
Method {#sec32-2041669519863077}
------
### Participants {#sec33-2041669519863077}
Thirty Caucasian participants from the University of Kent (11 men, 19 women), with a mean age of 21.0 years (*SD* = 2.9 years), participated for a small fee or course credit. None of these participants had participated in any of the preceding experiments.
### Stimuli and Procedure {#sec34-2041669519863077}
Stimuli and procedure were identical to Experiment 4, except for the following changes. The avatar matching stimuli comprised the same identities but now consisted of the image of a whole avatar (i.e., showing the entire body and the face) and an avatar face (for an illustration, see the bottom row of [Figure 5](#fig5-2041669519863077){ref-type="fig"}). The whole avatar was sized to a height of 155 mm, with a body width of 35 mm (from hand to hand, 115 mm). This resulted in the face on the whole avatar having dimensions of 20 mm (w) × 30 mm (h). By comparison, the isolated avatar face image in each stimulus pair measured 70 mm (w) × 90 mm (h) and was presented 30 mm from the whole avatar.
Results {#sec35-2041669519863077}
-------
The percentage accuracy data for this experiment are presented in [Figure 7](#fig7-2041669519863077){ref-type="fig"}. For match trials, accuracy was higher for the GFMT (*M* = 89.3%, *SD* = 10.1) than the KFMT (*M* = 66.5%, *SD* = 20.5) and the avatar stimulus pairs (*M* = 53.8%, *SD* = 18.1). This pattern was also observed with identity mismatches, with highest accuracy for GFMT pairs (*M* = 72.7%, *SD* = 23.6), followed by the KFMT (*M* = 67.2%, *SD* = 15.4) and the avatar pairs (*M* = 52.2%, *SD* = 15.1).
{#fig7-2041669519863077}
A 3 (task: GFMT, KFMT, avatar) × 2 (trial type: match, mismatch) within-subjects ANOVA did not reveal a main effect of trial type, *F*(1, 29) = 1.47, *p* = .24, η~p~^2^ = .05, but showed a main effect of task, *F*(2, 58) = 75.27, *p* \< .001, η~p~^2^ = .72, and an interaction, *F*(2, 58) = 9.32, *p* \< .001, η~p~^2^ = .24. Simple main effects analysis was carried out to interpret this interaction. A simple main effect of trial type within the GFMT task was found, *F*(1, 29) = 9.53, *p* = .004, η~p~^2^ = .25, due to higher match than mismatch accuracy. There was no simple main effect of trial type within the KFMT, *F*(1, 29) = 0.01, *p* = .91, η~p~^2^ \< .01, or avatar tasks, *F*(1, 29) = 0.10, *p* = .76, η~p~^2^ \< .01.
In addition, a simple main effect of task within match trials was found, *F*(2, 28) = 98.89, *p* \< .001, η~p~^2^ = .88. Paired-samples *t* tests (with alpha corrected to .017 \[.05/3\] for three comparisons) showed accuracy on the GFMT was higher than for both the KFMT and the avatar task on match trials, *t*(29) = 7.51, *p* \< .001, *d* = 1.39 and *t*(29) = 13.39, *p* \< .001, *d* = 2.39, respectively. The KFMT was also performed more accurately than the avatar task on match trials, *t*(29) = 3.49, *p* = .002, *d* = 0.65.
Similarly, a simple main effect of task within mismatch trials was also found, *F*(2, 28) = 32.84, *p* \< .001, η~p~^2^ = .70. Paired-samples *t* tests (with alpha corrected to .017 \[.05/3\] for three comparisons) showed accuracy was higher on the GFMT and KFMT than the avatar task for this trial type, *t*(29) = 6.48, *p* \< .001, *d* = 1.02 and *t*(29) = 5.99, *p* \< .001, *d* = 0.97, respectively. There was no difference in mismatch trial accuracy between the GFMT and KFMT, *t*(29) = 1.47, *p* = .15, *d* = 0.27.
Finally, a series of one-sample *t* tests was also conducted to determine whether accuracy was above chance (i.e., 50%) for the conditions. This was the case for match and mismatch trials on the GFMT, *t*(29) = 21.41, *p* \< .001, *d* = 5.46 and *t*(29) = 5.26, *p* \< .001, *d* = 1.34, and the KFMT, *t*(29) = 4.41, *p* \< .001, *d* = 1.12 and *t*(29) = 6.13, *p* \< .001, *d* = 1.56. In contrast, accuracy for the avatar pairs did not exceed chance for match trials, *t*(29) = 1.16, *p* = .26, *d* = 0.30, nor mismatch trials, *t*(29) = 0.79, *p* = .43, *d* = 0.20. However, a by-item inspection of these data again shows a very broad range in accuracy, suggesting that mean performance masks consistent correct and incorrect classifications of avatar items (further analysis provided after Experiment 7). Moreover, Pearson correlations revealed that match accuracy correlated across all combinations of the GFMT, KFMT, and the avatar stimuli, all *r*s ≥ .474, all *p*s ≤ .008, as did accuracy for mismatch trials, all *r*s ≥ .514, all *p*s ≤ .004.
Discussion {#sec36-2041669519863077}
----------
This experiment replicates the main findings of Experiment 4, by revealing that performance for matching GFMT, KFMT, and avatar faces correlates consistently. This provides further evidence that identification across these tasks is based on similar processes. However, in contrast to Experiment 4, which displayed only avatar faces, matching avatar faces to whole persons was more difficult in Experiment 5, and accuracy was low. We attribute this poor performance to the size of the whole body stimuli, which resulted in a compression of the facial information (see bottom row of [Figure 5](#fig5-2041669519863077){ref-type="fig"}). This raises the question of whether these avatars provide sufficient information for person identification during immersion in a VR airport environment. This was examined in the final phase of this study.
Phase 3: Face Matching in VR {#sec37-2041669519863077}
============================
In the final phase, we examined avatar identification in VR, by constructing a passport control desk in an airport arrivals hall. This environment comprised an airport lounge, with seating and rope queue barriers to channel travellers to a passport control booth. Visual cues were incorporated to convey clearly to participants that this is an airport environment, such as departure boards and a waiting aeroplane within view of the passport control desk area. This environment is illustrated in [Figure 8](#fig8-2041669519863077){ref-type="fig"}.
{#fig8-2041669519863077}
Participants were immersed in this environment and asked to take on the role of passport officers in the control booth, processing a queue of travellers by matching a face photograph to each avatar's appearance (see inset of [Figure 8](#fig8-2041669519863077){ref-type="fig"}). Animated avatars queued in line and then approached the booth individually to be processed. After participants made an identification decision, the avatar would then walk away, with stimuli classified as identity matches proceeding past the booth and towards an exit at the back of the airport hall, while stimuli classified as mismatches would walk into a waiting area to the side of the control point.
Experiment 6 {#sec38-2041669519863077}
============
In Experiment 6, we employed this airport environment to investigate face matching in VR. We employed the same avatar identities as in the preceding experiments and specifically sought to examine the accuracy levels that participants achieve in this task.
Method {#sec39-2041669519863077}
------
### Participants {#sec40-2041669519863077}
Thirty Caucasian participants from the University of Kent (7 men, 23 women), with a mean age of 21.6 years (*SD* = 4.1 years), took part for a small fee or course credit. None of these participants had participated in the preceding experiments. Owing to the use of VR equipment, no persons with epilepsy or who were liable to motion sickness were recruited. Before immersion in VR, participants were briefed about potential side effects of using VR, such as discomfort from wearing the headset and symptoms of motion sickness, and health and safety procedures.
### Stimuli and Procedure {#sec41-2041669519863077}
The stimuli consisted of the same avatar-face pairings that were employed in Experiment 5, comprising 20 matches and 20 mismatches. These were displayed in the VR environment using Vizard 5 and an Oculus Rift DK2 headset, with a resolution of 960 × 1,080 pixels per eye, a 100° field of view, and an image refresh rate of 75 Hz.
On immersion in the VR environment, participants found themselves seated in the passport control booth, which was equipped with a desk and desktop PC. A group of 40 avatars then arrived in the airport hall and queued at the control desk, with one avatar at a time approaching the participants. As each avatar approached, their *passport photograph* would appear on the screen of the desktop PC. Participants were asked to compare this image with the face of the presenting avatar, and make identity-match or mismatch decisions via button presses on a computer mouse. Once a response was registered, the avatar would move past the control desk to exit the airport hall (if classified as a match) or would depart to the side of the airport hall into a waiting area (if classified as a mismatch). At this point, the next avatar would approach the control desk, prompting the start of the next trial. Presentation of avatars was randomised. Accuracy of response was emphasised, and there was no time restriction for task completion.
Results {#sec42-2041669519863077}
-------
The percentage accuracy data for this VR experiment are illustrated in [Figure 9](#fig9-2041669519863077){ref-type="fig"}. A paired-samples *t* test showed that accuracy was higher on match trials (*M* = 59.3%, *SD* = 13.0) than mismatch trials (*M* = 39.2%, *SD* = 12.0), *t*(29) = 5.29, *p* \< .001, *d* = 1.59. In addition, one-sample *t* tests showed that performance was above chance (50%) on match trials, *t*(29) = 3.94, *p* \< .001, *d* = 1.00, but below chance on mismatch trials, *t*(29) = 4.93, *p* \< .001, *d* = 1.26. However, by-item inspection of these data again shows a very broad range in accuracy for mismatch stimuli (further analysis provided after Experiment 7).
{#fig9-2041669519863077}
Cross-experiment analyses were conducted to examine how performance for this face-to-avatar matching in VR compared with the still image avatar matching of Experiment 4 (face-to-face matching: match accuracy *M* = 68.7%, *SD* = 13.3; mismatch accuracy *M* = 52.5%, *SD* = 16.6) and Experiment 5 (face-to-body matching: match accuracy *M* = 53.8%, *SD* = 18.1; mismatch accuracy *M* = 52.2%, *SD* = 15.1). A 3 (stimulus type: face-to-face, face-to-body, face-to-avatar) × 2 (trial type: match, mismatch) mixed-factor ANOVA showed main effects of trial type, *F*(1, 87) = 22.73, *p* \< .001, η~p~^2^ = .21, and stimulus type, *F*(2, 87) = 16.25, *p* \< .001, η~p~^2^ = .27, and an interaction between these factors, *F*(2, 87) = 4.47, *p* = .01, η~p~^2^ = .09.
To interpret this interaction, simple main effects analyses were carried out. A simple main effect of trial type was found for face-to-face matching (Experiment 4), *F*(1, 87) = 12.34, *p* \< .001, η~p~^2^ = .12, and face-to-avatar matching (Experiment 6), *F*(1, 87) = 19.20, *p* \< .001, η~p~^2^ = .18, both due to higher match than mismatch accuracy. There was no simple main effect of trial type for face-to-body matching (Experiment 5), *F*(1, 87) = 0.13, *p* = .72, η~p~^2^ \< .01.
In addition, a simple main effect of stimulus type within match trials was found, *F*(2, 87) = 7.52, *p* \< .001, η~p~^2^ = .15. Independent-samples *t* tests (with alpha corrected to .017 \[.05/3\] for three comparisons) showed that face-to-face matching was performed more accurately than both face-to-body matching, *t*(58) = 3.62, *p* \< .001, *d* = 0.92, and face-to-avatar matching, *t*(58) = 2.75, *p* = .008, *d* = 0.70. There was no difference in accuracy between these latter two stimulus types on match trials, *t*(58) = 1.35, *p* = .18, *d* = 0.34.
A simple main effect of stimulus type within mismatch trials was also found, *F*(2, 87) = 8.02, *p* \< .001, η~p~^2^ = .16. Independent-samples *t* tests (with alpha corrected to .017 \[.05/3\] for three comparisons) showed accuracy was higher for both face-to-face and face-to-body matching over face-to-avatar matching, *t*(58) = 3.56, *p* \< .001, *d* = 0.91 and *t*(58) = 3.68, *p* \< .001, *d* = 0.94, respectively. No difference in accuracy was found between face-to-face and face-to-body matching on mismatch trials, *t*(58) = 0.08, *p* = .94, *d* = 0.02.
Discussion {#sec43-2041669519863077}
----------
The results from this experiment indicate an increase in task difficulty when face matching is performed in VR. The accuracy of avatar matching, particularly on mismatch trials, was considerably lower in the VR environment than when the same stimuli were presented in 2D and in isolation in Experiments 4 and 5. Considering this low accuracy, we modified our paradigm for a final experiment in an attempt to improve performance.
Experiment 7 {#sec44-2041669519863077}
============
In this experiment, we attempted to optimise the VR paradigm to improve face-matching performance. We replaced the Oculus Rift DK2 headset with an HTC Vive, which provides greater screen resolution (1,080 × 1,200 pixels per eye vs. the DK2's 960 × 1,080). The HTC Vive is also equipped with handheld controllers to enable participants to interact better with the environment. We utilised the controllers to allow participants to hold the passports of travellers in the VR environment. This enabled participants to bring these closer to their own face, thus increasing the size and resolution of these images for comparison, as well as to hold the passport photos next to the travellers to facilitate face matching (see [Figure 10](#fig10-2041669519863077){ref-type="fig"}). As a final change, we rerecorded the face images for the photo identities in VR. The software models convexity by elongating face shape as viewing distance decreases. As a result of this, the avatar face stimuli were narrow in appearance in the preceding experiments, particularly near the chin region. We rerecorded these images from a greater distance to produce a more natural, rounded appearance (see inset of [Figure 10](#fig10-2041669519863077){ref-type="fig"}). We then examined whether face-matching performance in the VR environment was improved as a result of these changes.
{#fig10-2041669519863077}
Method {#sec45-2041669519863077}
------
### Participants {#sec46-2041669519863077}
Thirty Caucasian participants from the University of Kent (7 men, 23 women) with a mean age of 20.3 years (*SD* = 2.8 years) participated for a small fee or course credit. None of these participants had participated in the preceding experiments. No persons with epilepsy or who were liable to motion sickness were recruited. All participants were given a health and safety briefing prior to immersion in the VR.
### Stimuli and Procedure {#sec47-2041669519863077}
The stimuli consisted of the same avatar identities as in Experiment 6, but the images for the passport photographs were rerecorded at a greater viewing distance to produce faces with a more natural, rounded face shape (see inset of [Figure 10](#fig10-2041669519863077){ref-type="fig"}). The size of these images was maintained at 438 (w) × 563 (h) pixels at a resolution of 150 ppi. The procedure was identical to Experiment 6 except that the Oculus Rift DK2 headset was replaced with an HTC Vive, which has an improved resolution of 1,080 × 1,200 pixels per eye with 110° field of view with a faster image refresh rate of 90 Hz. In addition, two handheld controllers were utilised as controls for this experiment.
On each trial, the passport face image was no longer presented on the desktop PC in the control booth but was inserted into a passport-style card, which could be picked up by participants using a handheld controller. This enabled participants to hold the passport images closer to their own eyes or next to the avatar's head to facilitate identity comparison. The handheld controllers were also employed to record participants' responses, with button presses on the right-hand controller indicating identity matches and on the left-hand controller indicating mismatches.
Results {#sec48-2041669519863077}
-------
As in all preceding experiments, accuracy was higher for match trials (*M* = 77.3%, *SD* = 12.6) than mismatch trials (*M* = 48.2%, *SD* = 12.6), *t*(29) = 7.28, *p* \< .001, *d* = 2.28, as illustrated in [Figure 11](#fig11-2041669519863077){ref-type="fig"}. In addition, match accuracy was reliably above chance level (i.e., 50%), *t*(29) = 11.90, *p* \< .001, *d* = 3.03, whereas mismatch accuracy was not, *t*(29) = 0.80, *p* = .43, *d* = 0.20. Again, however, by-item inspection of the mismatch data shows broad differences between items (further analysis provided after this experiment).
{#fig11-2041669519863077}
To determine whether the adjustments to the VR paradigm successfully reduced the difficulty of the task, a 2 (environment: Experiment 6, Experiment 7) × 2 (trial type: match, mismatch) mixed-factor ANOVA was conducted. This showed a main effect of trial type, *F*(1, 58) = 79.67, *p* \< .001, η~p~^2^ = .58, due to higher accuracy on match trials than mismatch trials. A main effect of environment was also found, *F*(1, 58) = 63.27, *p* \< .001, η~p~^2^ = .52, reflecting higher accuracy in Experiment 7. The interaction between trial type and experiment was not significant, *F*(1, 58) = 2.65, *p* = .11, η~p~^2^ = .04.
Discussion {#sec49-2041669519863077}
----------
This experiment demonstrates that the improvements to the VR paradigm enhanced accuracy. This improvement was particularly marked on match trials, where accuracy reached 77%. Mismatch performance was enhanced too but remained particularly difficult in the VR paradigm, at 48% accuracy. This is a limiting factor for research on unfamiliar face matching, considering the important role that these trials hold for person identification at passport control in the real world (see, e.g., [@bibr17-2041669519863077]). However, previous research on face matching demonstrates that considerable variation in accuracy can exist across items, to the point where some items may be consistently classified incorrectly (see [@bibr19-2041669519863077]). In turn, this raises the possibility that even though mean performance on mismatch trials does not exceed 50%, a substantial proportion of these may nonetheless be classified with high accuracy. A cursory analysis of such by-item differences was provided in Experiments 4 to 7, which revealed broad differences in accuracy between individual items. To explore whether these by-item differences are stable, we performed correlational comparisons across Experiments 4 to 7.
Comparison of Items Across Experiments {#sec50-2041669519863077}
======================================
To analyse accuracy for individual items, the mean accuracy for each stimulus pair was compared across experiments (i.e., for face-to-face pairs in Experiment 4, face-to-body in Experiment 5, and face-to-avatar in Experiments 6 and 7). These scores are illustrated in [Figure 12](#fig12-2041669519863077){ref-type="fig"} and reveal considerable variation in accuracy across items. In Experiment 4, for example, this variation is such that accuracy for individual match items ranges from 40% to 93% and from 20% to 90% for mismatch items. These differences were even more marked by Experiment 7, in which by-item accuracy ranged from 7% to 97% for match stimuli and from 3% to 97% for mismatch stimuli. This range in accuracy indicates that some items were consistently classified correctly, whereas others yielded consistently incorrect decisions. A reliability analysis was conducted across Experiments 4 to 7, with Cronbach's alpha showing accuracy for match items, α = .66, to be more consistent than accuracy for mismatch items, α = .55. However, despite the variation in item accuracy, strong positive correlations were obtained for by-item accuracy across Experiments 4 to 7 (see [Table 1](#table1-2041669519863077){ref-type="table"}).
{#fig12-2041669519863077}
######
Mean Accuracy and Correlations Between Experiments Across All Avatar Items.

Trial type Experiment *M* *SD* Correlation coefficients (*r*)
------------ ------------ ------ ------------------------------------------------------------ ------------------------------------------------------------ ------------ ---- --
Overall 4 60.7 19.9 --
5 53.0 22.9 .552\*\*\* --
6 49.2 20.7 .539\*\*\* .484\*\* --
7 62.8 27.7 .627\*\*\* .553\*\*\* .647\*\*\* --
Match 4 68.7 15.8 --
5 53.8 18.4 .499[\*](#table-fn1-2041669519863077){ref-type="table-fn"} --
6 59.3 18.5 .255 .515[\*](#table-fn1-2041669519863077){ref-type="table-fn"} --
7 77.4 21.2 .394 .423 .741\*\*\* --
Mismatch 4 52.6 20.7 --
5 52.2 27.1 .639\*\* --
6 39.1 17.9 .566\*\* .571\*\* --
7 48.2 25.9 .613\*\* .752\*\*\* .342 --
\**p* \< .05. \*\**p* \< .01. \*\*\**p* \< .001.
For match items, by-item accuracy correlated well for each progression towards face matching in VR. Accuracy when matching two avatar face portraits (Experiment 4) positively correlated with the accuracy of matching one of these avatar face images with an avatar body image (Experiment 5), *r* = .499, *p* = .03. When this avatar face-body matching was conducted in VR (Experiment 6), accuracy correlated with its still image counterpart (Experiment 5), *r* = .515, *p* = .02. Item accuracy in the original VR paradigm (Experiment 6) also correlated strongly with item accuracy when the VR paradigm was improved in Experiment 7, *r* = .741, *p* \< .001. However, all other correlations between experiments were nonsignificant, all *r*s ≤ .423, all *p*s ≥ .06.
Accuracy for many mismatch items was lower than for any of the match items across all experiments but correlated strongly across all comparisons between Experiments 4 to 7, all *r*s ≥ .566, all *p*s \< .009, except between the two VR experiments (Experiments 6 and 7), *r* = .342, *p* = .14. We attribute this discrepancy to the improvement from Experiment 6 to Experiment 7, which was much greater for some items than for others.
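The consistency analysis above rests on two standard computations — Cronbach's alpha across the four experiments and pairwise Pearson correlations over by-item accuracy — both of which can be sketched without a statistics library. The item accuracies below are illustrative, not the study's data.

```python
import math

def _var(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """Cronbach's alpha; `rows` holds one list per experiment, each giving
    per-item accuracy for the same ordered set of stimulus pairs."""
    k = len(rows)
    totals = [sum(col) for col in zip(*rows)]  # per-item sum across experiments
    return k / (k - 1) * (1 - sum(_var(r) for r in rows) / _var(totals))

def pearson_r(x, y):
    """Pearson correlation between two equally long accuracy lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Illustrative per-item accuracy (one list per experiment, same item order)
exp4 = [0.40, 0.93, 0.55, 0.70]
exp7 = [0.07, 0.97, 0.50, 0.80]
alpha = cronbach_alpha([exp4, exp7])
r = pearson_r(exp4, exp7)
```

Higher alpha indicates that the experiments rank the items similarly, which is the property used here to justify selecting high-accuracy mismatch items for further experimentation.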
Overall, the finding that accuracy for items is highly consistent across experiments under the conditions investigated here provides a potential solution to the poor mean accuracy in the mismatch condition. To model the real world of passport control, match trials should occur with much greater frequency than mismatch trials in experiments on unfamiliar face matching (see, e.g., [@bibr2-2041669519863077]; [@bibr18-2041669519863077]; [@bibr35-2041669519863077]; [@bibr43-2041669519863077]). One way to address the poor mean accuracy across mismatch items in VR here could therefore be to select the mismatches with the highest by-item accuracy for further experimentation. Ultimately, however, we think that this problem will be addressed also through future development of higher quality avatars, which will enhance accuracy of avatar facial identification.
General Discussion {#sec51-2041669519863077}
==================
This study explored the feasibility of conducting face-matching experiments in VR. This exploratory study is the first of its kind in this field and was conducted in three phases. The first phase investigated whether avatar faces can provide suitable replacements for face photographs, by asking participants to perform avatar-to-photograph identity matching. Accuracy was high when stimuli displayed avatar faces alongside the photograph from which these were derived (Experiment 1). This image-specific identity matching indicates that the avatars successfully captured their source face photograph. Matching accuracy also exceeded chance on mismatch trials, in which two different identities were shown (Experiments 1 and 2), and with different-image identity matches, in which an avatar face was shown alongside a different source photograph of the same identity (Experiment 2). This indicates that the avatars captured not only the source image but also the identity of these targets. The final validation experiment in this first phase investigated whether accuracy when matching avatar-to-avatar would be consistent with the matching of pairs of photographs (Experiment 3). Despite avatar matching being a more difficult task than photograph matching, participant accuracy exceeded chance and correlated for the two image types. The experiments in this phase therefore demonstrate that our avatar stimuli can provide a suitable substrate to study such face identification processes in VR.
The second phase sought to validate the avatar stimuli further by correlating performance in avatar-to-avatar matching with two established tests of face-to-face matching (the GFMT, see [@bibr14-2041669519863077]; and the KFMT, see [@bibr19-2041669519863077]). Avatar matching correlated consistently with these face tests, both when pairs of avatar faces were shown (Experiment 4) and when an avatar face was paired with a whole avatar body (Experiment 5). This indicates that matching of avatars and of real face photographs reflect similar cognitive processes.
In the final phase, we examined avatar identification with a VR airport environment, in which participants took up the role of passport officer at a control point. A first run of this paradigm proved difficult, with average accuracy for identity-mismatch trials below chance level (Experiment 6). The application of higher resolution VR equipment, and modifications to the experimental paradigm that allowed participants to view avatar faces more flexibly, improved accuracy (Experiment 7). However, accuracy on mismatch trials remained near chance. We therefore performed a by-item analysis to determine whether individual mismatch trials were classified consistently. This analysis revealed strong correlations across Experiments 4 to 7, indicating that by-item classification was robust across experiments. This by-item data also revealed that some mismatch trials were classified consistently with low but some also with high accuracy. Considering that mismatches should occur with much lower frequency than match trials when one seeks to mimic real-world conditions (see, e.g., [@bibr2-2041669519863077]; [@bibr18-2041669519863077]; [@bibr35-2041669519863077]; [@bibr43-2041669519863077]), the by-item data could therefore provide a basis for selecting mismatch stimuli that give rise to high (or low) accuracy for further experimentation.
Overall, these data provide proof of principle for the use of VR for face-matching research. While the generation of VR explored here does not yet meet real-world detail, realism, and identification accuracy, the rapid development of this technology provides a promising outlook for future research. This opens up many avenues for face-matching research, by facilitating the study of new environmental and social interaction factors that may be relevant in real-world operational settings. With regard to passport control, for example, it is possible that nonfacial cues, such as body language, draw attention to potential impostors and could also support identification decisions ([@bibr38-2041669519863077]). Similarly, environmental factors, such as the mere presence of passenger queues, might impair identification by exerting time pressure on passport officers (see, e.g., [@bibr6-2041669519863077]; [@bibr18-2041669519863077]; [@bibr48-2041669519863077]). Crowd dynamics, such as animated body language throughout queues, might also signal impatience to passport officers and exert further pressure. Crucially, such factors cannot be captured well by current laboratory paradigms and are practically impossible to study in real life owing to the importance of person identification at passport control. The current study demonstrates the feasibility of VR for studying and understanding such phenomena, which can only improve as the technology continues to develop.
We note that our study still represents a relatively simple approach for the implementation of such experiments. For example, we created our avatar faces by a rather simplistic process that was based on the superimposition of 2D photographs on existing avatar structures. In future, we anticipate that the 3D scanning of faces and the rigging of this information into avatars as well as further development of VR technology will result in person stimuli and environments that provide increasingly closer representations of reality. This should support experimentation by further enhancing identification of identity matches and mismatches. Ultimately, we expect VR to become an important research tool for investigating face perception in complex and realistic environments, with increasing collaboration between researchers and developers accelerating advancement in this field.
Declaration of Conflicting Interests {#sec52-2041669519863077}
====================================
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding {#sec53-2041669519863077}
=======
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by an Economic and Social Research Council South East Doctoral Training Centre Studentship to H. M. T. (No. ES/J500148/1).
Supplemental Material {#sec54-2041669519863077}
=====================
Supplemental material for this article is available online at: <http://journals.sagepub.com/doi/suppl/10.1177/2041669519863077>.
OAHU, Hawaii - Already crowned the 2015 #SUWS Women's Champion at the US Open of SUP in Huntington Beach, Candice Appleby finished the season unbeaten with another win, but she had to work hard here in Hawaii, pushed all the way by the rising young star Fiona Wylde. Fiona, who had won the Long Distance Race Saturday, put all the pressure on Candice, who found enough to win the final two Heats in the Sprint racing to take the Overall win by just 6 seconds.
It was Starboard and Werner Paddles athlete Fiona Wylde who took the win in Heat 1, forcing Candice Appleby to come out fighting in Heat 2. The Southern Californian certainly did that, and her power and determination meant that after she rounded the Final Buoy she was able to drop into a bump alongside Fiona and Angie but, crucially, stay on her feet and surf her way all the way to the finish, giving her the win by some distance.
In Heat 3 Fiona took an early lead heading into Buoy 1 with Candice back in fifth place. Heading into the second buoy, Candice started to make her move, and stroke by stroke she was able to claw her way back into the top pack with Fiona and Angie Jackson. Rounding the final buoy level with Fiona and Angie, Candice was the only one able to catch the first bump back toward the finish line, finishing all guns blazing to power to the win. With the Overall Result calculated by comparing average timings across the sprint and long distance courses, Fiona won the long distance by 8m38s and Candice won the Sprints by 8m44s, giving her the win by just 6 seconds.
With Fiona winning the long distance race on Saturday and Candice winning the Sprints on Sunday, the two ladies were tied for the overall win, but due to a count back on the timing, it was Candice who came out on top with the overall win for the final event of the season.
Click HERE for more SUP Racing news.
For the men, it was ratings leader Kai Lenny (Naish) who was able to secure his third #SUWS Title after winning Stop #4 on the 2015 World Series with a win in the Sprint racing in a climactic finish at Turtle Bay Resort. With Starboard SUP's Connor Baxter putting all the pressure on by winning the Long Distance on Day 1, Kai needed to be in top form to secure his Seventh World Title (3x Stand Up World Series Racing & 4x Stand Up World Tour Surfing). With 2-3 foot surf making for some interesting Sprint Racing, the twenty-three-year-old Hawaiian stormed into a lead right from the start. Winning Round 1, Heat 1 easily, he went straight into Semi-Final 1, where once again he powered off the start line and into a comfortable lead and straight into the Final.
Up against some of the World's Best paddlers in the Final, it was neck and neck straight from the starting horn with Naish teammate Casper Steinfath, the Naish riders seeming to find another gear and storming out in front. Hitting Buoy 1 first, Kai was able to find a small bump from the Outside Buoy and extend his lead over Casper. The Final saw the athletes having to complete two laps of the Turtle Bay Course, and with Casper Steinfath falling on the way back out for the start of the second lap, Kai Lenny was left to complete his final lap unchallenged and win by a considerable distance. With Casper overtaken by the chasing pack, it was Kody Kerbox, who had impressed all day with his skill at finding bumps, who came from behind to secure second place, pushed by Bullet Obra and Jake Jensen, with Connor Baxter and Zane Schweitzer further back after going down on the wave after they rounded the Final Buoy.
Stay tuned for the full recap of the event and video highlights coming soon!
To see more news and highlights from the 2015 Stand Up World Series, Click HERE.
Artist Statement for Wanda K. Tyner:
My art reflects my adventurous spirit and passion for exploring and storytelling. I enjoy the focus and precision it takes to design, combine colors, textures, shapes and techniques to create kiln-formed glass art that can be bright, bold, subtle, or textured.
I am fascinated with the scientific properties of glass and the endless possibilities through kiln-forming. I incorporate the design principles of balance, focus and harmony while creating unique fused glass artwork and functional food-safe pieces.
I love infusing the vibrant colors from the world in which we live into my art. My distinctive designs are inspired as I cut glass into geometric and abstract shapes juxtaposing wild, chaotic design elements with serene backgrounds and symmetry creating pattern, movement and form. I manipulate and form the glass shapes with flow, dimension and texture inside my kiln.
Every piece of my artwork has a story that represents nature, a specific subject, a client’s vision, an emotion, an exploration of combining colors and patterns, or my own whimsy.
Pictures of my studio with kilns, cutting table, saws, and of course, glass.
This Sunday Elder Music column was launched in December of 2008. By May of the following year, one commenter, Peter Tibbles, had added so much knowledge and value to my poor attempts at musical presentations that I asked him to take over the column. He's been here each week ever since delighting us with his astonishing grasp of just about everything musical, his humor and sense of fun. You can read Peter's bio here and find links to all his columns here.
I’m sorry if we missed anyone special to you this year, but for extended times, both Ronni’s and my computers went toes up themselves, thus we were off the air and some may have slipped by without our noticing them. There were other trying periods for both of us as well.
MONTSERRAT CABALLÉ was a Spanish soprano best known for singing bel canto roles – Donizetti, Bellini and Rossini. She was also known as one of the best interpreters of Verdi’s music. She made her debut in true fairy-tale fashion, coming on to sing the lead role after being an understudy to Marilyn Horne, and was showered with accolades.
Montserrat was one of the finest singers of the 20th century and performed in pretty much all the major opera houses of the world. She was also instrumental in introducing José Carreras to the world. From the Puccini opera Manon Lescaut, Montserrat performs the aria In Quelle Trine Morbide. (She was 85)
♫ Montserrat Caballé - Manon Lescaut ~ In Quelle Trine Morbide
RICHARD GILL was an Australian conductor of choral, operatic and orchestral works. He was also a musical educator and a great advocate for musical education for children. (76)
CHARLES NEVILLE was a member of the Neville Brothers, one of the best bands on the planet. He played saxophone in the group and he also had a separate career playing modern jazz. (79)
VINCE MARTIN was a singer, songwriter and guitarist who was popular during the folk music era of the sixties. He often played as a duo with Fred Neil, with whom he also recorded. (81)
DEAN WEBB was the mandolin player for The Dillards, a bluegrass band that expanded the repertoire of the genre by adding electric instruments and playing rock songs as well as traditional fare. (81)
TONY JOE WHITE was a singer/songwriter and guitarist of the first order. He was more a cult figure than someone in the mainstream but he had a few hits over the years. His biggest, Polk Salad Annie, came very early in his career. This song was covered by many, including a fine version by Elvis.
Other songs of his included Rainy Night in Georgia, also covered extensively, and Steamy Windows, a hit for Tina Turner. He kept performing and recording until the end, including a fine blues album released recently. Tony Joe performs High Sheriff of Calhoun Parrish, from early in his career. (75)
♫ Tony Joe White - High Sheriff of Calhoun Parrish
HENRY BUTLER was a jazz pianist and also an acclaimed photographer in spite of being blind since early childhood. He was yet another in the long list of great Louisiana pianists. (69)
DENNIS EDWARDS was an R&B and soul singer noted for joining the Temptations after their fine lead singer David Ruffin left. He later had a solo career as well as joining David and Eddie Kendricks, the other notable singer from the group. (74)
RANDY SCRUGGS was a guitarist, songwriter and record producer whose songs have been covered by many country stars. He also played guitar on even more artists’ records. He was the son of renowned blue grass player Earl Scruggs and helped to introduce modern sensibilities into Earl’s sound. (64)
ROY HARGROVE was a jazz trumpeter who incorporated elements of hip hop, soul, funk and gospel into his music. Besides leading his own group he performed with most of the best jazz performers over the years. (49)
HUGH MASEKELA played the trumpet, and similar instruments, as well as singing and composing music. He was born in South Africa and became a vocal critic of the appalling Apartheid regime that ruled the country at the time.
He later studied classical music in London. He was mostly a jazz performer but ventured into pop music from time to time – he played with The Byrds on one of their albums. He sings and plays Alright from his album “No Borders”. (78)
ROBERT MANN was a violinist and the founder of the acclaimed Juilliard Quartet, one the foremost string quartets in the world. He was also a conductor and music teacher. (97)
VIC DAMONE was a crooner much influenced by Frank Sinatra and Perry Como. He had his own radio program and later on a TV show as well. He had a number of hits in the fifties and sixties. (89)
HARRY M MILLER was an Australian music promoter who first brought out Louis Armstrong, the Rolling Stones, the Beach Boys and many others. He also staged the first productions of Hair and the Rocky Horror Show amongst others. (84)
Dominick RANDY SAFUTO was the lead singer for the Doowop group Randy and the Rainbows who had hits with Denise (later covered memorably by Blondie), Little Star and a few other similar songs. (71)
CHARLES AZNAVOUR was a French singer, songwriter and diplomat whose songs spread far and wide, and have been translated into many different languages. During the war he and his family sheltered and rescued many people risking their own lives.
He started performing at a young age and it was when he opened the bill for Edith Piaf at the famous Moulin Rouge that his career began in earnest. His songs have been covered by most of the famous (and less so) performers over the years. From the more than 1,200 songs he wrote I have chosen La Boheme. (94)
♫ Charles Aznavour - La Boheme
BOB DOROUGH was a BeBop singer, songwriter and pianist. He performed with comedians, folk musicians and jazz legends, including adding a rare vocal to a Miles Davis composition. He also had a successful career creating educational songs for kids on maths, history and so on. (94)
NANETTE FABRAY was an actress, singer and dancer who started in Vaudeville and became musical theatre staple. She also appeared in several films. (97)
SONNY PAYNE was the long-time host of the radio program King Biscuit Time that introduced blues music to several generations of listeners. (92)
JIM RODFORD was a bass player for the groups The Kinks and The Zombies. He was also a founding member of Argent. (76)
MARTY BALIN was a singer, songwriter and a founder of Jefferson Airplane. Marty sings solo lead on the song Comin' Back to Me from their successful and ground breaking album “Surrealistic Pillow”. (76)
♫ Jefferson Airplane - Comin' Back to Me
EDWIN HAWKINS was a gospel singer who had a surprise hit with his song Oh Happy Day in 1969. His group toured widely and often appeared at music festivals around the world. (74)
TAB HUNTER was an actor and occasional singer, one of whose records my sister owned as a young girl. (86)
THOMAS’S MUSIC SHOP started out selling sheet music and musical instruments in Melbourne. They later added records and CDs. They were the go-to place for classical music. (96)
HARVEY SCHMIDT co-wrote the long running musical “The Fantasticks” (with Tom Jones, not the singer). The pair also created “I Do! I Do!” and other musicals. (88)
NANCY WILSON was a jazz singer who had crossover pop hits, mainly in the sixties, but later on as well. She learned from the best – Nat King Cole, Billy Eckstine and others were on the records her father brought home.
Nancy’s career began when, upon meeting Cannonball Adderley he suggested she move to New York where her style would be more appreciated. She took his advice and became an almost instant success. Her albums not only topped the jazz charts, but frequently the pop ones as well. She also appeared on all manner of TV shows.
From early in her career, indeed her first hit, is Guess Who I Saw Today. (81)
♫ Nancy Wilson - Guess Who I Saw Today
LORRIE COLLINS, along with her brother Larry, formed the Collins Kids, who were big rockabilly performers in the fifties. (76)
LAZY LESTER (Leslie Johnson) was a blues harmonica and guitar player as well as the writer of many songs that have been covered by just about everyone who plays the blues, as well as rock and country. (85)
BARBARA ALSTON was a founding member and lead singer for the vocal group The Crystals on their early records. She later became a support singer which she preferred due to her excessive shyness. (74)
RAY THOMAS was a singer and flute player and also a founding member of the progressive rock group The Moody Blues. He continued with the group until early this century. (76)
TERRY EVANS was a soul, R&B and blues singer, guitarist and songwriter. He played with many people over the years, notably long stints with Bobby King and Ry Cooder. He also performed with Boz Scaggs, John Lee Hooker, Eric Clapton, Maria Muldaur and many others. He even found time to have a successful solo career.
Terry performs That's The Way Love Turned Out For Me from the album “Blues For Thought”. (90)
♫ Terry Evans - That's The Way Love Turned Out For Me
DONALD MCGUIRE was a singer with the fifties group The Hilltoppers. They had a couple of hits at the time, the most notable being Marianne. (86)
CONWAY SAVAGE was the long time pianist for Nick Cave and the Bad Seeds. He also released solo albums, and was a member of several Australian groups in the eighties. (58)
RANDY WESTON was a jazz pianist and composer. He was influenced by the best – Thelonious Monk, Duke Ellington and Nat King Cole. He made dozens of records, the last, earlier this year. (92)
GEORGE WALKER was a composer, concert pianist and teacher. His compositions included piano sonatas, symphonies, string quartets and many vocal works. (96)
OTIS RUSH was a blues guitarist, singer and songwriter. He played the guitar left handed but strung as a right hander which probably contributed to his distinctive sound much imitated by younger blues and rock guitarists. Like many, he moved to Chicago after hearing Muddy Waters play and made a name for himself playing in the clubs.
From the album “Right Place, Wrong Time” Otis sings and plays Tore Up. (83)
ED KING was a guitarist and songwriter notable for such diverse works as Incense and Peppermints and Sweet Home Alabama. (68)
SPENCER P JONES was a New Zealand born, Australian guitarist who was in several of the leading Australian groups of the last forty years. (61)
DON CHERRY was a singer in the Sinatra mould, who had a number of hits in the fifties. He was also a world ranked golfer. (94)
EDDIE WILLIS was a session guitarist, one of the “Funk Brothers”, who played behind just about every Motown hit. (82)
COLIN BRUMBY was an Australian composer and conductor. He studied in Spain and Britain before returning to Australia to become professor and composer in residence at Brisbane University.
He eventually tired of working in atonal music, and switched to tonal which led to many more commissions and greater acceptance by the public. He wrote operas, concertos for many diverse instruments, two symphonies, chamber works, and notably, a number of operas for children.
Here is the second movement of his Trio for Clarinet, Cello and Piano. (84)
♫ Colin Brumby Trio for clarinet cello and piano (2)
GEOFF EMERICK was a recording engineer at Abbey Road studios who recorded the Beatles’ records from Sergeant Pepper onwards. He also recorded many other groups. (72)
CHAS HODGES was a singer, pianist and guitarist best known for being half of Chas and Dave. (74)
BIG JAY MCNEELY was an R&B saxophone player who helped to define the sound of early rock & roll. His outrageous onstage antics probably helped as well. (91)
ROY CLARK was a country singer and guitarist who is probably best known for his appearances on “Hee Haw”. I prefer to remember him as a superb guitar player. (85)
ARETHA FRANKLIN, considered the “Queen of Soul”, started her musical career singing and playing organ and piano at her father’s church.
Her first foray into recorded music was unsuccessful as the record company didn’t really understand what she was about. When she found a sympathetic company (Atlantic) the sky was the limit. Her first singles shot to the top of the charts as did most of the following ones.
Besides her music, Aretha was a great champion of civil rights and donated millions to help the poor, the indigenous and many other such causes. | https://www.timegoesby.net/weblog/2018/12/elder-music-toes-up-in-2018.html |
Q:
Decoding bash script
I have the following line in a bash script, parsing input arguments:
((10#$2 > 0)) 2>/dev/null && shift 2 || shift
Basically it helps handle parameters with an optional integer subparameter. Like:
-x 100 -y
-x -y
Could you explain how it works?
A:
The line checks whether the second positional parameter is greater than 0. If the condition is true then it shifts the positional parameters 3, 4, ... to 1, 2, ... If the condition is false then it shifts the positional parameters 2, 3, ... to 1, 2, ...
Constants with leading zero are interpreted as octal numbers. Saying 10#$2 causes the positional parameter $2 to be interpreted as a decimal. You might also want to refer to Shell Arithmetic.
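For instance, the `10#` prefix matters whenever the value happens to have a leading zero (the value here is a made-up example, and this uses bash's `$(( ))` arithmetic):

```shell
x=010
echo $(( x + 1 ))       # 9  -- "010" is read as octal (= 8) by default
echo $(( 10#$x + 1 ))   # 11 -- 10# forces base 10, so "010" is ten
```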
As such, ((10#$2 > 0)) checks if the second positional parameter represented in base 10 is greater than 0. 2>/dev/null causes any errors resulting from this test to be redirected to /dev/null. See Bash Arithmetic Expressions for more on the # operator.
&& and || are conditional constructs. So if the condition is true then shift 2 is executed else shift is executed.
expression1 && expression2
True if both expression1 and expression2 are true.
expression1 || expression2
True if either expression1 or expression2 is true.
As an example, refer to the following:
$ ((10>42)) && echo greater || echo smaller # Condition is false so the `echo smaller` expression is evaluated
smaller
$ ((100>42)) && echo greater || echo smaller # Condition is true so the `echo greater` expression is evaluated
greater
Quoting from the manual:
((...))
(( expression ))
The arithmetic expression is evaluated according to the rules described below (see Shell Arithmetic). If the value of the expression is non-zero, the return status is 0; otherwise the return status is 1.
This is exactly equivalent to
let "expression"
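Putting it all together, the line from the question typically sits inside an option-handling loop. The following is a sketch with made-up option names, defaults, and output format, purely for illustration (bash, since `(( ))` and `10#` are bash features):

```shell
#!/bin/bash
# -x takes an OPTIONAL positive-integer sub-parameter: "-x 100" or just "-x".
parse_args() {
  x_val=1          # default when -x is given without a number
  y_seen=no
  while [ $# -gt 0 ]; do
    case $1 in
      -x)
        # If $2 is a positive decimal integer, consume both words (shift 2);
        # otherwise $2 is the next option, so consume only -x itself (shift).
        # A non-numeric $2 makes the arithmetic fail; 2>/dev/null hides the
        # error message and the nonzero status sends us to the else branch.
        if (( 10#$2 > 0 )) 2>/dev/null; then
          x_val=$2
          shift 2
        else
          shift
        fi
        ;;
      -y) y_seen=yes; shift ;;
      *)  shift ;;
    esac
  done
  echo "x=$x_val y=$y_seen"
}

parse_args -x 100 -y   # x=100 y=yes
parse_args -x -y       # x=1 y=yes
```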
To enter some often used macro commands, you can use dialogs from the floating toolbar. If the toolbar is unavailable, click the "Check extensions" button in Options and follow instructions.
However, Quick Macros does not provide dialogs for all commands and functions. You will have to type them directly in the editor. You can find all intrinsic commands and functions through the reference topic. There are several features that can help in finding commands and getting help.
When you type a dot (.) somewhere in text (for example, at the beginning of a line), a list of functions and other identifiers that you can use appears. Various identifier kinds (functions, types, constants, variables, etc.) have different icons. To insert an identifier, double-click it. Or begin to type, and press Tab or Enter to complete.
At the top of the list are categories - collections of related functions. Type . after a category name to view the functions. At the bottom of the list are various libraries. Type . after a library name to view its contents.
Press:
Also there are many other functions (member functions) that are displayed when you type . after a variable of a certain type, for example str, Acc, Database, ARRAY, BSTR, a COM interface. To use such functions, declare a variable of that type, and call functions with that variable. Example:
Ftp f
f.Connect("ftp.myserver.com" "user" "passw")
f.DirSet("public_html")
In the lists, names of hidden and restricted members are gray.
If you want to hide a user-defined function in the list, place it in a private folder. The list also does not show anything that begins with __ (two underscores). To show hidden items, press Ctrl+Shift+. or use menu Edit -> Members -> Show Hidden.
To show the list when you already partially typed an identifier, choose the Completion command (Ctrl+Space). In most cases it shows only identifiers that begin with the same letter. Ctrl+. and other commands also can be used for this.
When the text insertion point is on an identifier (function, type, etc, except variables), QM status bar shows some info for it. You can Ctrl+click to view it in QM output.
The status bar info for intrinsic and user-defined functions is always available. For other user-defined identifiers, it is available only if the identifier is already declared; that is, if the macro or function containing the declaration is already compiled, or the identifier comes from a type library or reference file. QM automatically declares identifiers from type libraries and reference files when you type typelib.identifier, and it may also look in reference files to show info in the status bar.
To show help for a function or other identifier, type or click it and press F1. | https://quickmacros.com/help/QM_Help/IDH_TYPEINFO.html |
Weiler Atherton – Polygon Clipping Algorithm
Background:
The Weiler Atherton Polygon Clipping Algorithm is an algorithm made to allow clipping of even concave polygons to be possible. Unlike the Sutherland – Hodgman polygon clipping algorithm, this algorithm is able to clip concave polygons without leaving any residue behind.
Algorithm:
1. First make a list of all intersection points, namely i1, i2, i3, ...
2. Classify those intersection points as entering or exiting.
3. Now, make two lists, one for the clipping polygon, and the other for the clipped polygon.
4. Fill both the lists up in such a way that the intersection points lie between the correct vertices of each of the polygons. That is, the clipping polygon list is filled up with all the vertices of the clipping polygon along with the intersecting points lying between the corresponding vertices.
5. Now, start at the 'to be clipped' polygon's list.
6. Choose the first intersection point which has been labelled as an entering point. Follow the points in the list (looping back to the top of the list, in case the list ends) and keep on pushing them into a vector or something similar of the sorts. Keep on following the list until an exiting intersection point is found.
7. Now switch the list to the 'polygon that is clipping' list, and find the exiting intersection that was previously encountered. Now keep on following the points in this list (similar to how we followed the previous list) until the entering intersection point is found (the one that was found in the previous 'to be clipped' polygon's list).
8. The vector now formed by pushing all the encountered points in the two lists is the clipped polygon (one of the many clipped polygons, if any of the polygons is concave).
9. Repeat this clipping procedure (i.e. from step 5) until all the entering intersection points have been visited once.
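Assuming steps 1–4 are already done, the traversal phase (steps 5–9) can be sketched in Python. The node representation below (tuples tagged `'v'` for ordinary vertices and `'i'` for intersections, with an entering/exiting flag) is a made-up convenience for this sketch, not something from the original paper:

```python
def traverse(subject, clip):
    """Walk two circular lists built in steps 1-4 and emit clipped
    sub-polygons (steps 5-9). List items are ('v', label) for ordinary
    vertices or ('i', iid, kind) for intersections, where kind is
    'enter' or 'exit' relative to the clipping polygon."""
    def find(lst, iid):
        # Locate intersection iid in a list (each iid appears in both lists).
        return next(k for k, it in enumerate(lst)
                    if it[0] == 'i' and it[1] == iid)

    entering = [it[1] for it in subject if it[0] == 'i' and it[2] == 'enter']
    visited, polygons = set(), []
    for start in entering:
        if start in visited:
            continue
        visited.add(start)
        lst, k = subject, find(subject, start)
        poly = [f"i{start}"]
        k = (k + 1) % len(lst)
        while True:
            it = lst[k]
            if it[0] == 'i':
                iid = it[1]
                if iid == start:
                    break                      # closed this sub-polygon
                if it[2] == 'enter':
                    visited.add(iid)
                poly.append(f"i{iid}")
                # Switch to the other list at every intersection.
                lst = clip if lst is subject else subject
                k = find(lst, iid)
            else:
                poly.append(it[1])
            k = (k + 1) % len(lst)
        polygons.append(poly)
    return polygons

# Hypothetical example: subject path A -> i1 -> B -> i2 (B inside the clip
# window, A outside), clip window P-Q-R-S with i1/i2 inserted on its boundary.
subject = [('v', 'A'), ('i', 1, 'enter'), ('v', 'B'), ('i', 2, 'exit')]
clip    = [('v', 'P'), ('i', 2, 'exit'), ('i', 1, 'enter'),
           ('v', 'Q'), ('v', 'R'), ('v', 'S')]
print(traverse(subject, clip))   # [['i1', 'B', 'i2']]
```

The key design point is that both lists are walked circularly and the walker swaps lists at every intersection it crosses, which is exactly what steps 6 and 7 describe.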
Explanation:
1. Finding all the intersection points and grouping them
Here, let there be a polygon ABCD and another polygon VWXYZ. Let ABCD be the clipping polygon and let VWXYZ be the clipped polygon.
So, we can find the intersection points using any method. For example, we can find the intersecting points separately and then, for each intersecting point, find whether it is entering or exiting; or, we can use Cyrus Beck and find all the intersecting points while also getting whether each point is entering or exiting. Refer to Cyrus Beck for more information on this algorithm.
2. Making and filling of two lists
Now, we make two lists. One for the clipping polygon and one for the clipped polygon.
Now this is how we fill it:
3. Running of the algorithm
We start at the clipped polygon’s list, i.e. VWXYZ.
Now, we find the first intersecting point that is entering. Hence we choose i1.
From here we begin the making of the list of vertices (or vector) to make a clipped sub-polygon.
According to the given example, i1 Y i2 is a clipped sub-polygon.
Similarly, we get:
i0 V i3 as another sub-polygon also.
Hence, we were able to get two sub-polygons as a result of this polygon clipping, which involved a concave polygon, which resulted in:
Similarly, this clipping works for convex polygons.
Limitations:
This polygon clipping algorithm does not work for self – intersecting polygons, although some methods have been proposed to be able to solve this issue also, and have successfully worked.
Example:
Let V1V2V3V4V5V6 be the clipping window and P1 P2 P3 P4 P5 P6 be the polygon.
Now, here is how the algorithm will operate.
Two lists are generated, one for the clipping window, and the other for the polygon. We will start from the polygon’s list. (Note that the polygon’s list only contains the polygon’s vertices and the intersections and similarly for the clipping window’s list.)
So, according to the algorithm, the first polygon to be formed will be i2 i3 i8 i1
Then the next subpolygon, i4 i5 i6 i7.
The output will look like:
Citation:
K. Weiler and P. Atherton. 1988. Hidden surface removal using polygon area sorting. In Tutorial: computer graphics; image synthesis, Kenneth I. Joy, Charles W. Grant, Nelson L. Max, and Lansing Hatfield (Eds.). Computer Science Press, Inc., New York, NY, USA 209-217. | https://www.geeksforgeeks.org/weiler-atherton-polygon-clipping-algorithm/ |
Hamilton City Council has awarded its latest major construction contract to Fulton Hogan for the construction of the new Borman Road connection.
The project, which is expected to take two years to complete, supports Council's focus to provide a safe, well-connected transport network for Rototuna.
“It’s about providing safe and accessible ways for the community to get around, no matter how they choose to travel,” said Council’s Development General Manager, Chris Allen.
“That means having urban roads, safe paths for pedestrians, people on bikes and scooters, and safe places for people to cross the road.”
The project also supports Council’s Vision Zero goal – to have zero deaths and zero serious injuries on our roads – and the Biking and Micromobility Plan, which aims to make it more convenient for Hamiltonians to choose active modes of transport to get around our city.
Designated in 2004, this highly sought-after section of road will join the two existing parts of Borman Road, improving connectivity and making it safer for the community to move around – including to the new Rototuna Village, and for those heading to and from Rototuna Junior and Senior High schools.
Along with completing the new 600m stretch of road and shared paths that connect both sides of Rototuna, the $22 million project also includes:
- upgrading the Borman Road and Horsham Downs Road intersection to a raised intersection with traffic lights
- completing urban road upgrades and installing shared paths on Horsham Downs Road (from the intersection to North Ridge Drive) and the existing Borman Road (from the intersection to Barrington Drive).
“This is in addition to the already completed upgrade of North Ridge Drive and the wetland, which was installed to manage stormwater for the quickly expanding development in the area,” said Allen.
Now that the contract has been awarded, preparations have started to begin construction works in November.
“Construction of the on-road works along Horsham Downs Road and Borman Road will be staged to minimise disruption to residents and road users.
“These works will involve significant traffic management and the occasional stop go; however, we don’t plan on closing any roads for this work. We will also be working with residents and local businesses to make sure access is maintained during construction,” said Allen.
“The plan is to start on the existing section of Borman Road, before moving to Horsham Downs Road and the intersection.”
The new section of Borman Road will take two years to construct. Minimal disruption is expected for road users during this time, as these works are in a greenfield area. | https://hamilton.govt.nz/your-council/news/on-the-move/contract-awarded-for-rototuna-road-connection-1661486655840 |
The brain weighs, on average, about 3.3 pounds (1.5 kilograms). It requires about 20 percent of the body's energy to run. When the brain is dead, there is no longer any neurological activity. With the help of machines, the body can be kept alive for a short time, but it will stop functioning within a week. Scientists could transplant a brain, but doing it would be a problem.
Believe it or not, even though tens of thousands of papers have been written about consciousness in the literature, nobody has a suitable definition for “consciousness.” Some people in AI believe that one day we will be immortal. The atoms that make up the body give consciousness, which gives rise to personality, fears, and desires.
Memories are processed at the brain’s center, and they’ve been able to duplicate the functions with a chip. So again, this does not mean it encodes memories with a chip. But it does mean that it takes the brain’s information storage and has a silicon chip that duplicates those functions.
Alchemists dedicated their lives to discovering the secrets of immortality. The suggestion is that the human mind could be uploaded into digital form. It is the hope of transhumanists. People believe science will provide a way to transcend physical limitations and access cyber immortality.
Can you upload your brain to a computer?
The idea of the brain uploading to a computer includes all thoughts, feelings, and memories. It is called whole brain emulation. Surprisingly enough, it’s not a particularly new concept. Similar ideas have been the fodder of science fiction writers for almost 100 years, with legendary author Arthur Clarke describing a city one billion years in the future. We’re having this conversation today because of the rapid advances in nanotechnology, biotech, and artificial intelligence over the last few decades.
Uploading a brain to a computer wouldn't make people immortal, as many think. The uploaded mind on the computer would have memories and a personality identical to the original person's, but it would only be a copy. Similarly, if people were to clone themselves, the original would still maintain its own existence and eventually die from natural causes.
The science behind mind uploading or the process of uploading consciousness into a computer to engage in a virtual reality simulation is possible. Kurzweil is an American inventor and futurist. He’s a genuine visionary and credible inventor. He’s also been making predictions for some time, with well-known examples being nanotechnology, ebooks, face recognition software, and a computer beating the human at chess by 2000. The development Kurzweil is most passionate about is the singularity.
By 2045, humans will be able to record their memories on the computer. The world's best computer engineers and brain neuroscientists are working on the Avatar research project. – Dimitry Itskov, Russian businessman
I think the brain is like a program in the mind. Which is like a computer, so it’s theoretically possible to copy the brain onto a computer. – Stephen Hawking
Humans may live for more than 1000 years! We believe it will be possible within 2045. – Elon Musk
The computational power needed to simulate a human brain will become available later in the 21st century, and you’ll be able to upload your brain to a computer by the year 2045. – Ray Kurzweil
The technological singularity is a theoretical point when technology will develop superintelligence, allowing it to upgrade itself at an exponential rate. Kurzweil says humans will merge their intelligence with machine intelligence and become super beings. All of this will happen by the year 2045! Humans will have transcended the need for physical bodies and learned to upload their brains to computers along with their personalities, skills, and personal histories.
These brains could then be dropped into robots or projected as living holograms. Maybe you could be uploaded into the cloud where you could live out eternal life in a 3D simulation of bliss and bottomless digital Pina Coladas. The first can scan the entire human brain to create the connectome.
Brain mapping
A 3D map of the brain shows every neuron and molecule within. There are big questions about whether this would ever be possible. The brain is an insanely complex piece of machinery. So far, scientists have only been able to map the connectome of one creature, a one-millimeter roundworm with a total of 302 neurons in its entire nervous system. By comparison, a fruit fly or ant has about 250000 neurons.
A mouse has about 71 million, and the human brain has 86 billion neurons. Today, brain mapping relies on technology like MRIs, MEGs, and EEGs, giving a partial picture that is incomplete by a mile. Other potential scanning methods include electrodes placed directly on the brain, with each electrode capable of mapping thousands of neurons. But this technique requires cutting open the skull!
For decades, computer scientists from giant companies, major universities, and world governments have mapped and translated the brain network into a computer network. But, thoughts don’t live on a single neuron, nor are they processed in a single place, but rather as a patterned network of brain cells picking up the information and processing it.
Memory space/Storage
The human brain isn’t a finite storage system like a hard drive. There could be a lot of space in there. Estimates vary from 2.5 petabytes, about a million gigabytes, down to a few measly gigabytes. If you wanted to copy 2.5 petabytes over a USB 3.0 connection, it would have to run continuously for more than 80 DAYS. The biggest issue in consciousness transfer is mapping the brain accurately.
Simultaneously, that number may seem achievable that estimate does not consider the complexities of a fully functioning brain with decision-making abilities. The actual data storage needs for a functioning brain would be immense. Nobody knows how much storage space you would need.
The human brain mapping exercise would require computing power to process vast amounts of data – zettabytes. How big is a zettabyte? If every person on Earth in the year 2000 had a 180-gigabyte hard drive filled with data, all those drives together would hold one zettabyte. It's the same amount of data as 200 trillion mp3s!
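A quick back-of-envelope check of the numbers above (the throughput figure is an assumption: roughly 400 MB/s sustained for USB 3.0, well below its 5 Gbit/s peak):

```python
PB = 1e15            # petabyte, in bytes
ZB = 1e21            # zettabyte, in bytes
usb3 = 400e6         # assumed sustained USB 3.0 rate, bytes per second

# 2.5 PB copied continuously over USB 3.0:
copy_days = 2.5 * PB / usb3 / 86400
print(round(copy_days))          # ~72 days of continuous copying

# How many 180 GB drives make up one zettabyte:
drives = ZB / 180e9
print(f"{drives:.2e}")           # ~5.56e+09, about one per person in 2000
```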
Supercomputer
Neurons don’t simply store one bit of information like in a computer. Each neuron can create 1,000 connections with those around them and, unlike machines, aren’t only on or off. They have other states too. Luckily, humans are pretty obsessed with mind-reading.
The fastest supercomputer in the world is Fugaku. But this obscene technology doesn’t even come close to the processing power. We would need to create an accurate virtual copy of the brain. It may be that quantum computers can step up to the plate for us on this one.
If computing technology continues to follow trends such as Moore’s law, supercomputers may simulate the human mind within the next few years.
- Moore’s law states that computing power doubles approximately every two years.
Artificial Robot
We could pop our digital doppelgangers into robots or create mechanical versions of ourselves. But that relies on robots evolving fast enough over the coming years for it to be worthwhile. The most advanced robots around at the moment can climb stairs, mix cocktails and do gymnastics. But they’re nowhere near as agile and adaptable as the average ape. The more likely and probably more affordable scenario for an average person would be creating a digital version of you.
It could be loaded into a simulated virtual reality. This digital world would require the sensory complexity to replicate all the ingredients. It makes up the human experience of sights, smells, tastes, textures, emotions, and thinking. We’re going to have to build the matrix. But unlike pre-red pill neo, the inhabitants of this world would know complete. It would almost certainly require artificial intelligence, and creating a continuous, high-quality believable experience would require giant technological strides forward.
For example, the computer game industry has fueled incredible advances in graphics in recent decades. It might be possible to make a digital universe look pretty great soon. But for obvious reasons, no real work has been done to satisfy the other human senses like smell, touch, and taste in a digital setting. Even assuming we could meet the technological demands to build a satisfying and realistic digital afterlife for our uploaded minds.
Experiment with brain uploading
Researchers at the University of California, Berkeley scanned the brains of people while they watched videos and, using only the brain scans, a computer was able to determine what the brains were processing. Using an fMRI scan to follow blood through the brain, and three-dimensional representations of the scanned areas called voxels, the researchers trained the computer to piece together what the brain was looking at.
Using tech like this, scientists have scanned the brains of Counterstrike players. Also, they see when they want to turn left or right, but emotional response overwhelms the scan if their character is killed. Emotions are far too complex to read yet.
There are trillions of synaptic connections in each person’s head. A study in Nature explored how this process works using mice with cells that had been genetically modified to activate when laser light hits. This is called optogenetics. Researchers demonstrated how memories are written, erased, and reactivated with this method. Then they can “implant” a false memory into another genetically modified mouse!
Some researchers believe that an artificial brain could be constructed as early as ten years. To test this technology, researchers uploaded the mind of a worm onto the computer and during a virtual recreation of the worm’s mind. The simulated worm reacted the same way as a real worm, not because anybody programmed it. But because this behavior was hardwired in its neural network. That’s a worm with only 300 neurons.
In one study, scientists successfully simulated a rat’s neocortical column elements, a complex layer of brain tissue in all mammals. We must build a functional human brain model, a series of cortical simulations. These simulations have utilized the best computer technology, such as IBM’s Blue Gene supercomputer, and have successfully matched the processing power.
How would that system operate? It would require huge data centers running 24/7 across massive networks. In the same way, the cloud does now. But people have to pay a subscription for cloud storage each month. Uploads could be erased by computer viruses or malware without the need to destroy their hardware. It could mean that the assassination of uploads could be easier than the assassination of their biological human counterparts.
If a virus erased an upload, would that be prosecuted as murder? Would that upload have committed a crime punishable by a lengthy prison sentence or even the death penalty if one upload erased another? The questions are endless. These are all potentially huge social issues that civilization will have to confront later in this century.
Project – Neural Link
Elon Musk is preparing to start another revolution, this time in a different area: he created the project Neural Link. The startup's developers believe that the brain and the computer can be connected. They need to implant polymer threads with electrodes into the human brain. These will read the activity of neurons and stimulate them.
Electrodes are divided into 96 flexible threads thinner than a human hair. In total, they contain three thousand and seventy-two electrodes. The company has also developed a unique robot for implanting the threads. It will sew them into the nerve tissue on the same principle as a sewing machine. Stitch by stitch. It takes only 16 minutes to integrate the human brain’s interface fully.
It is incredibly fast for such technically complex operations. A special chip will be attached to read data from the threads behind the ear. It will transmit data to a computer. In the future, the developer plans to create a system that would work over a wireless network implanting the device as promised by the scientist.
People who have lost their sight will be able to see those who are left without hearing and finally hear their relatives’ voices again. Paralyzed people can control their smartphones, computers, and advanced prosthetics, restore memory, eliminate brain damage after a stroke, and restore limbs’ functionality. The neural link will be able to do it all in the future. This invention may become as necessary for the ordinary person as a smartphone.
Artificial intelligence will develop more to avoid falling behind in evolution, and humanity will have to make friends with it. In a couple of decades, our brain will have no chance of surpassing this rival. They’ll have to become allies that will also help the neural link. So in the future, will people become cyborgs in general? Elon Musk believes that we’ve already become them.
Sources:
Kandel, Eric. Principles of Neural Science, Fifth Edition. McGraw-Hill Education.
Ayd, Frank. Lexicon of Psychiatry, Neurology and the Neurosciences. Lippincott, Williams & Wilkins.
Shulman, Robert. “Neuroscience: A Multidisciplinary, Multilevel Field.” Brain Imaging: What it Can (and Cannot) Tell Us About Consciousness. Oxford University Press.
Ogawa, Hiroto; Oka, Kotaro. Methods in Neuroethological Research. Springer.
Tanner, Kimberly. “Issues in Neuroscience Education: Making Connections.”
When it comes to painting, preparation is key. A good paint job starts with the proper prep work. This includes everything from cleaning the surface to applying primer and any other necessary steps. In this guide, we'll go over the basics of paint prep and provide a step-by-step guide on how to properly prepare a surface before painting.
Step 1: Clean the Surface
The first step in paint prep is to thoroughly clean the surface you'll be painting. This is important because it helps ensure that the paint adheres properly and prevents any dust, dirt, or debris from getting stuck in the paint. To do this, use a mixture of soap and water and a cloth or sponge to wipe down the surface. For tough stains, a mild abrasive cleaner may be necessary.
Step 2: Sand the Surface
Once the surface is clean, the next step is to sand it down. This helps create a smooth and uniform surface for the paint to adhere to. Use a fine-grit sandpaper and start by sanding in the same direction as the grain of the wood. You may need to go over it a few times to get the desired level of smoothness.
Step 3: Apply Primer
Now that the surface is clean and sanded, it's time to apply the primer. This helps further ensure that the paint will adhere properly and also helps with coverage. To apply the primer, use a brush or roller and start in one corner. Work your way across the surface in an even, steady motion. Make sure to cover the entire surface and allow the primer to dry completely before moving on to the next step.
Step 4: Apply Paint
Once the primer is dry, it's time to apply the paint. Start in one corner and work your way across the surface. Use a brush or roller and make sure to cover the entire surface. Allow the paint to dry completely before adding a second coat. For best results, it's recommended to use a paint sprayer for a smooth and even finish.
Step 5: Add Finishing Touches
After the paint is dry, it's time to add any finishing touches. This could include adding a sealer or topcoat, or adding some decorative touches such as stencils or decorative tape. Whatever you choose, make sure to allow the paint to dry completely before applying any additional layers.
Mediterranean Shrimp Linguine
This picture-perfect linguine is a feast for the eyes and, with a hint of heat, a treat for the palate. —Megan Hidalgo, Quarryville, Pennsylvania
Mediterranean Shrimp Linguine Recipe photo by Taste of Home
Test Kitchen Approved
Contest Winner
Total Time
Prep: 20 min. Cook: 20 min.
Makes
8 servings
Ingredients
1 package (16 ounces) linguine
2 pounds uncooked medium shrimp, peeled and deveined
1 medium onion, chopped
6 tablespoons olive oil
4 garlic cloves, minced
1 cup chopped roasted sweet red peppers
2 cans (2-1/4 ounces each) sliced ripe olives, drained
1/2 cup minced fresh parsley
1/2 cup white wine or chicken broth
1/2 teaspoon crushed red pepper flakes
1/2 teaspoon kosher salt
1/2 teaspoon dried oregano
1/2 teaspoon pepper
3/4 cup crumbled feta cheese
2 tablespoons lemon juice
Directions
Cook linguine according to package directions.
Meanwhile, in a large skillet, saute shrimp and onion in oil until shrimp turn pink. Add garlic; cook 1 minute longer. Stir in the red peppers, olives, parsley, wine, pepper flakes, salt, oregano and pepper. Reduce heat.
Drain linguine, reserving 1/2 cup cooking water. Add linguine and reserved water to the skillet. Stir in cheese and lemon juice; cook and stir until cheese is melted.
Nutrition Facts
1-1/3 cups: 462 calories, 16g fat (3g saturated fat), 144mg cholesterol, 610mg sodium, 48g carbohydrate (4g sugars, 3g fiber), 28g protein. | https://preprod.tasteofhome.com/recipes/mediterranean-shrimp-linguine/ |
Today we will start by looking at the work of Paul Cezanne and his use of Atmospheric perspective.
Paul Cézanne was a French artist and Post-Impressionist painter whose work laid the foundations of the transition from the 19th-century conception of artistic endeavour to a new and radically different world of art in the 20th century.
So how does this relate to what we will be studying today? Today we will be looking at and practicing atmospheric perspective.
Aerial perspective or atmospheric perspective refers to the effect the atmosphere has on the appearance of an object as it is viewed from a distance. As the distance between an object and a viewer increases, the contrast between the object and its background decreases, and the contrast of any markings or details within the object also decreases. So remember when we were playing around with contrast yesterday, well now we have a continuation of the importance of contrast in the form of how contrast decreases as we go further back in space. To put it really simply, stuff gets blurrier as it goes into the distance. In the selected Cezanne paintings above we can see how he used atmospheric perspective to achieve a sense of depth (even though Cezanne was all about flattening the picture plane). Check out the Da Vinci below, that’s the Virgin on the Rocks and is also a great example of how atmospheric perspective works.
Your task will be to work from these landscape photos to create a work which exhibits the qualities of atmospheric perspective. Since we are working with watercolors, the key will be to water down the colors more as they fade into the distance, and to use a higher concentration of color in the foreground. Remember that these principles can be applied to any painting; the key isn't only to learn how to use this principle in landscape painting, but also in other situations where you want the background to recede. Here in Prague, Czech Republic, it is often easy to see atmospheric perspective at play in the hills west of Prague, where we will go in a few weeks for some plein air painting. So consider this some preparatory work.
Job hunting season is coming, and hard skills are the key — all the more so given this year's difficult environment, so we should prepare well. MySQL is a necessary skill for advanced Java programmers, and many candidates stumble on it during interviews. Being proficient in MySQL means not only writing code that implements features, but also keeping systems running normally under high concurrency, especially in the Internet industry.
Therefore, today I will share this MySQL notes document with you. It explains MySQL in three parts: foundations, performance optimization, and architecture design. I hope it will be useful to you. Friends who need this hand-typed MySQL notes document can follow the official account [Java didi] to get it.
Let's take a look at the table of contents of these MySQL notes:
Because these notes were typed entirely by hand, there is no cover to share with you; it's a pity that such an excellent document has no cover.
Main contents
These MySQL notes are divided into three parts: foundations, performance optimization, and architecture design. Next, I will expand on each part and explain the knowledge points of the book in detail. Follow the official account [Java didi] to get the document for free.
Part 1: Basics
As one of the most popular open source database systems, MySQL is well known. However, to take care of readers who are not yet familiar with MySQL, this chapter gives a brief introduction, covering the functional modules of MySQL, how these modules work together, the query processing flow, and so on.
Chapter 1: basic introduction to MySQL
- Introduction to MySQL server
- Simple comparison between MySQL and other databases
- Main applicable scenarios of MySQL
- Summary
Chapter 2: MySQL architecture composition
- MySQL physical file composition
- MySQL server system architecture
- Introduction to MySQL built-in tools
- Summary
Chapter 3: introduction to MySQL storage engine
- MySQL storage engine overview
- Introduction to MyISAM storage engine
- Introduction to InnoDB storage engine
- Introduction to the NDB Cluster storage engine
- Introduction to other storage engines
- Summary
Chapter 4: MySQL security management
- Database system security related factors
- Introduction to MySQL permission system
- MySQL access authorization policy
- Safety precautions
- Summary
Chapter 5: MySQL backup and recovery
- Database backup usage scenario
- Logical backup and recovery test
- Physical backup and recovery
- Design idea of backup strategy
- Summary
Part 2: Performance optimization
Chapter 6: related factors affecting MySQL server performance
- Impact of business requirements on Performance
- Impact of system architecture and Implementation on Performance
- Effect of query statements on system performance
- Impact of schema design on system performance
- Impact of hardware environment on system performance
- Summary
Chapter 7: MySQL database locking mechanism
- Introduction to MySQL locking mechanism
- Analysis of various locking mechanisms
- Make rational use of lock mechanism to optimize MySQL
- Summary
Chapter 8: MySQL database query optimization
- Understand query optimizer of MySQL
- Basic ideas and principles of query statement optimization
- Make full use of explain and profiling
- Rational design and use of index
- Implementation principle and optimization ideas of join
- Order by, group by and distinct optimization
- Summary
Chapter 9: Performance Optimization of MySQL database schema design
- Efficient model design
- Appropriate data type
- Canonical object naming
- Summary
Chapter 10: MySQL server performance optimization
- MySQL installation optimization
- MySQL log settings optimization
- Query cache optimization
- Other commonly used optimizations for MySQL server
- Summary
Chapter 11: common storage engine optimization
- MyISAM storage engine optimization
- InnoDB storage engine optimization
- InnoDB cache related optimization
- Transaction optimization
- Data storage optimization
- InnoDB other optimizations
- InnoDB performance monitoring
Part 3: Architecture design
Chapter 12: basic principles of MySQL extensible design
- What is scalability
- Transaction correlation minimization principle
- Data consistency principle
- High availability and data security principles
- Summary
Chapter 13: MySQL replication for extensibility design
- Significance of replication to scalability design
- Implementation principle of replication mechanism
- Replication implementation level
- Common replication architectures
- Replication setup and implementation
- Summary
Chapter 14: data segmentation of scalable design
- What is data segmentation
- Vertical segmentation of data
- Horizontal segmentation of data
- Use of vertical and horizontal joint segmentation
- Data segmentation and integration scheme
- Possible problems in data segmentation and integration
- Summary
Chapter 15: the use of cache and search in scalability design
- Extensible design extends beyond the database
- Leverage third-party cache solutions
- Self implementation of cache service
- Efficient full-text retrieval using search
- Using distributed parallel computing to realize high-performance operation of large amount of data
- Summary
Chapter 16: MySQL Cluster
- Introduction to MySQL Cluster
- MySQL Cluster Environment Setup
- MySQL Cluster configuration details (config.ini)
- Basic management and maintenance of MySQL Cluster
- Basic optimization ideas
- Summary
Chapter 17: ideas and schemes of high availability design
- Leverage replication for a highly available architecture
- Use MySQL Cluster to achieve overall high availability
- Using DRBD to ensure high security and reliability of data
- Other highly available designs
- Advantages and disadvantages of various high availability schemes
- Summary
Chapter 18: MySQL monitoring of high availability design
- Monitoring system design
- Performance status monitoring
- Summary
Acquisition method
Just like this post and follow the official account [Java didi] to get this hand-typed MySQL notes document for free!
Three things to do after reading ❤️
If you think this article is very helpful to you, I’d like to invite you to help me with three small things:
- Like and share — your likes and comments are the driving force of my creation.
- Follow the official account [Java didi], which shares original knowledge from time to time.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a grism, and more particularly to a grism used suitably for monitoring and/or inspections apparatuses in a factory and the like for scientific observation.
It is to be noted herein that “grism” is a transmission type direct vision dispersive element prepared by combining prisms with a grating so as to allow a light beam having an arbitrary order and an arbitrary wavelength to go straight.
2. Description of the Related Art
In recent years, a number of instruments for astronomical observation having both functions of imaging and spectroscopic function have been developed with progress of two-dimensional detectors. In such an instrument, grisms are used for a dispersive element.
In this case, a so-called high dispersive grism provided with a replica grating has been proposed as a diffraction grating. In such a grism, since the refractive index of the resin used for preparing a replica grating is around 1.5, there has been a problem that high resolving power cannot be attained, because the light beam exceeds its critical angle in the case where the prism to which the replica grating is attached has a refractive index of, for example, around 2.3, even if the vertex angle of the prism is around 40°.
Furthermore, the above-described high dispersive grism provided with a replica grating may of course be used within a range where the light beam does not exceed the critical angle, but this is disadvantageous in efficiency.
In this respect, if a diffraction grating could be directly processed and formed on a surface of a prism, high efficiency could be achieved. However, it is difficult to directly process a diffraction grating having a depth of 1 µm or more on a surface of a prism. Moreover, mass-production is difficult, whereby a grism produced by means of a direct process becomes expensive.
3. Object and Summary of the Invention
The present invention has been made in view of the above-described problems involved in the prior art, and an object of the invention is to provide a grism wherein a light beam does not exceed its critical angle even if the vertex angle of a prism is made to increase, its efficiency can be elevated, besides, mass-production thereof can be made, and a low cost therefor can be realized.
In order to achieve the above-described object, a grism according to the present invention is constituted by combining prisms each prepared from a material having a high refractive index with a volume phase grating (VPG).
Namely, a grism according to the present invention comprises a first prism having a high refractive index, a second prism having a high refractive index, and a volume phase grating used as a diffraction grating; the vertex angle of the above-described first prism being opposed to the vertex angle of the above-described second prism so as to sandwich the above-described volume phase grating between the first prism and the second prism; a light beam being input from the outside through a surface of the above-described first prism; the light beam input inside the first prism being input into the above-described second prism through the above-described volume phase grating; and the light beam input inside the second prism being output to the outside through a surface of the second prism.
Furthermore, the grism according to the present invention is characterized in that the refractive indices of the above-described first and second prisms are higher than the refractive index of the volume phase grating.
Moreover, the grism according to the present invention is characterized in that a material for preparing the above-described first prism is either zinc sulfide or lithium niobate; a material for preparing the above-described second prism is either zinc sulfide or lithium niobate; and a material for preparing the above-described volume phase grating is bichromated gelatin.
Besides, the grism according to the present invention is characterized in that a sum of the vertex angle of the above-described first prism and the vertex angle of the above-described second prism is equal to or larger than a critical angle determined by a refractive index of the first prism and a refractive index of the volume phase grating.
BRIEF DESCRIPTION OF THE DRAWING
The present invention will become more fully understood from the detailed description given hereinafter and the accompanying drawing which is given by way of illustration only, and thus is not limitative of the present invention, and wherein:
FIG. 1
is a conceptual, constitutional, explanatory diagram showing a grism according to the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
An example of a preferred embodiment of a grism according to the present invention will be described in detail hereinafter by referring to the accompanying drawing.
FIG. 1 is a conceptual, constitutional, explanatory diagram showing a grism 10 according to the present invention, wherein the grism 10 is constituted such that a volume phase grating 16 used as a diffraction grating is sandwiched between a first prism 12 having a high refractive index and a second prism 14 having a high refractive index.
In this case, examples of materials for preparing the high refractive index first prism 12 and the high refractive index second prism 14 include dielectrics or semiconductors such as zinc sulfide (ZnS) and lithium niobate (LiNbO3), whose refractive indices are high, around 2.3.
In the present preferred embodiment, the first prism 12 and the second prism 14 are to be prepared from the same material.
Further scope of the applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.
Furthermore, a resin, for example, bichromated gelatin or the like, may be used as the material from which the volume phase grating 16 is to be prepared. The refractive index of bichromated gelatin is around 1.5, smaller than the refractive index of around 2.3 of zinc sulfide and lithium niobate.
In this respect, zinc sulfide, lithium niobate, and bichromated gelatin are transparent with respect to a light beam having a wavelength within the visible range. Accordingly, the grism 10, wherein bichromated gelatin is used as a material for preparing the volume phase grating 16 in addition to the application of zinc sulfide or lithium niobate for preparing the first prism 12 and the second prism 14, respectively, can be employed for dispersing visible light.
In the following, the grism 10 shown in FIG. 1 will be described in more detail. A side 12a of the first prism 12, defining a right angle (90°) with respect to the bottom of the right angled triangle forming its cross section, is opposed to a side 14a of the second prism 14, likewise defining a right angle with respect to the bottom of the right angled triangle forming its cross section. In this arrangement, the volume phase grating 16 is further sandwiched between the first and second prisms 12 and 14 such that the vertex angle α of the first prism 12 is opposed to the vertex angle β of the second prism 14.
In the grism 10, a light beam is input from the outside through a surface of the first prism 12, the light beam thus input to the first prism 12 is input to the second prism 14 through the volume phase grating 16, and the light beam thus input to the second prism 14 is output to the outside through a surface of the second prism 14.
When it is assumed that the refractive index of the first prism 12 is n1 and the refractive index of the volume phase grating 16 is n2, the critical angle of the vertex angle α of the first prism 12 can be determined as follows.
Namely, in FIG. 1, the expression of refraction at the plane of incidence of a light beam into the first prism 12, as well as the expression of refraction at the interface 12b defined between the first prism 12 and the volume phase grating 16, correspond to the following expressions (1) and (2), respectively:

sin α = n1 sin θ1   (1)

n1 sin(α − θ1) = n2 sin θ2   (2)
wherein when α − θ1 is the critical angle, θ2 is 90°, that is, sin θ2 is 1.0, so that it results in:

sin(α − θ1) = n2/n1   (3)

When expression (1) is substituted into the above-described expression (3), an expression (4) for the vertex angle α is obtained:

α − sin⁻¹(sin α / n1) = sin⁻¹(n2/n1)   (4)
Based on expression (4), the critical angle of the vertex angle α of the first prism 12 is represented.

Concerning the vertex angle β, its critical angle can also be determined in the same manner as described above.

It is to be noted that the vertex angle α is defined at the same angle as the vertex angle β in this grism 10.
Under these circumstances, when it is arranged such that n1 = 2.3 and n2 = 1.5, such a large value as α = 63.6° can be obtained as the critical angle of the vertex angle α of the first prism 12 from the above-described expression (4).
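The critical vertex angle can be checked numerically. The sketch below is an illustration, not part of the patent: it solves expression (4) by plain bisection, with the function name and solver choice being my own.

```python
import math

def critical_vertex_angle(n1, n2, tol=1e-12):
    """Solve expression (4): alpha - asin(sin(alpha)/n1) = asin(n2/n1).

    n1: refractive index of the prism, n2: refractive index of the grating.
    Returns the critical vertex angle alpha in degrees.
    """
    target = math.asin(n2 / n1)      # critical angle at the prism/grating interface
    f = lambda a: a - math.asin(math.sin(a) / n1) - target
    lo, hi = target, math.pi / 2     # f(lo) < 0 and f(hi) > 0 for n1 > n2 > 1
    while hi - lo > tol:             # plain bisection on the bracketed root
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return math.degrees(0.5 * (lo + hi))
```

For n1 = 2.3 and n2 = 1.5 this returns roughly 63.6°, matching the value quoted in the text (and asin(1.5/2.3) gives the 40.7° interface critical angle).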
Since the critical angle defined between n1 and n2 is 40.7°, the ratio of optical path lengths between the present grism 10 and a conventional grism with a replica grating is as follows:

2 tan(63.6°)/tan(40.7°) = 4.7

Namely, about 4.7 times higher resolving power than that of a conventional grism can be obtained by the present grism 10.
As described above, a critical angle of 40.7° or more defined between n1 and n2 can be obtained with respect to only the vertex angle α of the first prism 12. Besides, there is also the vertex angle β of the second prism 14 in the grism 10, and accordingly, the angle obtained by adding the vertex angle α to the vertex angle β (a sum of the vertex angle α and the vertex angle β) easily exceeds the critical angle of 40.7° defined between n1 and n2.
More specifically, according to the grism 10 of the present invention, lithium niobate is used as a material for the first prism 12 and bichromated gelatin is further used as a material for the volume phase grating 16, whereby even if the vertex angle α is made to be 40° or more, it results in an angle which does not exceed its critical angle when the condition n1 = 2.3 and n2 = 1.5 is satisfied.
As explained above, since at least the angle obtained from a sum of the vertex angle α and the vertex angle β can be made to exceed the critical angle defined by n1 and n2, the grism 10 according to the present invention can positively achieve a higher resolving power than that of a conventional grism.
Therefore, according to the present invention, a grism 10 having a high resolving power and a high efficiency can be realized.
Furthermore, according to the grism 10 of the present invention, it is possible, as described above, to establish a higher vertex angle than that of a conventional grism with a replica grating, so that the whole grism can be downsized.
Besides, the grism 10 according to the present invention is easily mass-produced as compared with a directly processed grism, which is prepared by working a diffraction grating directly into a prism by means of ion etching or the like, so that its manufacturing cost can be significantly reduced.
The above-described preferred embodiment maybe modified into the following paragraphs (1) through (6).
(1) Although zinc sulfide and lithium niobate have been used for the materials of the first prism 12 and the second prism 14 in the above-described preferred embodiment, the invention is not limited thereto as a matter of course; a material which is transparent with respect to the wavelength of a light beam that is intended to permeate the grism and which has a high refractive index (for example, around 1.5 to 4) may be appropriately employed. For instance, if a light beam having a wavelength within the infrared region is intended to permeate the grism, gallium arsenide (GaAs), silicon, germanium or the like — transparent materials with respect to a light beam within the infrared region that have high refractive indices — may be used. In this case, gallium arsenide (GaAs) and silicon have refractive indices of around 3.5, and germanium has a refractive index of around 4.0.
(2) While the first prism 12 and the second prism 14 have been prepared from the same material as each other in the above-described preferred embodiment, the invention is not limited thereto as a matter of course; the first prism 12 and the second prism 14 may be prepared from materials different from one another. In this case, the refractive index of the first prism 12 may differ from that of the second prism 14.
(3) Although a resin such as bichromated gelatin has been used for the material of the volume phase grating 16 in the above-described preferred embodiment, the invention is not limited thereto as a matter of course. More specifically, it is sufficient that the volume phase grating 16 has a smaller refractive index than that of at least one of the first prism 12 and the second prism 14. Accordingly, a material transparent with respect to the wavelength of a light beam which is intended to permeate the grism, in addition to having such a refractive index as described above, may be appropriately employed.
(4) While the vertex angle α of the first prism 12 has been the same as the vertex angle β of the second prism 14 in the above-described preferred embodiment, the invention is not limited thereto as a matter of course; the vertex angle α of the first prism 12 may differ from the vertex angle β of the second prism 14.
(5) Although each cross section of the first prism 12 and the second prism 14 has been defined as a right angled triangle in the above-described preferred embodiment, the invention is not limited thereto; each cross section of the first prism 12 and the second prism 14 may be defined in an appropriate configuration.
(6) The above-described preferred embodiment as well as the modifications described in the above paragraphs (1) through (5) may be appropriately combined with each other.
Since the present invention is constituted as described above, it has the excellent advantage of providing a grism wherein, even if the vertex angle of a prism increases, the light beam does not exceed its critical angle, and its efficiency can be improved; besides, such a grism can be mass-produced and its manufacturing cost reduced.
It will be appreciated by those of ordinary skill in the art that the present invention can be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The presently disclosed embodiments are therefore considered in all respects to be illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than the foregoing description, and all changes that come within the meaning and range of equivalents thereof are intended to be embraced therein.
The entire disclosure of Japanese Patent Application No. 2000-195384 filed on Jun. 29, 2000 including specification, claims, drawing and summary are incorporated herein by reference in its entirety. | |
Q:
Triangle-triangle continuous collision detection
I am making a 3D game engine and I use continuous collision detection. I am using Sphere-Trees to cull primitive collision checks to a minimum. However, I'd like to perform continuous triangle-to-triangle collision checking.
How does continuous triangle-triangle collision detection work, assuming triangles with linear velocities?
A:
Continuous triangle intersection is explained in a classic computer graphics paper (PROVOT), and almost all research in continuous collision detection uses it to perform elementary tests.
The paper describes how to mathematically model the continuous triangle X triangle intersection problem. There are two types of collision involved: a vertex intersecting a triangle (vertex-face collision) and an edge intersecting another edge (edge-edge collision).
A triangle X triangle intersection is reduced to 6 vertex-face tests (1 for each triangle vertex) and 9 edge-edge tests (each triangle edge against each edge in the other triangle).
Let t0 be the start of the time interval [t0, t0 + ∆t] and assume that the positions and velocities in t0 are known. Assume also that the velocities are constant in the interval.
Vertex-face Collision
Let P (t) be the vertex and A(t), B(t), C(t) the vertices of the triangle. Let also Vp , Va , Vb , Vc be their respective constant velocities during the time interval. Thus,
P(t) = P(t0) + tVp,
A(t) = A(t0) + tVa,
B(t) = B(t0) + tVb,
C(t) = C(t0) + tVc.
If there is collision, then
∃t ∈ [t0, t0 + ∆t] such that
∃u, v ∈ [0, 1], u + v = 1, AP(t) = uAB(t) + vAC(t) (Equation 1)
Equation 1 just shows that in the collision time t, P must be inside the triangle ABC. But it is non-linear since u, v, t are unknown and there are factors that depend on two of them. To solve this, another condition is considered: that the triangle normal is orthogonal to the triangle. Thus
AP(t) · N(t) = 0, where N(t) is the triangle normal. (Equation 2)
It is important to note that this equation is not sufficient to verify the collision, since it is true whenever A(t), B(t), C(t), P(t) are coplanar. However, it calculates t and therefore eliminates Equation 1's dependency on this variable and turns it linear. N(t) is a t^2 term and AP(t) is a t term, so Equation 2 is cubic. The approach to solve it is described in Section Cubic Equations Solver. Equation 2 can result in three values for t. The lowest positive value t′ for which P(t′) ∈ ABC(t′) is chosen as the final result.
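As a sketch of how Equations 1 and 2 combine in practice: the helper below finds the earliest root of AP(t)·N(t) = 0 in [0, Δt], then applies a barycentric containment test at that time. All names are my own, and dense sampling plus bisection stands in for the analytic cubic solver described later.

```python
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def mul(a, s): return (a[0]*s, a[1]*s, a[2]*s)
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def vertex_face_toi(P, Vp, A, Va, B, Vb, C, Vc, dt=1.0, steps=256):
    """Earliest t in [0, dt] where P(t) hits triangle ABC(t), or None."""
    def at(t):
        return (add(P, mul(Vp, t)), add(A, mul(Va, t)),
                add(B, mul(Vb, t)), add(C, mul(Vc, t)))
    def f(t):  # AP(t) . N(t) -- Equation 2, a cubic polynomial in t
        p, a, b, c = at(t)
        return dot(sub(p, a), cross(sub(b, a), sub(c, a)))
    def inside(t):  # barycentric containment check (Equation 1) at time t
        p, a, b, c = at(t)
        ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
        d00, d01, d11 = dot(ab, ab), dot(ab, ac), dot(ac, ac)
        d20, d21 = dot(ap, ab), dot(ap, ac)
        den = d00*d11 - d01*d01
        if abs(den) < 1e-12:
            return False  # degenerate triangle
        v = (d11*d20 - d01*d21) / den
        w = (d00*d21 - d01*d20) / den
        return v >= -1e-6 and w >= -1e-6 and v + w <= 1 + 1e-6
    # scan for sign changes of f, refine each by bisection, and return the
    # earliest root whose contact point is actually inside the triangle
    prev_t, prev_f = 0.0, f(0.0)
    for i in range(1, steps + 1):
        t = dt * i / steps
        ft = f(t)
        if prev_f == 0.0 or prev_f * ft < 0:
            lo, hi = prev_t, t
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0: hi = mid
                else: lo = mid
            root = 0.5 * (lo + hi)
            if inside(root):
                return root
        prev_t, prev_f = t, ft
    return None
```

For a point at (0.2, 0.2, 1) falling straight down onto a static unit triangle in the z = 0 plane, this reports a time of impact of about 1.0, while a point falling outside the triangle crosses the plane but yields no collision.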
Edge-edge Collision
The ideas in Section Vertex-face Collision can be used in the edge-edge collision case with minor changes. Let AB(t) be one edge and CD(t) be the other one. The collision occurs if and only if
∃t ∈ [t0, t0 + ∆t] such that
∃u, v ∈ [0, 1], uAB(t) = vCD(t) (Equation 3)
Once again, this is a nonlinear system. The relation used to calculate t is that
A, B, C, D must be coplanar, like before.
(AB(t) × CD(t)) · AC(t) = 0 (Equation 4)
This also is a cubic equation, which can lead to 3 values for t. The lowest positive value that makes it possible for Equation 3 to be solved is chosen as the final result.
Cubic Equations Solver
There are several methods to get the roots of cubic equations. NUMERICAL RECIPES 3RD ED. shows a good algorithm using a combination of Newton-Raphson and bisection methods. The bisection method gets an interval [a, b] where the root is known to be, i.e., f(a) and f(b) have opposite signs. The algorithm iterates by dividing the interval at the midpoint, reducing the interval to [a′, b′] as a result, and evaluating f(a′) and f(b′). This is done until the error in the root value is acceptable. The bisection method is assured to find the root, but has slower convergence than other methods.
On the other hand, the Newton-Raphson uses the derivative of the function to refine a root guess. If f is the function and a is the root guess, the Newton-Raphson method iterates by evaluating the zero crossing of the tangent line of f passing through f(a). The new root guess will be the abscissa of this zero crossing. The image shows the method. It converges fast, but has some special cases that can lead to divergence or cycling.
The approach of the book suggests using Newton-Raphson while it is converging fast enough and bisection otherwise. This allows fast and safe convergence. The code to implement it can be found in the book.
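The hybrid can be sketched as follows; this is a paraphrase of the book's safeguarded Newton idea, not its verbatim code. A Newton step is taken while it stays inside the bracket and keeps halving the error fast enough; otherwise the step falls back to bisection.

```python
def rtsafe(f, df, lo, hi, tol=1e-12, max_iter=100):
    """Root of f in [lo, hi] via Newton-Raphson safeguarded by bisection."""
    flo, fhi = f(lo), f(hi)
    if flo == 0.0: return lo
    if fhi == 0.0: return hi
    if flo * fhi > 0:
        raise ValueError("root not bracketed")
    if flo > 0:            # orient the bracket so that f(lo) < 0 < f(hi)
        lo, hi = hi, lo
    x = 0.5 * (lo + hi)
    dxold = abs(hi - lo)
    dx = dxold
    fx, dfx = f(x), df(x)
    for _ in range(max_iter):
        # bisect if the Newton step would leave [lo, hi] or converge too slowly
        out_of_bracket = ((x - hi) * dfx - fx) * ((x - lo) * dfx - fx) > 0
        too_slow = abs(2.0 * fx) > abs(dxold * dfx)
        if out_of_bracket or too_slow:
            dxold, dx = dx, 0.5 * (hi - lo)
            x = lo + dx
        else:
            dxold, dx = dx, fx / dfx
            x -= dx
        if abs(dx) < tol:
            return x
        fx, dfx = f(x), df(x)
        if fx < 0: lo = x      # keep the root bracketed
        else: hi = x
    return x
```

On the cubic t³ − t − 2 bracketed on [1, 2], this converges to the real root near t ≈ 1.5214 in a handful of iterations.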
A:
Erin Catto GDC 2013, Continuous Collision Detection. The video is free for public viewing on the GDC vault. Erin keeps his own version of his slides available, but the GDC vault itself houses the free video.
Erin's link:
https://code.google.com/p/box2d/downloads/detail?name=ErinCatto_GDC2013.zip&can=2&q=
I can't speak for the Eberly work, but the idea of Erin's work is to reduce the problem to a root finding problem. GJK is used to compute distances, which are used to step forward/backward in time to find a time of impact as a root of the equation of separation.
In order to implement Erin's work one must have an understanding of how to implement GJK, which makes use of knowledge of the Minkowski Sum and Support Points. Both of these concepts are fairly simple in isolation, though the implementation of a full-featured continuous collision package is very difficult to author.
All I can say about the Eberly work is that it seems to (after a 10 second skim) compute the time of impact directly through a fully enumerated system of if statements. Eberly is extremely good with geometry, and in general his work and documentation are quite intensive, making them difficult for beginners to read.
Eberly, swept SAT: http://www.geometrictools.com/Documentation/MethodOfSeparatingAxes.pdf
Unless God is in the business of building a home, the effort is in vain. In Psalm 127, God reveals that the best reward a man can receive is not what his hands can provide, but only what God can give him – a child.
Sons are indeed a heritage from the LORD, children a reward. Like arrows in the hand of a warrior are the sons born in one’s youth. Happy is the man who has filled his quiver with them. Such men will never be put to shame when they speak with [their] enemies at the city gate. -Psalm 127:3-5
Imagine it’s Christmas time. You have given your child a gift that took a lot of your time and money. After your child unwraps the gift, he or she begins to complain about having to care for the gift. How do you feel concerning their attitude and what do you say and/or do as a result of their reaction?
How do you think God views it when parents complain about the gifts of children that He has given them?
Children are gifts. The Bible is full of parents who viewed their children as real rewards. It seems as if those who had the hardest time actually having a child were often more grateful than others. Their beautiful expressions in Scripture remind us that children are undeniably a gift from God.
Look at some of these statements from biblical parents concerning when they realized they were expecting:
- Sarah: By faith even Sarah herself, when she was barren, received power to conceive offspring, even though she was past the age, since she considered that the One who had promised was faithful. -Hebrews 11:11
- Isaac: Isaac prayed to the LORD on behalf of his wife because she was barren. The LORD heard his prayer, and his wife Rebekah conceived. -Gen. 25:21
- Rachel: Then God remembered Rachel. He listened to her and opened her womb. She conceived and bore a son, and said, “God has taken away my shame.” She named him Joseph “May the LORD add another son to me.” -Genesis 30:22-24
- Manoah (Samson’s father): Then Manoah asked, “When Your words become true, what will the boy’s responsibilities and mission be?”
- Hannah: I prayed for this boy, and since the LORD gave me what I asked Him for, I now give the boy to the LORD. For as long as he lives, he is given to the LORD. –1 Samuel 1:27-28
- Zechariah: But the angel said to him: Do not be afraid, Zechariah, because your prayer has been heard. Your wife Elizabeth will bear you a son, and you will name him John. There will be joy and delight for you, and many will rejoice at his birth. -Luke 1:13-14
- Mary: And Mary said: “My soul proclaims the greatness of the Lord, and my spirit has rejoiced in God my Savior, because He has looked with favor on the humble condition of His slave. Surely, from now on all generations will call me blessed, because the Mighty One has done great things for me, and His name is holy.” Luke 1:46-49
Today, don’t see your children as a burden. Choose to see them as a blessing.
Travis Agnew serves as the Lead Pastor of Rocky Creek Church in Greenville, SC. His most recent book is Distinctive Discipleship. | http://www.travisagnew.org/2013/06/18/children-are-gifts/ |
I’ve been working on top-down angled action idea. For this indie project, I wanted to find safe cover locations for enemies to hide (crouch) behind and shoot from.
On my first attempt at a solution, I tried the Unreal EQS system, then my own implementation: picking points, raycasting to see if they hit an object at crouching level, and raycasting to see if the point was clear at gunfire level.
This produced ‘okay’ results, but it presented a few problems. I’d have to implement additional checks for enemy sizes, and additional traces to check if a spot was reserved/occupied. It also would miss spots (as shown by the red spheres) depending on how the check points lined up. I was also testing a lot of locations.
This solution just had too many issues, so I decided to take a different approach.
My second solution: add a simple box collider to valid cover spots, on a cover-only trace channel. To accomplish this, I extended the UBoxComponent class, creating UAreaCoverMarker and implementing a GetCoverLocations function. This function takes two parameters, the player’s location and the requesting enemy’s width, and returns a TArray of structs that contain each cover point’s FNavLocation and its ForwardVector (a vector pointing towards the cover object).
To discover the best hiding side, I calculate the collider’s top four vertices and sort them by distance from the player.
A = Furthest vert from player
B = 2nd Furthest from player
C = 3rd Furthest from player
D = Closest to player
AB will always be the furthest side from the player. The direction of A-C will point away from the BoxCollider and towards the area we want to test for cover space.
Step 1: Divide the length of AB by the enemy’s width to calculate how many hiding slots could potentially exist on this line.
Step 2: Using the direction A-C, locate the starting and ending test points.
StartPoint = Move A in the direction of A-C by 1/2 the enemy’s requested width.
EndPoint = Move B in the direction of A-C by 1/2 the enemy’s requested width.
Step 3: Move the StartPoint 1/2 the enemy’s width towards B. (This is the center of our first test.)
Step 4: Check if this spot has a collider on the ReservedArea channel below it towards the ground. If not proceed.
Step 5: Check if we can cast this point onto navmesh (setup a query tolerance.) if we can the point is valid!
Step 6: Advance this test point by the enemy’s width towards the EndPoint, and repeat for the number of hiding slots calculated in Step 1.
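The slot placement in Steps 1–6 is plain 2-D vector arithmetic. Here is a hedged Python sketch of just the geometry (not the project’s actual Unreal C++; the box corners, enemy width, and function name are invented for illustration, and the trace/navmesh checks of Steps 4–5 are left to the caller):

```python
import math

def cover_slot_centers(A, B, C, enemy_width):
    """Return candidate hiding-spot centers along side AB (the side of
    the box farthest from the player), pushed away from the box along
    the A->C direction by half the enemy's width (Steps 1-3 and 6)."""
    ab_len = math.dist(A, B)
    n_slots = int(ab_len // enemy_width)          # Step 1
    if n_slots == 0:
        return []
    # Unit direction pointing away from the box (A -> C).
    ac_len = math.dist(A, C)
    ac = ((C[0] - A[0]) / ac_len, (C[1] - A[1]) / ac_len)
    # Unit direction along the cover side (A -> B).
    ab = ((B[0] - A[0]) / ab_len, (B[1] - A[1]) / ab_len)
    half = enemy_width / 2.0
    # Steps 2-3: start half a width out from the box, half a width in from A.
    start = (A[0] + ac[0] * half + ab[0] * half,
             A[1] + ac[1] * half + ab[1] * half)
    # Step 6: advance by one enemy width per slot toward the EndPoint.
    return [(start[0] + ab[0] * enemy_width * i,
             start[1] + ab[1] * enemy_width * i) for i in range(n_slots)]

# A box whose far side AB runs from (0, 0) to (400, 0); the A->C
# direction (toward (0, 200)) points away from the box into the test area.
slots = cover_slot_centers((0, 0), (400, 0), (0, 200), enemy_width=100)
# -> four slot centers: (50, 50), (150, 50), (250, 50), (350, 50)
```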
As you can see below, this doesn’t exactly work great for large rectangles or cubes. Depending on the player’s location, sometimes enemies should hide behind AC too!
To fix this, let’s turn to the late, great Heron and find the side AC. If the triangle Player-A-C has a greater area than Player-A-B, include AC in the solution. | http://johnmcroberts.com/index.php/2018/03/22/cover-location-detection/ |
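Heron’s formula gives a triangle’s area directly from its three side lengths, which makes that side-selection test easy to write down. A small illustrative sketch (coordinates invented for the example; in this layout the player faces the long side AC, so both sides are returned):

```python
import math

def heron_area(p, q, r):
    """Area of triangle pqr from its side lengths via Heron's formula."""
    a, b, c = math.dist(p, q), math.dist(q, r), math.dist(r, p)
    s = (a + b + c) / 2.0
    return math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))

def sides_to_cover(player, A, B, C):
    """Always test side AB; also test AC when triangle Player-A-C has a
    greater area than Player-A-B (the player 'sees' more of AC)."""
    sides = [(A, B)]
    if heron_area(player, A, C) > heron_area(player, A, B):
        sides.append((A, C))
    return sides

# Player beside a long 400x100 box: A is the farthest corner from the
# player, B the 2nd farthest, C the 3rd, as in the sorting step above.
player, A, B, C = (500, -50), (0, 100), (0, 0), (400, 100)
sides = sides_to_cover(player, A, B, C)  # both AB and AC are returned
```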
Old Schwinn Bicycle
A few years ago I’d acquired an old Schwinn bicycle that needed a little TLC. Rather than restoring it to its original state I had something else in mind. Like the pioneers of mountain biking in the movie Klunkers, I wanted to get that bike rolling so I could have a new kind of adventure: take it off road on some gravel downhills! Naturally, I drove up to First Flight Bicycles to talk it over with Jeff. His knowledge of cycling, especially mountain bike history, was second to none in our area.
I met Jeff through the Tarheel Trailblazers, of which we were both members. Although we didn’t always work the same trails together, we did share a common bond through trail-building and riding. Jeff Archer and I weren’t what you would call “good friends”, because our friendship only existed in the local cycling world. He was one of the nicest people I’d ever met.
When I told him my plans for the Schwinn, he let out a hearty laugh and, of course, offered to help. The frame was pretty much the only thing salvageable, so Jeff ordered some parts for me. We started with the wheels. His recommendation was to get something sturdy so I wouldn’t get hurt (too badly). Since he was the expert, and obviously had my best interests in mind, I ordered just about everything he suggested.
“He always had an appreciation for the vintage (his dad fixed old cars) and the Museum of Mountain Bike Art and Technology evolved from that love of history. He has said he liked the stories behind the bikes; the stories of the people who built and rode them.”BY GRAHAM AVERILL
Once all the parts arrived I was eager to put it all together. I still needed a few odds and ends, though. So I took another trip up to the shop. When I got there Jeff led me down to the basement. Even though it was your typical basement, it was sort of creepy down there. A sort of graveyard for bikes, with old bike parts everywhere. I wondered if that feeling showed on my face. He told me I could search for anything I needed, and I wasn’t surprised to find just that: everything I needed to finish my epic build. Before I left that day I told him I would take that “old bike” out to Uwharrie and hit the gravel roads. I wanted to experience some of the joy those first mountain bikers experienced. Jeff laughed again, and said he couldn’t wait to hear all about it.
Jeff passed away shortly after that, before I got the chance to ride the bike he helped me finish. The local cycling community suffered the loss of a pillar. Through his involvement in building trails and his shop, First Flight Bicycles, he was instrumental in building the great community we have now. I was able to fulfill my promise on the day of Jeff’s funeral service: I honored him by taking that “old Schwinn bicycle” out to Uwharrie and riding it like I said I would. I thought about him often during that ride. While I was sad at times, I swear I could hear his hearty laugh as I screamed down the loose gravel on that sixty-year-old bike. From what I know of Jeff, he would have been pleased with my decision.
An Awesome Guy
Jeff was an awesome guy. When I spent time at the shop it was like being around family. I’ve always thought his knowledge of cycling (especially mountain bike history) was second to none in our area, and the museum above the shop holds a vast collection that he was always happy to discuss at length.
Those things made him important to me and, I think, the rest of our local cycling community. There’s one thing in particular that stands out. While mountain biking was our common bond, I realized I was yearning for something else.
Since Jeff has been gone, I don’t limit myself to mountain biking. My rides these days are more adventure-minded, and I’m pretty sure that seed was planted by him the day I dragged that old crappy bike into the shop and asked for his help.
In a way, that was most likely the beginnings of my new love of bike-packing.
Jeff Archer Flight Crew
I’ll take another ride to honor Jeff Archer this December. Follow me as I accept the adventurous challenge to ride 300 miles across Florida.
My goal?
To raise money for the Jeff Archer Flight Crew, a fund of the North Carolina Interscholastic Cycling League supporting youth cyclists on scholarships.
Through the fund the league will cover race & registration fees and purchase loaner bikes for use the whole season. | http://www.northcarolinamtb.org/braf/ |
Company Description Traveloka is a technology company based in Jakarta, Indonesia founded in 2012 by ex-Silicon Valley engineers and aims to revolutionize human mobility with technology. Today, Traveloka is expanding its reach by operating in six countries in Southeast Asia and experimenting with new endeavors that will create a large impact in the markets and industries we touch.
Job Description
The User Interface (UI) Developer spearheads new product development and the improvement of existing products. You will lead the development of drawings for prototyping and production, and conduct technical feasibility analyses of design plans. You will work in close partnership with stakeholders to revitalize design solutions for outdated products and/or services.
The UI Developer is encouraged to uncover the rising new media technology trends and develop some business acumen to meet the user needs and feasibility of the technology. As a strong communicator, you need to be able to present ideas and concepts to both technical and non-technical audiences.
You are responsible for coordinating with the Engineers in the Tech function to ensure the delivery of quality design products. In addition, you will need to develop the more junior team members through capability development and technical coaching.
You are able to work on multiple tasks concurrently and deliver on expectations within deadlines, with a high level of changes and iterations required. In addition, you will need to demonstrate stakeholder management skills in partnering with internal stakeholders to develop quality creative solutions that meet overall business objectives and goals.
Key Responsibilities:
Determine design solution requirements
- Clarify the assigned design project scope, including its goals, requirements, and expectations
- Develop and communicate requirement-specific solutions to stakeholders
- Compartmentalize designs into functionally effective components
- Conduct technical feasibility analysis for choosing the appropriate medium/technology that fits with design intent and specification, user needs, and business goals.
Develop prototypes for new products and services
- Oversee design adjustments made to prototypes
- Manage components and assemblies of prototypes in adherence to applicable industry and business standards
- Recommend potential solutions to circumvent future design issues throughout the prototyping process
- Lead the design and demonstration of product prototypes
- Lead the development of drawings for prototyping and production
- Monitor design project execution to ensure timely completion and raise concern early to anticipate delay when required
Evaluate product and service performance and maintain the high standard of design quality
- Proactively provide explanations regarding the limits of technical solutions and propose alternative / potential solutions to overcome the barriers in achieving the design intent
- Implement changes and fix errors across the lifecycle of products and/ or services
- Identify outdated products and/or services and revitalise with design solutions in partnership with stakeholders
- Evaluate product concepts or technical solutions to identify the most viable products and/or services for implementation
- Test design solutions with validation activity that meets with design intent and specifications, and get them ready to be implemented in the intended platform
- Produce designs and test specifications for new visual and/or product and feature ideas, including participation in design research
- Engage relevant stakeholders to study the technical feasibility of products and/or services before going into the development phase
Conduct Usability Testing
- Conduct usability and concept testing of design prototypes in laboratory setting, remotely, and real-life setting
- Analyse and synthesize user and/or expert feedback on the navigation of user interface performance, as well as overall quality of the design solution
- Recommend refinements and iterations to design based on usability and concept testing results, including identifying potential big ideas for future improvement
- Monitor the quality of the user interaction and the design from success metric performance over time, and provide actionable insights to improve product performance
Qualifications
Minimum qualifications
- Between 1–5 years overall work experience as a hands-on UI Developer or Front End Engineer in the industry (education background could range from Computer Science or any other relevant Programming disciplines; or Human-Computer Interaction, Digital Design, or other related field, with equivalent practical design experience).
- Portfolio that showcases UIDev craft (and its applications of technical methodology) with some level of understanding of the Human Centered Design methodology (along with its methods and tools), willingness to learn and iterate, and passionate about design and storytelling.
- Fluency in Indonesian and English communications (both verbal and written).
- A deep sense of accountability, adaptability, and embracing ambiguity working collaboratively in a fast-paced environment.
Preferred qualifications
- Experience working with Design System and with web or mobile based technologies using design and prototyping tools (such as Sketch, Adobe Creative Cloud, Balsamiq, Zeplin, ProtoPie, Miro, etc.)
- An evidence of performing in a multidisciplinary design, business, and tech team is a plus.
- Some understanding of the eCommerce and/or digital travel and lifestyle industries; extra points for having successfully delivered products in/for SE Asia and/or regional markets.
- The ability to form consensus across a broad, fast-moving organization through excellent communication and interpersonal skills (including: negotiation, constructive feedback, presentation, managing expectations, resolve conflicts).
- Effective design sensibilities as they relate to applying a brand to digital experiences.
If this sounds like you, let us know you exist! With you as a User Interface Developer, we can seize business opportunities, all while creating meaning for our products and providing value for our users.
Additional Information
Traveloka Design team is tasked with the role of humanising technology – the one who has the most relevant expertise in understanding human values and life experiences – using systematic approaches such as the Human Centred Design methodology in delighting users and customers through Traveloka product offerings.
Comprises of people from the Design, Architecture, Social Sciences, and Computer Science background, we value collaboration and empathy when it comes to producing innovative solutions.
Read about our design journey here: https://medium.com/traveloka-design
| https://findojobs.id/user-interface-developer-ID-8420 |
Preheat oven to 350 F.
Mix dry ingredients: flour, powder, cinnamon, cardamom, and sugar.
Mix in butter, egg, and yogurt.
Pour batter into a greased and lined cake tin. The batter should be fairly thick. Level it out and arrange plum slices over the top.
Bake cake for 8 min, while you make the streusel.
To make the streusel, mix flour, sugar, and cinnamon. Cut in cold butter and use your fingers to rub the butter into the dry ingredients to make a crumbly mixture.
Once the cake is out, sprinkle the streusel over it and bake for another 8 minutes. Use a toothpick to test the center.
Tips
Fruit
Any stone fruit could work. I actually used pluots that a friend gave to me. Plums seem like the most likely choice. Apricots, peaches, or cherries would also be yummy. You don't need to press the fruit into the batter, just lay the fruit over the top of the cake. Also, lay the skin side up if you can.
Yogurt
You could substitute with sour cream or cream cheese if you don't have yogurt. You could even try milk, but the batter will be a bit more liquidy.
Streusel
Streusel is actually quite easy, and a worthwhile extra for this cake. Try to keep the butter cold and chop it into very small pieces, then use your fingers to pinch and squish the butter into the dry ingredients. You could use a pastry cutter/dough blender, or forks, but fingers work fine. If your hands are warm, run them under some cold water and then dry them to keep the butter cool.
Calories
Baked goods are often high in sugar, fat, and carbs. As someone who bakes often, I have to watch how much I eat. I love to eat. I do. I also love to have a healthy body. That being said, I appreciate when a recipe states a calorie count. So I've decided to provide that for you.
Total calories in the cake: 2,385
Divide that number by how many slices you cut. I cut my cake into 8 slices, so each slice was 298 calories. Note that this does not include the streusel. Also, different ingredients will yield different calories, such as using a high fat yogurt (I used 2%).
Photos!
Our cake before baking
Streusel!
Our cake halfway through baking, with streusel added.
What can I say? Plums (or pluots in my case), cinnamon, and streusel. Delicious. I'll be perfectly honest, I ate half of this cake in one day. I also ate about half the streusel before it got on the cake. This recipe is perfect for breakfast, an afternoon snack, or, of course, with coffee. Let me know how you like it.
Love, | https://www.rachelscottage.com/post/plum-streusel-coffee-cake |
Why? Because the task of using pedals to change the notes is challenging, the strain caused by the fingering systems and hand positions can be cumbersome, and lastly, coordinating the feet and hands while playing takes a lot of practice, mastery, and patience to get right.
6. Piano
The prevalence of the piano has increased exceedingly since the 18th century, making it the world’s most famous instrument. “Piano” is, in fact, the shortened form of “pianoforte”, the Italian word for the instrument.
The piano is simply a soundboard with metallic strings, enclosed in a protective wooden case. Normally, the piano has 88 keys, with each key producing a different note; learning how to coordinate these keys and play a song is actually the hard part.
Any novice will not have an easy time learning the instrument, particularly because learning to coordinate both hands at the same time and hit the right keys simultaneously to produce a melody can be quite a daunting task. The piano is a very versatile instrument, not to mention it produces a very nice sound; it has been used far and wide in an array of different performances.
5. Classical Guitar
The classical guitar belongs to the guitar family and has 6 strings on it. Though the classical guitar resembles a modern guitar, the materials used to make them are different plus it’s harder playing the classical guitar than the modern-day guitar. | http://topplanetlist.com/most-difficult-instruments/6/ |
The Junta opened the new Francisco Torrecillas pavilion in Albox, with an investment of 648,000 euros.
The Minister of Education, Culture and Sport of the Junta de Andalucía, Luciano Alonso, on Saturday opened the new Francisco Torrecillas Sánchez Sports Pavilion at the Martín García Ramos Secondary School (IES) in Albox (Almería), representing an investment of 648,000 euros, financed by the regional government and the town council under a collaboration agreement between the two institutions.
In a statement, Alonso, who was accompanied by the mayor of the town, Rogelio Mena (PSOE), the delegate of the Junta de Andalucía in Almería, Sonia Ferrer, and the territorial delegate of the Ministry, Isabel Arévalo, noted that the work involved the construction of a sports hall, built on a stepped plot with irregular topography, which allowed the creation of a new facade that “renews the image of the school”. The floor area is 1,352 square meters.
The plot is home to the pavilion and an outdoor sports area. The hall is rectangular and is formed by a 23.58-by-44.06-meter sports court, suitable for handball, basketball and volleyball. Downstairs are a hall, changing rooms, a first-aid room, toilets and a store room; upstairs, with separate access, is seating for the public.
The Minister stressed that this facility has a dual purpose: “to improve and encourage physical exercise among the school’s pupils and also to promote access to sport for all citizens of Albox”, since “its use will not only be academic; the doors of the new pavilion will be open to the rest of the population, whether or not they belong to the educational community, from 18:00”.
“Sport involves the acquisition of social values and is an important and solid foundation for the comprehensive development of schoolchildren”, Alonso stated, adding that “sport in school is a key and essential part of the Andalusian sports system”. He insisted that its “implementation and development represent an investment in the present and future of society in the two basic pillars of the welfare state, education and health”, as it “instils positive social values at the age when personality is formed, while contributing greatly to the improvement and efficiency of public health”.
According to the Minister of Education, Culture and Sports, “introducing sport in schools aims to increase school sport among Andalusians, making physical activity occupy a prominent place in schools and favouring the promotion of healthy lifestyles”.
In his view, this project “will promote and consolidate the improvement of programs for school-age athletes” and “will implement new measures to expand and universalize youth sport in Andalusia, in line with the provisions of the School-Age Sports Plan of Andalusia”.
SPORTS INFRASTRUCTURE INVESTMENT AND EDUCATIONAL
The new sports hall is one of 98 sports infrastructure projects undertaken by the regional government in the province of Almería through different agreements, with a total investment of “almost 27 million”, of which “the Andalusian government provided 14 million”. This investment has gone into 20 covered pavilions, 14 pools, 11 soccer fields and 24 outdoor sports courts, among other facilities.
Since 2008, in educational infrastructure, excluding the OLA Plan, the Board has indicated that “a total of 270 works have been carried out in the province, with a total investment of 113.5 million”. As for the OLA Plan, “a total of 76 works are being developed in the province of Almería with a total budget of 26.5 million”, of which “a total of 42 are completed, 16 under construction, seven in tendering and eleven at the project stage”, that is, “85.5 percent of the actions of the Plan are under way”.
He also cited investments in equipment, which “amount to a total of 43.47 million euros invested since 2008”, covering furniture, educational and sports equipment, and computer equipment (PCs and laptops, whiteboards, technology installations), as well as specific material for Vocational Training and Specialised Education, among other items. This brings investment in educational infrastructure since 2008 in the province of Almería to a total of 183.47 million.
Water tank and vertical antenna effect on nearby beam?
Jul 27th 2011, 13:09
W1VT (Super Moderator, joined Apr 4th 1998)
An ARRL Member writes:
The location is very near a steel water tank which is approximately 32 feet high with a diameter of approximately 25 feet. It is in the shape of a cylinder with a steel enclosed top and sits on a concrete base. I have 2 questions.
(1) would the center of the steel top be a good place or bad place to mount an 80/40 meter vertical antenna and/or a 2 meter vertical?
(2) would the water tank have negative effects on a nearby 20/15/10 meter beam on top of a 40 foot tower? The tower would be approximately 100 feet away from the tank.
Jul 29th 2011, 05:09
KE8DO (joined Apr 4th 1998)
(1) I would think that it would be a very good antenna location, but for 2 meters, higher is better.
(2) I would think that the effect would be small but a tower higher than 40 feet would be better. | http://www.arrl.org/forum/topics/view/74/page:1 |
Author(s): Ijaz Ahmad, Fan Zhang, Junguo Liu, Muhammad Naveed Anjum, Muhammad Zaman, Muhammad Tayyab, Muhammad Waseem, Hafiz Umar Farid, Yong Deng.
http://doi.org/10.1371/journal.pone.0192294
Abstract
This paper presents a simple bi-level multi-objective linear program (BLMOLP) with a hierarchical structure consisting of reservoir managers and several water use sectors under a multi-objective framework for the optimal allocation of limited water resources. Being the upper level decision makers (i.e., leader) in the hierarchy, the reservoir managers control the water allocation system and tend to create a balance among the competing water users thereby maximizing the total benefits to the society. On the other hand, the competing water use sectors, being the lower level decision makers (i.e., followers) in the hierarchy, aim only to maximize individual sectoral benefits. This multi-objective bi-level optimization problem can be solved using the simultaneous compromise constraint (SICCON) technique which creates a compromise between upper and lower level decision makers (DMs), and transforms the multi-objective function into a single decision-making problem. The bi-level model developed in this study has been applied to the Swat River basin in Pakistan for the optimal allocation of water resources among competing water demand sectors and different scenarios have been developed. The application of the model in this study shows that the SICCON is a simple, applicable and feasible approach to solve the BLMOLP problem. Finally, the comparisons of the model results show that the optimization model is practical and efficient when it is applied to different conditions with priorities assigned to various water users.
Partial Text
The ever-increasing population growth and industrialization are putting constant pressure on water resources and it is more likely that the available water resources may not be able to meet the future water demands. The shortage of water resources has become more severe due to the uneven distribution of available water resources among various water demand sectors and is a major constraint to economic development in many countries around the world . Conflicts among various water demand sectors often arise when these sectors compete for limited water resources. As a solution to these conflicts, earlier studies have developed optimization models for water allocation to achieve sustainable development, such as dynamic programming [2,3], genetic algorithms [4,5], and game theory approach . However, these models are difficult to apply to practical water allocation issues because of their complex programming requirements to deal with discontinuous, multi-dimensional, non-differentiable, stochastic, uncertainty and non-convexity problems in solving multi-objective functions [7,8].
Bi-level programming issues are frequently found in the allocation of water resources among various water users. The proposed model offers insight into the economic, water supply and hydrologic interactions involved in allocating water to distinct water users. In the present study, there are conflicts not only among the different water users but also between the water users and the reservoir managers. Consequently, the BLMOLP was developed to optimally allocate water resources among competing water users for sustainable economic development. The developed BLMOLP model is applied to a single reservoir by aggregating the releases from the reservoir for water allocation. However, the model structure may be improved by integrating parallel systems, so that it can be applied to complex water resource networks, or the model can be run separately for each reservoir in the network. The reservoir fulfills the irrigation water demands of the areas immediately downstream; however, water shortages occur in areas further downstream during the dry seasons.
In this study, a bi-level multi-objective model has been developed for optimal water allocation under a hierarchical structure. The model consists of a ROM and a BLMOLP. The ROM estimates the AW available for allocation in a dry season, which is used as an input to the BLMOLP. The BLMOLP model allocates the AW based on the decisions made by the upper-level DMs (i.e. leaders) and the lower-level DMs (i.e. followers). The model has been applied to the Swat River basin of Pakistan for an optimal allocation of AW among competing water use sectors, i.e. irrigation, industry, domestic and environment. Different techniques have been used to estimate the NEB of water use in the irrigation, domestic, industrial, hydropower, and environmental (salinity control) sectors. The NEB ranges from as low as USD 5 per thousand m3 for the hydropower sector to as high as USD 412 per thousand m3 for domestic use. The estimated NEB of water use in the agriculture sector is USD 78 per thousand m3, and the environmental (salinity control) sector has a NEB of USD 7 per thousand m3.
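As an illustration of the economic logic behind such an allocation (not the authors' actual BLMOLP, which also encodes the leader–follower hierarchy), the sketch below allocates a scarce water volume across sectors in descending order of net economic benefit. The NEB values are those reported above; the sector demands are hypothetical.

```python
# Single-level sketch: allocate scarce water to maximize total net economic
# benefit (NEB), honoring each sector's demand cap. With one shared resource
# and linear benefits, filling sectors in descending NEB order is optimal.

# NEB in USD per thousand m3 (from the text); demands are hypothetical.
SECTORS = [
    ("domestic",   412, 200),   # (name, NEB, demand in thousand m3)
    ("agriculture", 78, 900),
    ("salinity",     7, 300),
    ("hydropower",   5, 500),
]

def allocate(available):
    """Greedy allocation of `available` thousand m3; returns (plan, total NEB)."""
    plan, benefit = {}, 0.0
    for name, neb, demand in sorted(SECTORS, key=lambda s: -s[1]):
        grant = min(demand, available)
        plan[name] = grant
        benefit += neb * grant
        available -= grant
    return plan, benefit

plan, benefit = allocate(1000.0)
print(plan, benefit)
```

With 1000 thousand m3 available, domestic demand is met in full, agriculture absorbs the remainder, and the low-NEB sectors receive nothing — the kind of outcome a bi-level model then tempers with the decision-makers' hierarchy.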
Bucksport controls the line, rushes past Dexter in LTC football clash
BUCKSPORT, Maine — The Bucksport Golden Bucks played as advertised during their homecoming football game at Carmichael Field on Friday night — and a game Dexter team had no answers.
Freshman Jaxon Gross and sophomore Josh Miller combined to rush for 360 yards and six touchdowns as coach Joel Sankey’s club methodically pulled away to a 43-7 Little Ten Conference victory over the Tigers.
Gross amassed a game-high 210 yards and three touchdowns on 30 carries while Miller contributed 148 yards and three scores on 13 attempts as Bucksport averaged 8.0 yards on its 56 rushes.
Bucksport (2-0) totaled 542 total yards, 453 on the ground, as yet another Golden Bucks’ running back, sophomore Ty Giberson, added 90 yards on just 11 carries. They ran behind a punishing line led by senior David Gross, who at halftime was honored as the school’s homecoming king.
“It’s a three-headed monster of young kids that are all just hungry,” David Gross said of the running backs, who all weigh more than 200 pounds. “They just run hard, and they’ve worked for this all spring and summer.”
The front — David Gross, classmate Dawson Eaton and juniors Gavin Billings, Owen Gaudreau and Julian Shook — not only controlled the line of scrimmage offensively but played dominating defensive roles as the Golden Bucks limited Dexter to 67 total yards.
“We all put in the work, and that’s why there’s no jealousy here,” David Gross said. “We see the work everyone puts in up in the gym and weight room. It’s all respect, and everyone deserves to have the best games of their lives because they all put in the work.”
Dexter (1-1) got most of its offensive production on senior Cameron Paige’s 54-yard touchdown run on the first play of the second quarter. That came after Bucksport had built a 14-0 lead on scoring runs of 1 and 14 yards by Jaxon Gross.
Paige had another run of 22 yards and a 52-yard kickoff return after Jaxon Gross’ first touchdown, as well as a touchdown-saving forced fumble to deny Giberson a touchdown after a long run to the brink of the goal line.
But the rest of the night belonged to Bucksport’s line in a game it considered an early test given Dexter’s own size up front and overall experience. The Tigers have 11 seniors compared to only five for the Golden Bucks.
“The kids work very hard, they push each other in practice, they push each other in the weight room and it’s paying dividends,” said Sankey, Bucksport’s 25th-year head coach. “They’re out there communicating the blocking assignments and always talking to each other, and then they get off the ball. They’re pretty physical.”
Bucksport extended its lead to 22-7 when Miller scored the first of his three consecutive touchdowns from 4 yards out with 6:27 left before intermission and senior quarterback Brady Findlay (6 of 14 passing, 89 yards) passed to Logan Stanley for the two-point conversion.
Bucksport forced Dexter into a three-and-out to open the second half, then marched 67 yards in six plays with Miller rushing the final 36 yards before Findlay passed to Tyler Hallett for the two-point conversion that extended the Golden Bucks’ lead to 30-7.
Miller added his final rushing touchdown (31 yards) to make it 36-7 with 43 seconds remaining in the third quarter. Jaxon Gross powered in with his final score of the night from 1 yard out with 2:47 left in the game.
“All three of the backs are over 200 pounds, and we have a quarterback who can throw and kids who can catch the ball so we could spread people out and do that,” Sankey said, “But for us it’s line up and here we come.” | |
- Padma Ravichander, MD, consulting and applications solutions, Perot Systems
Over the past two decades, Padma Ravichander has watched her native India grow into a major global centre for IT services. Ravichander started her career in Canada, where she studied and worked for 15 years before being lured back to India in 1994 to set up the first offshore operation for Hewlett-Packard in Bangalore. In 2005 she joined Perot Systems, a Texas-based IT service provider with revenues of $2.3 billion, founded by one-time US presidential candidate Ross Perot.
- How did Bangalore become India's IT capital?
When the Indian economy was opening up in the 1990s, after two decades of tight state control, the government in Karnataka set up a tax-free zone for IT. It built software technology parks that brought people to Bangalore, which is a great place to live. The climate is like that of Silicon Valley and in those days there was very little congestion. South India produces lots of software engineers: 300,000 new engineers qualify every year and about 60% of them come from the south.
- Was it hard setting up an operation in India after so many years in Canada?
When I came to Hewlett-Packard (HP) in Bangalore, it had 17 employees in a four-storey building and each person sat in a corner of one floor. They said I could set up in the middle on any floor. Actually, my priority was to get a female washroom. All the employees were men and they had turned the one female washroom into a store.
It was very hard in those days to get through the bureaucracy to bring equipment into the country, and to ensure a reliable power supply and internet connections. The culture was also more chauvinistic. Many of the men didn't like having a woman as boss. They couldn't imagine that I could accomplish things that they couldn't. I had to show my calibre professionally and socially, going out to bars and cafes. But then business picked up.
We grew to 300 employees in the first year and won two big contracts. Everybody likes being in a growing company because it means good career opportunities.
- You're still the only woman in a leadership position at Perot Systems. Does it get any easier?
I am used to walking into a room full of men and being the only woman. Now I hardly notice it. The only time it makes a difference is in a very tough meeting when people become angry. Men are more emphatic and authoritative, and maybe I don't come across as strongly. Sometimes, though only rarely, my voice isn't even heard.
- How do you attract people to Perot Systems in India, when you are competing with much better known firms?
When I joined Perot Systems in 2005, I personally had a better brand than Perot Systems - no-one had heard of it, but I had hired thousands of people for HP and lots of people from HP moved to Perot. The profile of a company is very important in attracting people here, and the big names such as Microsoft, Oracle, HP cream off all the best graduates.
I had a high attrition rate at Perot. The process has to start from within. I've taken my population of 3,000 engineers and I've made them my brand ambassadors. There used to be a philosophy here of "we can't do that, we're not good enough" that held us back. I've launched seven initiatives, starting with recruitment and integrating new employees into the work environment, as well as improving training, mentoring and reward and recognition.
Employee referrals have gone up to 30% from nothing and the attrition rate is down to 13% from 24% in 18 months. I think I can bring it down to single digits.
- You spent three years in Shanghai for HP; how has India developed in comparison with China?
India has the big advantage of a population familiar with the English language and many of its people have very strong project management skills. When I was in Shanghai, only two people out of 100 employees spoke English. Those two people had to be in every meeting and then translate for the others.
Many Indians have studied in North America. In the 1970s and '80s, there was a big brain-drain because the Indian economy was so closed, but many of those people have now returned and act as a link between the cultures.
- How do you manage your time?
Work is always the priority. My customers come first, then my people, then my family, then myself. I get up really early to have some time to myself and I try to cram 27 hours into a 24-hour day by taking advantage of time differences. With email and my BlackBerry, I'm always on call. It's just what's expected. Multi-tasking is what tomorrow's world is about. | https://www.managementtoday.co.uk/leadership-lessons-perots-top-woman/article/667960 |
How long is a small hourglass?
Also know, how long is a standard hourglass?
Hourglass, a device for measuring time. In its usual form it consists of two cone-shaped or oval glass receptacles joined by a narrow neck. Sand or a liquid (such as water or mercury) in the uppermost section of a true hourglass will run through the neck into the lower section in exactly one hour.
Secondly, why is an hourglass called an hourglass? The edge of the sand is uneven. If you mark five-minute intervals on the glass, the sand will hit those marks differently each time you turn it. An hourglass shows only when an hour is up. The very name says it's hard to make one that runs more than an hour.
In this way, how long is a small sand timer?
The one minute timer is accurate to within 1 second. The two minute timer is accurate to within 10 seconds. The three minute timer is accurate to within 12 seconds. The five minute timer is accurate to within 3 seconds.
How accurate is an hourglass?
Hourglass Sand Timer Accuracy. Hourglasses are aesthetically pleasing ornaments, rather than accurate timepieces - most of our hourglasses (except fillable ones) are accurate to within +/- 10%. | https://findanyanswer.com/how-long-is-a-small-hourglass |
Professor Irving Kirsch, Associate Director of the Program in Placebo Studies at Harvard Medical School, is noted for his work on placebo effects, antidepressants, expectancy and hypnosis, and is the originator of response expectancy theory. He is coming to give a talk at Bournemouth University. His influential book "The Emperor's New Drugs: Exploding the Antidepressant Myth" was shortlisted for the 2010 Mind Book of the Year award and was the central premise of a CBS 60 Minutes documentary. His work has changed how antidepressants are prescribed in the UK. He will be giving a public lecture on "The Wonderful World of Placebo" on Wednesday the 21st of June at 6.30pm in the Allesbrook LT. I have created an Eventbrite registration page (https://thewonderfulworldofplacebo.eventbrite.co.uk) should you want to attend. Professor Kirsch will also be giving a talk more directly about his book on Friday the 23rd of June at 12.30pm in the Lawrence LT. The abstracts for both talks are below.
The Wonderful World of Placebo
Wednesday 21st of June 2017 18:30 in the Allesbrook LT
Abstract
There is not just one placebo effect; there are many placebo effects. Placebo effects can be powerful or powerless depending on the color, dose, strength of the active treatment, branding, price, mode of administration, and the condition being treated. Psychological mechanisms underlying the placebo effect include Pavlovian conditioning, expectancy, and the therapeutic relationship. Because the placebo effect is a component of the response to active treatment, these mechanisms can be used to enhance treatment outcome. Also, contrary to received wisdom, placebo treatment can produce meaningful effects even when placebos are given openly without deception.
The Emperor’s New Drugs: Exploding the Antidepressant Myth
Friday 23rd of June 2017 12:30 in the Lawrence LT
Antidepressants are supposed to work by fixing a chemical imbalance, specifically, a lack of serotonin or norepinephrine in the brain. However, analyses of the published and the unpublished data that were hidden by the drug companies reveal that most (if not all) of the benefits are due to the placebo effect, and the difference in improvement between drug and placebo is not clinically meaningful. Some antidepressants increase serotonin levels, some decrease serotonin, and some have no effect at all on serotonin. Nevertheless, they all show the same therapeutic benefit. Instead of curing depression, popular antidepressants may induce a biological vulnerability making people more likely to become depressed in the future. Other treatments (e.g., psychotherapy and physical exercise) produce the same short-term benefits as antidepressants, show better long-term effectiveness, and do so without the side effects and health risks of the drugs. | https://blogs.bournemouth.ac.uk/research/author/bparris/ |
The concept of probability in physics: an analytic version of von Mises' interpretation

Louis Vervoort, Centre Interuniversitaire de Recherche sur la Science et la Technologie (CIRST), University of Quebec in Montreal (UQAM), Montreal, Canada
[email protected], [email protected]

Abstract. In the following we will investigate whether von Mises' frequency interpretation of probability can be modified to make it philosophically acceptable. We will reject certain elements of von Mises' theory, but retain others. In the interpretation we propose we do not use von Mises' often criticized 'infinite collectives' but we retain two essential claims of his interpretation, stating that probability can only be defined for events that can be repeated in similar conditions, and that exhibit frequency stabilization. The central idea of the present article is that the mentioned 'conditions' should be well-defined and 'partitioned'. More precisely, we will divide probabilistic systems into object, initializing, and probing subsystems, and show that such partitioning allows to solve problems. Moreover we will argue that a key idea of the Copenhagen interpretation of quantum mechanics (the determinant role of the observing system) can be seen as deriving from an analytic definition of probability as frequency. Thus a secondary aim of the article is to illustrate the virtues of analytic definition of concepts, consisting of making explicit what is implicit.

1. Introduction.

The first and simplest axiomatic system for probability, published in 1933 by Andrei Kolmogorov (1933/1956), is generally believed to cover all probabilistic systems of the natural, applied and social sciences. At the same time Kolmogorov's theory – the calculus of probabilistic or random events – does not define what a 'probabilistic / random event' is and does not provide any interpretation of the notion of probability – besides of course through the mathematical axioms it fulfills.
For an intuitive understanding, beyond mathematics, of what probability 'is', one thus needs to resort to the ideas of other fathers of probability theory, such as Laplace, Fermat, Venn, von Mises, and to the philosophers having investigated the question (for general references on the different interpretations of probability see e.g. Fine (1973), von Plato (1994), Gillies (2000), Khrennikov (2008)). Based on our practice as a physicist, we would think that in the broader community of (classical) physics the most popular interpretation of probability is the frequency model – especially the limiting-frequency version due to Richard von Mises (1928, 1964). There seems to be a good reason for that: in the practice of physics any experimental determination, verification or measurement of a probability is always done by determining a relative frequency. However, in philosophy and the foundations of quantum mechanics other interpretations, in particular the subjective interpretation, are increasingly popular. It suffices to consult contemporary works on the interpretation of probability in physics to realize how vivid and wide-ranging the debate is. One conclusion of a very recent review work is that for instance the objective / subjective controversy is far from being settled (see the excellent Beisbart and Hartmann 2011). At any rate, as both a philosopher and a physicist we cannot escape noticing the relative underrepresentation of the frequency interpretation à la von Mises in the contemporary debate on probability, even in philosophy of physics. This may be due to the fact that more basic frequency interpretations have extensively been analyzed and found wanting (cf. e.g. Fine (1973), von Plato (1994), Gillies (2000) and references therein). Moreover, there are well-known mathematical problems with von Mises' work, notably related to the notion of 'collective'. It seems to us that all this may have unduly discredited the valuable parts of von Mises' work.
In any case we believe that a revisit of von Mises' interpretation, taking these criticisms into account, is a worthwhile effort – if only to balance the debate. Specifically, we will argue that von Mises' work, if amended, can become an efficient tool for problem solving. Thus we will not focus here on the interpretation of probability in general, i.e. on the concept of probability as it is used in general philosophy, social science, everyday life, etc. Rather, we will focus on the notion as used in physics solely. To that end we will propose an analytic definition of probability that can be seen as a modification of von Mises' definition (1928, 1964) [footnote 1: in essence, von Mises interprets the probability P(Rj) as lim_{n→∞} n(Rj)/n, where n(Rj) is the number of events or trials that have result Rj in a series of n trials (j ∈ {1,...,N})]. We try to make explicit some notions which we believe are implicit in von Mises' theory. Moreover we claim this can be done without using much mathematical formalization, but by applying a simple but precise conceptual analysis. Also van Fraassen has provided a well-known analysis of probability as frequency (1980, pp. 190–194). Based on a study of how probability is used in physics, van Fraassen presents a logical analysis of how to link in a precise way physical experiments to probability functions. The author gives as a summary of his elaborate model the following definition (1980, p. 194): "The probability of event A equals the relative frequency with which it would occur, were a suitably designed experiment performed often enough under suitable conditions." Our definition will be seen to be in agreement with the above; but our starting point will be different, namely to investigate what precisely a probabilistic physical system is. In the following we will give in particular a further analysis of what the 'suitable conditions' in van Fraassen's definition are, and what 'often enough' might mean.
At least one other very recent work concludes that von Mises' theory is highly relevant for interpretational questions (Khrennikov 2008); we will make a link also to this reference. Finally, we also found Kolmogorov's and in particular his pupil Gnedenko's standard references on the calculus highly recommendable sources for studying the interpretation of probability (cf. Kolmogorov 1933/1956, Gnedenko 1967) [footnote 2: we found in particular Gnedenko's book on probability calculus helpful. Boris Gnedenko did not only substantially contribute to probability calculus (especially on elaborations of the Central Limit Theorem and statistics); his work manifestly reflects his interest in the foundational issues. Also Kolmogorov insisted on the need of a philosophical definition of probability (cf. his (1933/1956) p. 9)].
Anyone who has tried to construct a precise definition of probability soon realizes that the topic is indeed surprisingly subtle. In a standard physics curriculum, students are offered at least two interpretations of probability: the classic interpretation of Laplace for chance games, and the frequency interpretation for natural probabilistic phenomena (cf. almost any textbook on probability calculus, e.g. Tijms 2001, Gnedenko 1967). Modern books may also present the subjective interpretation, without linking it to the other interpretations (e.g. Tijms 2001). What we will term 'chance games' are random experiments using man-made probabilistic systems, such as dice, urns containing balls, cards, roulettes, etc. 'Natural' probabilistic phenomena are ubiquitous in nature and can be found in diffusion, population dynamics, fluid dynamics, quantum mechanics and in any branch of natural science. Scientists typically suppose that these phenomena spontaneously happen in nature according to the laws of probability theory, also when nobody looks. A first challenge we are interested in is to devise a unified definition that applies to both chance games and natural probabilistic phenomena – both concern physical systems, after all. Such a definition seems highly desirable, not only for foundational but also for practical reasons. First, it suffices to have a look at the texts of von Mises or almost any reference text on the calculus to realize that, without a precise idea of what probability is beyond the formal aspect, many a somewhat subtle problem is likely to be treated in a wrong manner. Von Mises provides many tens of such flawed calculations by even experts. No surprise, it has been said that "In no other branch of mathematics is it so easy to make mistakes as in probability theory" (Tijms 2004, p. 4). Needless to say, for foundational reasons it is even more important to have a clear idea of the implicit notions that the concept of probability contains. We will highlight the latter claim by exposing a (perhaps surprisingly) close link between the interpretation of probability and that of quantum mechanics. In some detail, the frequency interpretation we propose here differs from von Mises' theory (1928, 1964) in at least two respects (some may find our model actually quite different). To start with, our model is simpler. It is essential to note that a real-world (or physical) theory (T) for probability is the union of a calculus (C) and an interpretation (I): T = C ∪ I (Bunge 2006). The theory 'I' is not part of mathematics but stipulates how to apply the theory to the real world. 'I' may contain e.g. the philosophical hypothesis that the theory represents things or events 'out there'; and more importantly, it contains rules stipulating to which things / events the mathematics applies, and how. Von Mises proposed both a calculus C and an interpretation I.
His calculus is based on the concept of the collective (an in principle infinite series of experimental results) – a calculus that strikes, however, by its complexity, and that probably no-one would consider using nowadays on a regular basis (see e.g. von Mises' treatment of de Méré's problem, von Mises (1928) p. 58ff). Therefore we will not use von Mises' calculus (it leads of course to the same results as Kolmogorov's). Even within the interpretational part I, we will not make use of the cumbersome concept of collective; in particular we believe that it is not necessary to resort to the notion of 'invariance under place selection' to characterize randomness (see next Sections and Appendix 1; Gillies (2000) p. 112 actually proves this fact). So our attitude is pragmatic: few people challenge the idea that the mathematics of probability theory is fully contained in Kolmogorov's calculus – so our calculus (C) is Kolmogorov's. What is however controversial is the following question: what is the exact subject matter of probability theory – to what exactly does it apply? We will push the answer further than von Mises' classic answer: according to him probability theory treats "mass phenomena and repetitive events" (von Mises (1928/1981) p. v), characterized by a set of attributes that form the attribute space. (Anticipating, we will arrive at the conclusion that probability theory applies to a special type of random events, which we will term 'p-random'.) The definition we will propose in Section 3 captures, we believe, the essence of von Mises' interpretational model (I), but in a simpler form which is not based on the often criticized notion of collective. Besides a simplification, our analytic model is intended as a clarification of the frequency interpretation, as applied to physics. We believe it helps to avoid the paradoxes to which probability theory seems so sensitive.
Our main claim is that one substantially gains in partitioning probabilistic systems into subsystems, namely test object, initiating, and probing subsystem (or 'environment' if one prefers). We will argue that this partitioning allows to solve classic problems (such as Bertrand's paradox), and bring in focus a link between classical and quantum systems. The latter link has been emphasized by other philosophers from a different angle (Szabó 1995, 2000, 2001, Rédei 2010). As already mentioned, recently another author has come to the conclusion that von Mises' work is more than worth a reappraisal (cf. Khrennikov 2008). The author offers in his textbook a detailed and convincing analysis based on the mathematics of collectives. Khrennikov concludes in particular that well-known criticisms claiming that von Mises' theory (C) lacks mathematical rigor are not really cogent, as we argue for completeness in Appendix 1. But since we will not focus on the formalism but on a conceptual study (I) we at least are immune against the criticisms relative to C.

2. Introductory ideas: frequency stabilization and partitioned conditions. Examples.

The frequency interpretation of probability can be seen as in agreement with a general credo of philosophy of science (dating from the Vienna Circle or earlier) stating that to look for the meaning of concepts, it is a good starting point to ask how one measures / verifies instances of the concept – if this is possible. As already said, in physics any experimental probability is determined, measured, as a relative frequency, also probabilities in quantum mechanics. Of course theories may predict probabilities as numbers (such as the square of a modulus of a wave function, a transition amplitude, and values of any probabilistic property) that are not obviously ratios or frequencies, but to verify these numbers one needs to determine relative frequencies (we will illustrate this procedure in a few case studies).
This is the starting point of von Mises' frequency model. A point to mention from the start is that von Mises attributes probability only to series of experimental outcomes – series of outcomes of random experiments. In order to get a feel for the problems to be investigated, let us have a look at a typical probabilistic system, namely a die. A central question for us is: when or why is a die 'probabilistic', or 'random' (or rather the throwing events or outcomes)? Simply stating in non-anthropocentric terms what a die throw is seems already to bring to the fore a few key notions. In physical terms a 'die throwing event' or 'die throw' consists of the 3-dimensional movement of a (regular) die, that is 1) characterized by the time evolution of its center of mass and its three Euler angles, 2) caused or initiated by an 'initiating / initializing system' (e.g. a randomizing and throwing hand, or automat), and 3) probed by an 'observing / probing system' (e.g. a table) that allows to observe (in general 'measure') an outcome or result R (one up, two up, etc.). In the case of die throwing, it is easy to realize that if we want to use the die as it should be used, i.e. if we want to be able to observe or measure the usual probability for the different results of the throws, we have to repeat the experimental series (i.e. the throws) not only by using the same regular die, or similar regular dies, but also by applying the usual 'boundary conditions'. Here we obviously do not refer to the detailed initial conditions of each individual throw, which are unknown, but to the conditions of our random experiment as they can be communicated to someone else ("throw roughly in this manner, probe on a table", etc.). These boundary conditions, then, are related to our throwing, the table, and the environment in general. Irregular conditions in any one of these three elements may alter the probability distribution. We can for instance not substantially alter our hand movement, e.g.
by putting the die systematically ace up on the table: this could change the probability distribution for the six outcomes from the usual (1/6, 1/6, 1/6, 1/6, 1/6, 1/6) to (1, 0, 0, 0, 0, 0). Nor can we put glue on the table, position it close to our hand, and gently overturn the die on the table while always starting ace up – one of the outcomes could occur much more often than in a ratio of 1/6. Nor can we do some experiments in honey instead of air: again one can imagine situations in which the probabilities of the outcomes are altered. As will be seen further, it will make sense to isolate the mentioned elements, and to consider a random event as involving a random system containing three subsystems, namely 1) the (random) test object itself (the die), 2) the initiating system (the throwing hand), and 3) the probing or observing system (the table, and let's include the human eye). We will call such a composed random system a 'p-system' or probabilistic system. Just as one can associate subsystems to the random event, one can associate (composed) conditions to it, i.e. conditions under which the random experiment occurs, in particular initiating and probing conditions. Here is another example, from darts. According to the frequency interpretation, if we want to determine the probability that a given robot hits a given section of a dartboard (suppose there are 10 sections S1 to S10), we let it throw say 500 times, and count relative frequencies: n1 hits in S1, so P1 (the probability the robot hits S1 in the given conditions) is approximately n1/500, etc. Again, if we want to define probabilities for the experiment, the robot, the darts and the board need to behave well and 'operate' in well-defined 'constant' conditions.
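The robot-darts procedure can be written out directly as a simulation (a sketch only: the 'true' hit distribution below is invented so that the sampling has something to draw from):

```python
import random
from collections import Counter

# Simulate 500 throws of a robot at a board with sections S1..S10 and
# estimate each P_i as the relative frequency n_i/500. The underlying hit
# probabilities are hypothetical stand-ins for the robot's actual behavior.
TRUE_P = [0.30, 0.20, 0.15, 0.10, 0.08, 0.06, 0.04, 0.03, 0.02, 0.02]

def estimate(n=500, seed=7):
    rng = random.Random(seed)
    sections = [f"S{i}" for i in range(1, 11)]
    counts = Counter(rng.choices(sections, TRUE_P)[0] for _ in range(n))
    return {s: counts[s] / n for s in sections}

est = estimate()
print(est["S1"])  # an estimate of P1, near 0.30
```

Note that the estimate is only as good as the constancy of the conditions: if the simulated distribution drifted mid-run (the failing robot of the text), n1/500 would no longer approximate P1.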
The probabilities are defined relative to these conditions, referring to the test object (the darts), the initiating system (the throwing robot), and the probing or observing system (the board, and we can again include some optical detection system). For instance, if during our experiment the robot's output force diminishes due to a failing component in its circuit, or if some of the darts break during flight, P1 will be much lower than n1/500, etc 3 . All this may look almost trivial; and yet it will prove highly helpful to explicitly include these observations in the definition of probability we will propose in the next Section. At this point an essential remark is in place. Instead of 'initiating system' and 'probing system', it can be more appropriate to speak of 'environment', namely in the case of spontaneous or 'natural' probabilistic events (versus 'artificial' ones, as outcomes of chance games, which are created by human intervention). Many random events occur spontaneously, without any known cause. A spontaneously disintegrating nucleus has probabilistic properties (e.g. its half-life, i.e. the time after which it disintegrates with a probability = 1/2). Such properties are, on usual interpretations, not 'initiated', as in our die throwing, by a physical system or a person. Neither are they 'probed', except when subject to lab experiments. But the nuclei disintegrate spontaneously, according to a given well-defined probabilistic pattern, only in a well-defined environment. Changing the environment, e.g. by irradiating the nuclei or by strongly heating them, may very well change the probability distribution in question. In other words, if we want to determine (measure) the half-life of the nuclei, i.e. the probability of disintegration, we have to put them in well-defined ('initializing') conditions of temperature, pressure, etc. 
and measure their properties in well-defined ('probing') conditions, that scrupulously imitate their natural environment – if we want to know the properties 'in nature'. So also here the initial and final conditions, or environment, re-appear. (As a side-remark, note that the half-life of 'a' nucleus can only be measured on a huge ensemble of 'identical' nuclei.) By the above partitioning in subsystems we have thus rendered the concept of 'conditions' explicit. To the best of our knowledge, a somewhat detailed analysis of this notion has not yet been undertaken. It appears succinctly in Gnedenko's reference work on probability calculus (1967). Gnedenko states (p. 21): "On the basis of observation and experiment science arrives at the formulation of the natural laws that govern the phenomena it studies. The simplest and most widely used scheme of such laws is the following: Whenever a certain set of conditions C is realized, the event A occurs." And a little further (1967, p. 21): "An event that may or may not occur when the set of conditions C is realized, is called random." Thus on a natural interpretation, and as is already implicit in Kolmogorov (1933/1956, p. 3) and explicitly stated by Gnedenko (1967, p. 67), even unconditional probabilities (P(R)) can be regarded as conditional on the circumstances of realization. Therefore the natural generalization in the model we will propose below is to replace, if helpful, P(R) by P(R|C), where C contains all relevant parameters that describe the initiating, probing, and 'environmental' experimental conditions. The importance of what we termed 'probing conditions' has been emphasized by other philosophers in the specific context of quantum probabilities.

3 Notice there is some arbitrariness in where we draw the line between the different subsystems. But that doesn't play any role in the following.
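The replacement of P(R) by P(R|C) can be made concrete in a toy sketch: probabilities are indexed by the conditions C under which the experiment is run. All names and numbers below are ours, invented purely for illustration:

```python
# Toy illustration of P(R | C): the same test object (a die) has different
# outcome distributions under different (initiating, probing) conditions.
P = {
    ("regular throw", "table"):  {r: 1 / 6 for r in range(1, 7)},
    ("placed ace up", "table"):  {1: 1.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0},
}

def prob(result, conditions):
    """P(result | conditions): undefined if the conditions were never tested."""
    return P[conditions][result]

print(prob(1, ("regular throw", "table")))   # 1/6 under regular conditions
print(prob(1, ("placed ace up", "table")))   # 1.0 under 'deterministic' conditions
```

The point of the sketch is only that a probability value is always attached to a full set of conditions C, never to the die alone.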
According to Szabó's 'Kolmogorovian Censorship Hypothesis' (Szabó 1995, Szabó 2001) there are no genuine nonclassical probabilities, in the sense that 'quantum probabilities' are always classical conditional probabilities of the measurement outcomes, where the conditioning events are the events of choosing / performing the measurement setup to measure a certain observable. This claim can well be seen to be fully agreeing with the model we will elaborate here. The relevance of this statement for the foundations of quantum mechanics will be argued for below (and see Szabó 1995, 2000, 2001, Rédei 2010). Now that we have a somewhat better idea about the above mentioned conditions, it seems not difficult to deduce the physical conditions for 'randomness' of a well-known physical system such as a thrown die (we will see further that the condition proposed above by Gnedenko is not sufficient). Suppose we throw a die 20, 30, in general n times (n >> 6), and that we note how often the 6 results Rj (j = 1,..., 6; Rj = 1,..., 6) occur in these n throws. We can then determine the relative frequencies of the results Rj, namely the 6 ratios nj/n, where nj is the number of throws that have result Rj and n the total number of throws. Now, according to the frequency interpretation (and to any physicist's intuition and practice), in order that the die throws can be termed 'probabilistic' or 'random' or 'stochastic', it is a necessary condition that the relative frequencies of the Rj (the ratios nj/n) converge towards a constant number when n grows. For a regular die thrown in regular conditions it is an empirical matter of fact that the six ratios nj/n converge towards 1/6 when one increases the number of trials n; in other words the nj/n approach this number with increasing numerical precision. If such 'frequency stabilization' would not occur, one cannot speak of probability 4 .
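This convergence is easy to exhibit numerically. The following sketch (simulated throws standing in for real ones) tracks the six ratios nj/n as n grows:

```python
import random

def running_frequencies(throws):
    """Return the six ratios n_j / n recorded after each successive throw."""
    counts = [0] * 6
    history = []
    for n, face in enumerate(throws, start=1):
        counts[face - 1] += 1
        history.append([c / n for c in counts])
    return history

rng = random.Random(1)
throws = [rng.randint(1, 6) for _ in range(100_000)]
hist = running_frequencies(throws)

# After 100 throws the ratios are still rough; after 100 000 throws they
# have stabilized near 1/6 ≈ 0.167.  A die whose bias drifts during the
# run would instead show ratios that keep wandering.
print([round(x, 3) for x in hist[99]])
print([round(x, 3) for x in hist[-1]])
```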
If for instance the die would sublimate during the experiment in an asymmetric manner (thus gradually losing weight on one side), or if we accompany the die's movement in a structured manner that does not lead to frequency stabilization, we can in general not attribute a probability to the outcomes. If we repeat the throws, the above ratios may not converge but erratically jump from one value to another. In sum, if there is no frequency stabilization, the die throwing is not probabilistic. On the other hand, if frequency stabilization occurs, the probability of the result Rj is given by P(Rj) = nj/n = (number of events that have result Rj) / n, where n must be large enough (we will come back to the latter condition in a moment). This is, or this is in full agreement with, the frequency interpretation of von Mises and others (von Plato (1994), Gillies (2000), Khrennikov (2008)). At least in physics, frequency stabilization appears to be a property that any system that deserves the predicate 'probabilistic' necessarily must have, as we will illustrate below by a realistic physical case study. Von Mises writes e.g. (1928, p. 12): "It is essential for the theory of probability that experience has shown that in the game of dice, as in all the other mass phenomena which we have mentioned, the relative frequencies of certain attributes become more and more stable as the number of observations is increased." Let us then illustrate the ideas of (partitioned) conditions and frequency stabilization by a realistic example from the physics lab, namely recent experimental findings in fluid mechanics and nonlinear physics, which have sparked much interest in the physics community in the last years. A group of experimentalists have discovered that oil droplets can be made to hover over an oil film (Couder et al. 2005, 2006 and Eddi, Couder et al. 2011).

4 The term 'frequency stabilization' appears in von Mises' work; in our opinion it best captures the property in question.
To that end, the oil film is made to vibrate by an external motor. If small oil droplets are deposited on such a film they sometimes begin to horizontally walk over the film, for indefinite time; actually they bounce so fast on the film that they seem to hover 5 . However this stable walking regime only occurs in well-defined experimental conditions, i.e. for precise values of the physical parameters of the system, essentially the frequency and amplitude of the external vibration, the size of the droplet, the geometry of the oil film and bath, and the viscosities of film and droplet. If these parameters are fine-tuned and lie within precise ranges of values, well-documented by the researchers, the droplets walk horizontally; outside these value ranges the movement becomes erratic and/or the droplet is captured by the film. Now, in the walking regime it is possible to experimentally determine certain probabilities, notably the probabilities PR that the walking droplet or 'walker' passes through a certain space region R. To that end, the physicists have counted relative frequencies (cf. e.g. Couder et al. 2006, Fig. 2-3), much as we illustrated above in the case of a dartboard: they determine nR/n, where nR is the number of trials in which the droplet passes through R and n the total number of trials. (Obviously they create droplets in identical conditions and measure them in identical conditions.) However, before determining numerical values of probabilities such as PR, the physicists have spent weeks, months and possibly years to identify the exact conditions for which a stable probabilistic pattern (i.e. probabilities PR) occurs. To prove that the system is probabilistic (and not erratic) there is only one way: namely to prove that frequency stabilization occurs, i.e. to prove that the ratios nR/n converge to a constant number with ever better precision when n is increased.
When experimentalists state numerical values of probabilities in publications, they have assured that such a frequency stabilization has occurred in their experiments (note that ratios / frequencies could also be determined without first verifying that a stable regime is identified, but it would be scientifically unacceptable to call such ratios probabilities). In physics this typically manifests itself by the fact that a probability histogram (e.g. nR/n as a function of discrete values of R) is better and better approximated by a smooth curve when n is increased (see e.g. Couder et al. (2006), Fig. 2-3, where both histogram and curve are shown); often one publishes just the curve. This curve, in other words these limiting values of nR/n, are supposed to be 'the' probabilities, or good approximations of them. Note that van Fraassen's condensed definition, "The probability of event A equals the relative frequency with which it would occur, were a suitably designed experiment performed often enough under suitable conditions" (van Fraassen 1980, p. 194), is in agreement with this operational procedure. Below we will fill in what 'often enough' might mean.

5 These walking droplets exhibit moreover a behavior that strikingly imitates quantum behavior, including double-slit interference, tunneling and quantization of angular momentum. Couder et al. have quite convincingly shown that the origin of such a quantum-like behavior lies in the wave-front that accompanies the hopping droplets (the wave is created by the external vibration and by the back-reaction of the bouncing droplets on the oil film). Thus they could claim that these walking droplets are the first realization of a "particle + pilot-wave" (Couder et al. 2005, 2006). Here we do not focus on these intriguing quantum-like features, but on the classical probabilistic features, which were also extensively investigated by Couder et al.
The important point that the above example highlights is that frequency stabilization, i.e. (the observation of) a probabilistic pattern, in other words probabilities, only occurs under well-defined conditions, and by no means always. Very small variations 6 in some parameter (e.g. in the external vibration frequency, the droplet size, etc.) may lead to completely different probabilities, or to erratic behavior without stable relative frequencies, or even to absorption of the droplet by the film. Therefore if a physicist who is not an expert in the dynamics of oil droplets on films – i.e. almost anyone – is asked to determine the probabilities PR by experimental observation, she may well come to the conclusion, even after weeks of experimentation, that the system is not probabilistic, or if she is more cautious that she did not identify conditions in which the system is probabilistic. Before generalizing these ideas, a remark is in place. As is well known, von Mises uses in his theory the concept of limiting relative frequency, for the limit n → ∞. This limit for n tending to infinity is considered problematic by some authors. For the sake of completeness, we will analyze some of these criticisms in Appendix 1 (our own model will however not use the notion of infinity). In all fairness one should remark that it is often forgotten that von Mises has dealt in detail with many of these objections (see in particular the introductions of his 1928/1981 and 1964). Among other points von Mises recalls that probability theory is not only a mathematical but also a physical theory (it applies to the physical world). And the concept of infinity is ubiquitous in theoretical physics, even if it is in many calculations obviously an idealization – a useful one when the formulas correctly predict reality to the desired precision.

6 The extreme dependence of physical properties on certain variables is typical for nonlinear systems, such as those investigated by Couder et al. The oil droplet bouncing on the film is governed by a nonlinear equation of movement, e.g. due to the so-called 'viscous friction' between droplet and film.

On reflection, we believe that von Mises has attracted criticisms that could have been avoided had he made the following observation. It is well-known that his definition of probability as lim_{n→∞} n(Rj)/n cannot work in the mathematically strict sense (cf. e.g. Richter (1978), p. 426). Mathematically,

P(Rj) = lim_{n→∞} n(Rj)/n  ⟺  ∀ε > 0, ∃N: ∀n ≥ N, |n(Rj)/n − P(Rj)| < ε.   (1)

However, for any probabilistic event, for a given ε and for some n in the series of experimental results the last inequality in (1) may of course not be satisfied. Definition (1) works for defining the limit of mathematical sequences which evolve monotonically towards a limiting value. But for the sequence of physical, experimental results {n(Rj)}, an overwhelming amount of experimental data has indicated that the ratios n(Rj)/n tend towards a fixed number but in general in an oscillating manner – as anyone can verify by throwing dice, etc. Yet with some charity one understands what is meant with "lim_{n→∞} n(Rj)/n": namely the "experimentally determined value of convergence (not mathematical limit in the strict sense) of the ratio if one would continue – or if one continues – determining the ratio for increasing n". In sum, it seems we can forgive von Mises' slight misuse of a mathematically precise concept; and maybe he could have avoided criticisms if he would have defined probability in terms of say "lim*_{n→∞} n(Rj)/n", where lim* is the operational, experimental notion just defined. At any rate, this is not our problem: we will in the following not define frequency stabilization in terms of convergence in the limit n → ∞, but 'for n large enough for the desired precision'. This being said, we believe that the just defined operational notion 'lim*', or frequency stabilization, reflects a particularly interesting fact of nature.
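The operational notion lim* can be made concrete in code: instead of a mathematical limit, one records the trial N after which the running ratio never again deviates from its final observed value by more than the desired precision ε. A sketch, with a simulated fair die merely standing in for real experimental data:

```python
import random

def operational_limit(outcomes, result, eps):
    """Sketch of lim*: return (final ratio, N), where N is the first trial
    such that for all n >= N the running ratio n(R_j)/n stays within eps
    of its final observed value.  No claim about n -> infinity is made;
    'convergence' is relative to the data actually collected."""
    count = 0
    ratios = []
    for n, r in enumerate(outcomes, start=1):
        count += (r == result)
        ratios.append(count / n)
    final = ratios[-1]
    # last trial at which the running ratio still deviated by eps or more
    last_bad = max((n for n, x in enumerate(ratios, start=1)
                    if abs(x - final) >= eps), default=0)
    return final, last_bad + 1

rng = random.Random(2)
tosses = [rng.randint(1, 6) for _ in range(20_000)]
print(operational_limit(tosses, result=1, eps=0.01))
```

The oscillating approach described above is visible here too: the running ratio crosses its final value many times before settling within the ε-band.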
At the very basis of the frequency interpretation lies the scientific hypothesis that if certain well-defined experiments (trials) would be repeated an infinity of times, certain ratios (involving outcomes of these trials) would converge to a limit. The modal aspect of the concept of probability has been highlighted in detail by van Fraassen (1980). And experiment has shown over and over again that long series (obviously not infinite ones) do show such frequency stabilization 7 .

7 Note that these massive empirical findings corroborate the initial hypothesis (namely of frequency stabilization as a real-world phenomenon) as strongly as any hypothesis of the natural sciences can be confirmed: scientific hypotheses are never confirmed with absolute certainty. It may be that after repeated measurements of a magnetic field strength (say 0.5 Tesla) in a given point X, the next measurements unexpectedly jump to 0.8 Tesla. This would not be enough ground to reject the hypothesis that, in the given conditions, there should be a fixed field in X. Physicists would look for causes of the jump. Similarly, if in a given experiment that is known to be p-random frequency stabilization would not occur, one would not reject frequency stabilization but look for causes to explain the strange behavior.

Let us finally note that, as formulated above, random systems include deterministic systems. Indeed, in the latter case frequency stabilization will also occur. If we move a die by placing it systematically – deterministically – ace up on the table, the six P(Rj) will trivially converge to a value (namely to 1, 0, 0, 0, 0, 0). We could exclude deterministic cases from our definition of random events / systems by adding that, besides exhibiting frequency stabilization, the events / systems should be unpredictable. Thus 'genuinely' random events show frequency stabilization and are unpredictable (or one could replace the latter by another predicate that qualifies the 'unstructured', disordered nature of randomness).

3. Generalization, definitions.

Before proposing our definitions, which will essentially be straightforward generalizations of what we learned in the preceding Section, two philosophical problems deserve to be highlighted. We argued with von Mises that the essential feature qualifying physical events or systems as random, or probabilistic, is frequency stabilization. At this point an obvious, but neglected, question pops up: are all physical random systems really random – in the sense of probabilistic? In the case of systems studied in physics, one often declares that physical systems are either random in the probabilistic sense, or deterministic. This classic dichotomy is for instance investigated by Bell's theorem (see e.g. Vervoort [2013]). However the above question becomes more interesting in the case of 'physical systems' in a larger sense, especially if the systems in question 'contain' human beings. If frequency stabilization is indeed the essential feature of real-world random systems and events, then it is clear that not all so-called random systems are random in the sense of probabilistic or stochastic. Frequency stabilization is by no means guaranteed for an arbitrary disordered event, system or parameter. Consider a mountain torrent, and define a parameter characterizing its flow, such as the height of its waves or its rate of flow through a given surface. Such a torrent may show such erratic behavior over time that it may be impossible to define a parameter of which the frequency stabilizes. In the former Section we saw a concrete example from physics (fluid dynamics) that illustrates the daunting practical difficulty of finding conditions of probabilistic stabilization. Arguably a majority of physical systems are so complex and nonlinear that they are extremely sensitive to variations in the parameters that describe them.
As illustrated above this leads often or 'normally' to the impossibility of determining conditions of frequency stabilization. Sure, this is a statement of a pragmatic nature – maybe one could argue that an omniscient being could know such conditions. As a next example, atmospheric temperatures in a certain city look random, they even look statistically random, but they arguably don't show frequency stabilization over long periods, e.g. because of climate changes 8 . What is even less likely to exhibit frequency-stabilized features, is 1) any 'random' system that is subject to human actions, e.g. the chaos on my table, or the random-looking features of a city-plan, or the duration of my Sunday walks etc., and 2) any 'composite' random-looking system, like a car and the house in front of it, or a tree, a stone underneath, and a cloud above. In sum, it seems that frequency stabilization happens often 9 , but certainly not always. Then not all randomness is the randomness that is the object of probability calculus. This observation justifies the introduction of the concept of 'p-randomness', to be contrasted with 'randomness' and 'non-p-randomness': p-randomness, or probabilistic or structured randomness, is randomness (chance) that is characterized by frequency stabilization (at infinity, or 'for large n'). Before one applies probability calculus to a physical event, one conjectures or has evidence that it is p-random – a feature that can only be confirmed by experiment (however never with absolute logical certainty, but with the certainty or uncertainty that characterizes empirical hypotheses). Note that the technically familiar way to characterize a parameter or variable R as p-random, is to say that it has a 'probability density function', a parameter often symbolized by 'ρ' in physics (ρ = ρ(R)). Another philosophical problem that deserves attention is the following.
In Section 2 we have emphasized the role of the conditions under which experiments disclosing the p-random nature of physical events need to be performed. Such experiments involve repeated tests, in which the object under study (e.g. the die) should be subjected to repeatable, 'similar' or 'identical' conditions before being probed. It is a well-known problem of the foundations of probability to identify what 'similar' / 'identical' exactly means. How similar is similar (we will propose an answer in C4 below)? Now we all know by experience that we do not need a lot of precautions when shaking a true die so as to launch it in a 'regular' manner – leading to the normal probability distribution of a true die. In other words, it seems that in general we have a good idea what these 'similar conditions' are; but that is certainly not always the case – in particular in physics, and even more so in quantum mechanics. At this point the idea of partitioning we introduced in the preceding Section proves useful. Indeed, similar tests on similar systems imply, in general: similar initiating system (or environment), similar test object, and similar probing system.

8 This may however be seen as a limiting case, in which frequency stabilization in experiments occurs 'for large n', even if not 'at infinity'. Indeed, modern meteorologists heavily use statistical talk: they assume that the conditions at a given time in a given place are sufficiently similar to those for which they have a large set of data which allow to deduce (approximate) probabilities. Since these data sets are very large, one can accept that statistics is still applicable here and leads to 'reasonable' precision. But not the kind of precision one can attain in more typical physical systems, for which the sampling size can be virtually infinite.

9 A particularly beautiful mathematical argument for the ubiquity of probabilistic randomness is provided by the Central Limit Theorem.
These three subsystems should 'act', in the repeated test sequence, in similar (enough) ways in order that a probability can be defined. Alterations or divergences in any of these three subsystems can lead to different probabilities (probability distributions) – the latter may not even be defined, i.e. exist. Remember the sublimating die, or the inadequate randomization, which can lead to an undefined (i.e. changing, unstable) probability distribution of the die throws. Or recall the experiments in fluid mechanics of Couder and collaborators (Couder et al. 2005, 2006 and Eddi, Couder et al. 2011). This is just to put the remainder, and in particular the question what 'similar' means, in perspective. We now propose a definition that can be derived from von Mises' theory, but that avoids the concept of collective. The move we make is essentially the following. 1) Von Mises starts by defining a collective as an (in principle infinite) series of random results, and defines probability with respect to a given collective. But we have just argued that probability cannot be attributed to any random series, but only to a p-random series. We therefore first define the concept of frequency stabilization or p-randomness: it is logically prior to that of probability. 2) We partition the probabilistic system, as explained in Section 2. We emphasize that we do so essentially for pragmatic reasons – i.e. for addressing concrete problems as those presented in this and the next Section. So our aim is quite minimal and we refrain from strongly metaphysically tainted claims – in any case if they have no verifiable function of problem-solving 10 . The task is then to construct a definition that makes the notion of 'experimental conditions' explicit, and that applies to both chance games and natural probabilistic phenomena. If we say 'identical' in DEF1 and DEF2, one should read 'identical or similar enough', a predicate which will be specified in C4 below.
We have explained in Section 2 how a stochastic system, or experiment in which stochastic events occur, can be partitioned in subsystems (test object, initiating and probing subsystems, or environment for natural phenomena).

DEF1. A system (or event occurring in the system) possesses the property of frequency stabilization IFF (i) it is possible to repeat n identical experiments (with n a number sufficiently large for the desired precision) on identical test objects by applying n times identical initial and final actions or conditions on the object (realized by the initiating and probing subsystems, or more generally, by the environment); and (ii) in this experimental series the relative frequencies of the outcomes Rj (j = 1,..., J) of the corresponding experimental events (detected by the probing subsystem) converge towards a constant number when n grows; more precisely, the ratios { (number of events in the series that have result Rj) / n } converge towards a constant number when n grows, for all j.

DEF2: Only for an event (or system, or outcome, or variable) that shows frequency stabilization according to DEF1, as it can be tested by an experiment in well-defined conditions, the probability of the result or outcome Rj (j = 1,..., J) of the event is defined, and given by: P(Rj) = (number of events that have result Rj) / n, for n large enough for the desired precision, and for all j.

Following von Mises we conjecture that the above definitions allow to characterize all physical probabilistic phenomena; i.e. that they cover the physical events or systems described by probability calculus. To some believers of the frequency interpretation this claim may seem acceptable or even obvious. Others may find that a heavy burden of providing arguments awaits us.

10 For instance, von Mises' claim that probability is relative frequency 'at infinity', or Popper's claim that probabilities are propensities (cf. next footnote) might be seen as such less concrete statements.
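DEF1 and DEF2 translate almost directly into a testing procedure. The sketch below operationalizes 'convergence when n grows' crudely, by demanding that the relative frequencies after n/2 and after n trials agree within a tolerance; `run_experiment` stands for one trial performed under fixed initiating and probing conditions (the function names and the tolerance criterion are ours, chosen for illustration):

```python
from collections import Counter
import random

def probabilities_if_stabilized(run_experiment, n, tol=0.01):
    """DEF1 sketch: repeat n 'identical' experiments and check that the
    relative frequencies have settled (here: frequencies at n/2 and n
    agree within tol for every outcome).  DEF2 sketch: only in that case
    return the ratios as probabilities; otherwise return None."""
    outcomes = [run_experiment() for _ in range(n)]
    half = Counter(outcomes[: n // 2])
    full = Counter(outcomes)
    freqs_half = {r: c / (n // 2) for r, c in half.items()}
    freqs_full = {r: c / n for r, c in full.items()}
    stabilized = all(abs(freqs_full[r] - freqs_half.get(r, 0.0)) < tol
                     for r in freqs_full)
    return freqs_full if stabilized else None

rng = random.Random(3)
probs = probabilities_if_stabilized(lambda: rng.randint(1, 6), n=200_000)
print(probs)
```

For a process without frequency stabilization (e.g. a die whose bias drifts strongly during the run) the same procedure would return None: no probability is defined, exactly as DEF2 demands.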
Clearly, we cannot review all probabilistic systems and show how the definitions apply in all these cases. But this article will provide several examples, and the cited works of von Mises offer a wealth of these (von Mises 1928, 1964). Moreover, under C1 – C6 we provide those arguments and completing notes that seem the most important to us in rendering the above claim defendable.

C1. It is well-known that the frequencies or ratios defining probability in a model like the above fulfill the axioms of Kolmogorov's probability calculus (cf. e.g. van Fraassen (1980), Gillies (2000), Khrennikov (2008)).

C2. If one reduces the definitions to their essence, they state that probability only exists for systems or events that can be subjected to massively repeated physical tests, occurring under well-defined (and composed) conditions; and that in that case probability is given by a simple ratio or 'frequency'. The numerical precision of that ratio grows when the number of experimental repetitions grows. Of course, probabilities of events may 'exist' even if no experiments have been or will be done; but it must be physically possible that such an experiment be done – as stipulated in DEF1. The intimate link between probability and physical experiments, leading to the idea that probability is a property not of objects but of repeatable events, or even better of repeatable experiments, is obviously built-in in von Mises' concept of collective (a series of experimental results), as in our model. In the case of physics, this link has been further emphasized and analyzed by philosophers such as Popper (1957, p. 67) 11 and van Fraassen (1980, Ch. 6) (see also e.g. von Plato (1994)). Note that the definitions can be used both for artificial systems (chance games, typically) and natural systems (behaving in a p-random manner without human intervention).
In other words our phrasing allows to see how the same concept applies to such widely different events as a die thrown by a human, a molecule in a gas, and a quantum measurement. Indeed, in the case of chance games the initializing and probing subsystems are easily identified as truly separate systems acted upon, or made, by humans (see also C3). In the case of natural probabilistic phenomena (quantum phenomena, diffusion phenomena, growth phenomena, etc.) the initializing and probing systems coincide with – are – the environment. A detailed application of DEF1+2 to the case of gas molecules is given in Appendix 2, as well as other examples. DEF1 of p-randomness is stated in terms of 'possible' repetitions, or of 'possible' trials, not necessarily actual ones, which highlights the modal aspect of probability (van Fraassen 1980). As already stated, natural, spontaneous random events are normally not initiated nor measured by humans in repeated experiments. Still, according to the model we advocate, they only 'have' a probability if they occur, or can occur, on a massive scale as could be imitated in the lab, or studied under repeatable conditions. The probabilities of such natural events can only be revealed by experiments imitating the environment; they can sometimes be hypothesized by calculations based on experimental data / assumptions, and representing such experiments 12 (cf. examples in Appendix 2); at any rate such probabilities can only be verified / proven by real experiments. But this seems to be nothing more than the scientific paradigm.

11 Popper (1957, p. 67) famously interprets probability as the propensity that a certain experimental set-up has to generate certain frequencies.

C3. In physics most p-random phenomena or events are characterized by outcomes that are (described by) continuous variables, rather than discrete ones.
In other words, R will range over an interval of the real numbers, rather than taking discrete values Rj, j = 1,..., J as in die throws, coin tosses and other chance games. One could observe that these objects used in games are constructed on purpose by humans in order to 'produce' a limited number of discrete outcomes – in contrast to natural systems. The latter, like diffusion phenomena, gas kinetics, biological growth phenomena, etc. have outcomes that are continuous, or quasi-continuous. (A notorious exception are certain quantum systems, notably when they are carefully 'prepared' – we would say 'initiated'. Still many or most quantum properties have a continuous spectrum.) In the case of a continuous p-random variable x (we used R before), one defines in probability theory the probability density function ρ(x), the meaning of which is however defined via the concept of probability. Indeed, in the continuous case P(Rj) = ρ(Rj).dR, defining ρ via P as characterized in DEF2. One can thus for the present purpose safely treat probability density functions and probability on the same foot. A model that interprets probability for discrete variables also does the job for the ubiquitous continuous variables. (Also note in this context that one can formally describe discrete cases by a density function ρ(R) = Σj Pj.δ(R − Rj), where δ is the Dirac delta-function: the discrete case is a special case of the more general continuous case.)

C4. DEF1 of p-randomness relies on the notion of 'identical' or 'similar' conditions and objects. It may thus look suspicious in the eyes of a philosopher. But we will now argue that the definition is sound in the sense that it allows to define p-randomness and probability in a scientifically sound manner. First, notice that it seems impossible to define 'similar' or 'identical' in a more explicit manner. One could try in the following way: "to the best of common knowledge, or of expert knowledge, one should repeat the 'same' event in a manner that allows to speak about the 'same' event." But does this help? It seems however there is no real problem. The important point is that, as defined, frequency stabilization can be tested for by independent observers; and these observers can come to the same conclusions. The conditions for doing the 'frequency stabilization test' can be communicated for any particular system: "do such-and-such ('identical') initial and final actions on such-and-such ('identical') systems and the probabilities Pj will emerge. I found frequency stabilization and the probabilities Pj, so if you do the experiment in the 'right, identical' conditions, you should find them too." It would seem that the problematic term 'identical / similar' of probability theory can thus be defined in an operationally fully consistent manner. We believe the above shows that one could, or should, speak of 'p-identical', or rather 'ρ-identical', events / systems, where ρ is the probability density of the outcomes of the events under study.

12 Von Mises found it important to emphasize that when one applies probability theory to a concrete problem, one always starts from assuming a certain probability distribution. One of his best known maxims is "Probability in, probability out". E.g., when one calculates the chances in urn picking by combinatorics, one typically (and almost always implicitly) starts from assuming that the chance that a given ball is picked is equal for all balls. Then one proceeds to the problem posed (such as: what is the probability that two successively picked balls are white?). In theoretical physics too, one often guesses or has evidence or derives by using physical theory that the probability distribution of a certain property is of a certain form (say a Gaussian); based on this hypothesis one then uses probability calculus to derive other probabilistic predictions which can be tested. Examples of this are given in Appendix 2.
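As a numerical aside to C3 above: the relation P(Rj) = ρ(Rj)·dR can be checked for a concrete density. The sketch below uses a standard Gaussian purely as a stand-in for some ρ(R), and compares ρ(x)·dR with the exact probability of the small bin [x, x + dR]:

```python
import math

def rho(x):
    """Standard normal density, a stand-in for some density rho(R)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def bin_probability(a, b):
    """Exact P(a <= R <= b) for the standard normal, via the error function."""
    Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2)))
    return Phi(b) - Phi(a)

x, dR = 0.5, 0.01
print(rho(x) * dR)                  # density times small interval ...
print(bin_probability(x, x + dR))   # ... nearly equals the bin probability
```

The two numbers agree to several decimal places and agree better as dR shrinks, which is exactly the sense in which ρ is 'defined via P'.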
To test whether a system is probabilistic, and to identify the probability of the corresponding probabilistic events, one needs to perform a series of experiments on identical systems (including object, environment, initiating / probing subsystem) – systems that lead to the probability distribution ρ(R) of the results R of the events 'defined' or 'generated' by the system. Upon this view, then, 'identical' is 'identical' insofar as a probability distribution emerges. It will also be clear that in an analogous way one could meaningfully define a ρ-system = ρ-{object, environment, initiating / probing subsystem}. We therefore believe that in the context of probability theory, one should in principle speak of ρ-systems, or ρ-identical systems 13 . C5. Does the frequentist model cover the classical interpretation of probability, traditionally used to tackle chance games, urn pulling and the like? Von Mises (1928/1981, p. 66 ff.) and many modern texts on probability calculus come to this conclusion. We will therefore only succinctly present what we believe to be the essential arguments. Notice, first, that our definitions can at least in principle be applied to such chance games. The initiating / 13 At this point one might be tempted to include a speculative note. In effect, there seems to exist such a strong link between probability (ρ) and the concept of (identical) 'system' – or 'thing' – that one may indeed wonder whether we conceive of something being a (simple) 'object' just because it shows frequency stabilization. We saw that composed objects, like a house and the tree in front, or a car and a cloud above, are very unlikely to exhibit frequency stabilization – and indeed we do not conceive of them as simple objects, but as composed ones. (Remember in this context that deterministic systems are just a type of probabilistic systems; obviously they are the first candidates for being associated with 'things'.)
initializing subsystem is most of the time a randomizing hand (tossing a coin or a die, pulling a card or a colored ball from an urn, etc.); the probing subsystem is often simply a table (plus a human eye). According to the famous expression of Pierre-Simon Laplace, for calculating the probability of a certain outcome, one should consider "events of the same kind" one is "equally undecided about" 14 ; within this set, the probability of an outcome is the ratio of "favorable cases to all possible cases". In all reference books on probability calculus, the latter definition is the basis of probability calculations in chance games. Notice now that dice, card decks, urns containing balls, roulette wheels, etc. are constructed so that they can be used to produce equiprobable (and mutually exclusive and discrete) basic outcomes, i.e. having all Pj equal, and given by 1/J (J = the number of basic 15 outcomes). Equiprobability is at any rate the assumption one starts from for making mathematical predictions, and for playing and betting; and indeed Laplace's "events of the same kind one is equally undecided about" would now be termed equiprobable events. Along the lines exposed above, a chance game can thus be seen to correspond to an (artificially constructed) ρ-system with ρ(R) = Σj (1/J) δ(R − Rj). It is at this point straightforward to show that in the special case of equiprobable and mutually exclusive events, the frequentist and classical interpretations lead to the same numerical values of probability. Indeed, within the frequentist model, P(Rj) → n(Rj)/n = (n/J)/n = 1/J (the numerator n(Rj) = n/J = the number of Rj-events among n (>> J) exclusive and equiprobable events each having a probability 1/J). Thus the result, 1/J, is equal to the prediction given by Laplace's formula (1 favorable case over J possible cases). Let us immediately note, however, that Laplace's formulation is not superfluous. It allows, in the special case of chance games, for calculation, i.e.
theoretical prediction: 'favorable cases' and 'possible cases' can conveniently be calculated by the mathematical branch of combinatorics – a theory of counting initiated by the fathers of probability theory. In sum, it appears that the classical interpretation is only applicable to a small subset of all probabilistic systems, namely chance games, in general artefacts having high degrees of symmetry leading to easily identifiable, discrete basic outcomes. For this subset Laplace's 14 The events thus fulfill the 'principle of indifference' introduced by Keynes. 15 The 'basic events' or 'basic outcomes' of coin tossing are: {heads, tails}, of die throwing: {1, 2,..., 6}, etc. The probability of 'non-basic', or rather composed, events (two consecutive heads in two throws, etc.) can be calculated by using probability calculus and combinatorics. interpretation can be seen as a formula, handy for calculation, rather than an interpretation of probability. As we often stated, after calculation the only way to verify the prediction is by testing for frequency stabilization. It is among others for this reason that we believe the latter property is the natural candidate for a basis of an overarching interpretation. C6. We have defined frequency stabilization and probability by means of the notion of 'convergence' of a certain ratio when n, the number of trials or repetitions of an experiment, grows. It is clear that this phrasing is close to the original definition by von Mises of probability as P(Rj) = lim n→∞ n(Rj)/n. However, our phrasing "P(Rj) = n(Rj)/n for a number of trials n that is large enough for the desired precision" avoids the notion of infinity; it may therefore avoid problems of mathematical rigor (see discussion in Appendix 1 and Section 2).
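Laplace's ratio and its frequentist verification can be juxtaposed in a few lines. In this sketch (a toy example of ours, using the composed event 'two consecutive heads in two tosses' mentioned in footnote 15) the classical answer is obtained by enumerating equiprobable basic outcomes, and the frequentist answer by measuring a relative frequency:

```python
import random
from itertools import product

# Laplace: favorable cases / possible cases, for the composed event
# "heads on both of two consecutive tosses of a fair coin".
possible = list(product("HT", repeat=2))      # 4 equiprobable basic outcomes
favorable = [o for o in possible if o == ("H", "H")]
p_laplace = len(favorable) / len(possible)    # 1/4

# Frequentist check: relative frequency of the same composed event.
rng = random.Random(1)
n = 200_000
hits = 0
for _ in range(n):
    toss1 = rng.random() < 0.5
    toss2 = rng.random() < 0.5
    hits += toss1 and toss2
p_freq = hits / n

print(f"Laplace: {p_laplace}, frequency: {p_freq:.4f}")
```

The two numbers agree to within statistical precision, but only the second one constitutes a verification in the sense of DEF1+2; the first is a calculation.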
Note that from a pragmatic point of view, our definition allows one to derive, if one uses it to experimentally determine a probability, numbers that are equal to those identified by von Mises' definition to any desired precision. At least operationally there is no difference between the definitions: they lead to the same results. (Note that some may find von Mises' definition more satisfactory in view of their metaphysical aspirations: one could say that 'the' probability is the limiting relative frequency 'at infinity' – an 'ideal' number that one can seldom calculate with infinite precision (except, for instance, if one can use combinatorics), but that one can always measure with a precision that grows when n grows.) These notes conclude the basic description of our model. Needless to say, they are a first introduction to the model; we are well aware that questions will remain unanswered. The only way to validate and strengthen a model is to show that it applies to non-controversial cases, and that it may solve problems left open by other models. Concerning non-controversial cases, the examples we have presented and will present below will allow the reader to verify that DEF1+2 work for these cases (more examples can be found in von Mises' works). It is perhaps expedient to verify these definitions explicitly in a slightly more subtle case of a natural probabilistic property, such as the velocity of gas molecules. For completeness, we have done this exercise in Appendix 2. It illustrates the essential role of the 'environment' in the case of natural probabilistic phenomena. Next, Beisbart has analyzed interesting examples of probabilities as they are generated by physical models (Beisbart and Hartmann 2011, pp. 143 – 167). We will also briefly compare these to our model in Appendix 2. Since the above model, condensed in DEF1+2, is a direct elaboration of von Mises' interpretation, it should be able to tackle the probabilistic systems that the latter can tackle.
However it is much simpler: we do not need the concept of collective, nor its calculus; our calculus is Kolmogorov's. 4. Applications. The aim of this Section is to show that the model of Section 3 allows one to interpret controversies and solve paradoxes of probability theory, as exposed in general works such as Fine (1973), von Plato (1994), Gillies (2000). We will investigate here problems R1 – R4. We emphasize that we can only provide first arguments. The first thing we should do in this article is to show that the model allows one to address a rather wide scope of problems. But we need some charity of the reader: we cannot go into full detail on problems that have been addressed for decades, nay centuries, by countless philosophers. R1. According to frequency interpretations it makes no sense to talk about the probability of an event that cannot be repeated. In condition (i) of the definition of p-randomness, repeatability is an explicit requirement. For instance, according to such models it makes no sense to talk about the 'probability' of a dictator starting a world war and the like: no experiments can be repeated here, still less experiments in well-defined conditions. Popper's propensity interpretation of probability was an attempt to accommodate such probabilities of single events – quantum events were his inspiration, like the disintegration of one atom (Gillies (2000) Ch. 6). But again, any quantum property of any single quantum system can only be attributed a probability if the property can be measured on an ensemble of such systems – as in the case of macroscopic systems. Measurement (verification) of a probability is always done on an ensemble of similar systems and events, whether quantum or classical. Therefore the added value of the propensity interpretation is not obvious to us. For a recent discussion and overview of the intense controversies related to 'single-case probabilities', see Beisbart and Hartmann (2011) pp. 6-7.
According to the above model, probability is not a property of an object on its own; it is a property of certain systems under certain repeatable human actions; or, more generally, of certain systems in dynamical evolution in well-defined and massively repeatable conditions. The following slogan captures a part of this idea: probability is a property of certain composed systems. Similarly, probability is not a property of an event simpliciter, but of an event in well-defined conditions, in particular initializing and probing conditions. An even more precise way to summarize these ideas would be, it seems, to attribute probability to experiments (van Fraassen 1980, Ch. 6), or experimental conditions (Popper 1957). The advantage of the term 'experiment' is that it only applies to 'composed events' for which the conditions are defined in a scientifically satisfying manner – exactly as we believe is the case for the use of the term probability in physics. Note that a scientific experiment is also, by definition, repeatable. R2. A classic paradox of probability theory is Bertrand's paradox. It goes as follows: "A chord is drawn randomly in a circle. What is the probability that it is longer than the side of the inscribed equilateral triangle?" Bertrand showed in 1888 that apparently three valid answers can be given. A little reflection shows however that the answer depends on exactly how the initial randomization is conceived. Indeed, in the real world there are many ways to 'randomly draw a chord', which may not be obvious upon first reading of the problem. One can for instance randomly choose two points (homogeneously distributed) on the circle by using a spinner 16 ; a procedure that leads to the probability 1/3, as can be measured and calculated. But other randomization procedures are possible, leading in general to different outcomes. In other words, Bertrand's problem is not well posed, a conclusion that seems now widely accepted (see e.g. Marinoff (1994)).
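The dependence of Bertrand's 'probability' on the randomization procedure is easy to exhibit by simulation. The sketch below (our own illustration; the three procedures are the classical ones discussed in the literature) implements three ways of 'randomly drawing a chord' in a circle of radius R and measures the frequency with which the chord is longer than the side √3·R of the inscribed triangle:

```python
import math
import random

rng = random.Random(2)
R, n = 1.0, 200_000
side = math.sqrt(3) * R          # side of the inscribed equilateral triangle

def chord_endpoints():
    """Two independent uniform points on the circle (the 'spinner' method)."""
    a, b = rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi)
    return 2 * R * math.sin(abs(a - b) / 2)

def chord_radial():
    """Uniform chord midpoint on a randomly oriented radius."""
    d = rng.uniform(0, R)
    return 2 * math.sqrt(R * R - d * d)

def chord_midpoint():
    """Uniform chord midpoint inside the disk (rejection sampling)."""
    while True:
        x, y = rng.uniform(-R, R), rng.uniform(-R, R)
        if x * x + y * y <= R * R:
            return 2 * math.sqrt(R * R - x * x - y * y)

for name, draw in [("endpoints", chord_endpoints),
                   ("radial", chord_radial),
                   ("midpoint", chord_midpoint)]:
    p = sum(draw() > side for _ in range(n)) / n
    print(f"{name:9s}: P(chord longer than side) ~ {p:.3f}")
```

The three frequencies stabilize near 1/3, 1/2 and 1/4 respectively: three well-defined initiating subsystems, three different p-systems, three different probabilities – and no contradiction once the initiating conditions are specified.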
Now, this conclusion is a direct consequence of our model, according to which probability is only defined for experiments in well-defined conditions, among others initiating conditions. More precisely (see DEF1+2), probability exists only for events which show frequency stabilization in experiments under precise conditions, initial, final and 'environmental'. For a situation that is experimentally ambiguous no unique probabilities can be defined. R3. In the following we show how our detailed model allows one to counter subjectivist threats – at least in the context of physics. It is both a very popular and very tempting idea that, somehow, "probability depends on our knowledge, or on the subject". This is a key ingredient of subjective interpretations of probability, or subjective Bayesianism, associating probability with strength of belief (see detailed expositions in e.g. Fine (1973), von Plato (1994), Gillies (2000)). When I throw a regular die, I consider the probability for any particular throw to show a six to be 1/6. But what about my friend Alice who is equipped with a sophisticated camera allowing her to capture an image of the die just before it comes to a halt? For her the probability seems to be 0 or 1. Or what about the following case: imagine Bob, waiting for a bus, only knowing that there is one bus passing per hour. He might think that the probability he will catch a bus in the next five minutes is low (say 5/60). Alice, who sits on a tower having a look-out over the whole city, might have a much better idea of the probability in this case (she might quickly calculate, based on her observations, that it is close to 1). Are these not patent examples of the idea that the same event can be given different probabilities, depending on the knowledge (or strength of belief) of the subject? 16 More precisely, by fixing a spinner at the center of the circle; a pointer on the spinner and two independent spins generate two such independent random points.
And is in that case probability not a measure of the strength of belief of the subject who attributes the probability? Examples like these are unlimited, but the above claims can be countered by referring to the boundary conditions that are explicitly part of our definition of probability. In the above model we claimed that probability is only defined – only exists – for repeatable experiments on well-defined composed systems, including among others well-defined conditions of observation. Doing a normal die throw, and observing the result on a table as is usually done, corresponds to a well-defined p-system, with a specific initiating system, probing system etc. In the example, our second observer Alice does not measure the same probability: she does not measure the probability of finding a six on a table after regular throwing and regular observing, but of finding a six after measurement of whether a six will land or not on the table. The latter is a very different, and indeed fully deterministic, experiment; at any rate, the observing subsystem (including a high-speed camera) is very different. A similar remark holds for the bus case; the measurement system (even if just the human eye in a given location) is part of the p-system; one cannot compare observer Alice and observer Bob if their means of observation are widely different. Thus according to our model Alice and Bob do not measure the probabilities of the same event or system: that is why they measure or predict different probabilities. It thus seems that DEF1+2 allow one to safeguard the objective interpretation of probability of physical events. A more modest claim is the following: it seems that cases for which the subjective interpretation might be helpful (in physics) might be re-integrated into the frequency interpretation along the above lines.
In sum, based on DEF1+2, if one includes in the experimental series all boundary conditions and especially the probing subsystem, the probability of a given outcome can be seen as an objective measure. True, 'objective' (and 'observer-independent' even more) is a tricky word here: it means 'identical (and mind-independent) for all observers performing identical experiments', so objective in the scientific sense – even if the observer, or rather the observing subsystem, is in a sense part of the system! (Remember that the observing subsystem is part of the p-system.) Stated more precisely, the probability of an event is an 'objective' property of the p-system that generates the event, in that independent observers can determine or measure the same probability for the same event. We emphasize that we believe that this is not in disagreement with the idea – bracketed here – that there is also a subjective element in what someone calls a 'probabilistic system'. Indeed, we saw in Section 2 that it is often extremely difficult to identify conditions of probabilistic behavior, for instance in the experiments of Couder and collaborators. What these experts consider to be a probabilistic system (a droplet in fine-tuned conditions of vibration, etc.) will for almost any other person appear to be an extremely complex, erratic, hermetic or why not deterministic system – not a probabilistic one. So in this sense it seems that the attribution of the term 'probabilistic system' does depend on the knowledge of the attributor! But we believe this is definitely not in contradiction with the idea that for a well-defined p-system probabilities can be defined (as in DEF1) in an objective or, if one prefers, inter-subjective manner – as argued above. Needless to say, the mentioned subjective element and its precise link with an objective definition deserve to be investigated in much more detail than we can do here (we have planned to do so elsewhere).
Finally, let us note that for physical systems it is not a very shocking claim to consider probability as an objective measure: after all, different physicists all over the world do measure the same probabilities if they do the same experiment. R4. At this point it seems an interesting and important step toward quantum mechanics can be made. Indeed, it seems our conceptual analysis has brought into focus a striking similarity between classical and quantum systems. It belongs to the key elements of the standard or Copenhagen interpretation that "the observer belongs to the quantum system", or at least that the measurement detectors belong to it, and influence it. It suffices here to cite Bohr in his famous debate with Einstein on quantum reality (Bohr 1935, see also Gauthier 1983 on the role of the 'observer' in quantum mechanics). The debate concerned the completeness of quantum mechanics, questioned by Einstein, Podolsky and Rosen in the case of two non-commuting observables of 'EPR electrons'. The key point of Bohr's counterargument is summarized in the following quote: "The procedure of measurement has an essential influence on the conditions on which the very definition of the physical quantities in question rests" (Bohr 1935, p. 1025). Bohr invokes here the 'quantum of action' linked to Planck's constant h: any measurement of any property needs the physical interaction of a detector / observing system with the test object under study, an interaction that carries minimally one quantum of energy. Bohr's phrase is arguably one of the most famous quotes of the Copenhagen interpretation (cf. e.g. our [YYY], [ZZZ]), corresponding to one of its most basic ingredients. However, even experts of the foundations of quantum theory such as John Stewart Bell, author of Bell's theorem, have complained that precisely this quoted phrase is incomprehensible 17 (cf. Bell (1981) p. 58). But it seems we are now armed to make Bohr's phrasing transparent.
According to our model Bohr's words can well be understood as meaning that the definition of quantum properties depends in a fundamental way on the measurement conditions. Quantum systems are probabilistic systems; and we have argued throughout this article that the numerical value of a probability depends in a fundamental way on the observing subsystem or conditions. In classical systems one has to look a bit more carefully for examples to exhibit this in-principle fact (we gave several examples), but in the quantum realm it apparently becomes basic. As an example, the probability that an x-polarized photon passes a y-polarizer obviously depends on x and y (the angles x and y are the parameters that describe the initiating and probing conditions). In conclusion, we believe Bohr's quote is understandable within our model, and that a careful inspection of the concept of probability shows that in all probabilistic systems, quantum or classical, the measurement system plays an essential role. We thus come to the perhaps surprising conclusion that, in this respect, quantum systems are not as exceptional as often thought. This claim can be much elaborated, as we have done elsewhere (Vervoort [2012]). Let us also note that other philosophers have come to a similar conclusion from a different angle (Szabó 1995, 2000, 2001, Rédei 2010). Thus results R1 – R4 are all derived from putting to the fore the experimental 'conditions' mentioned in DEF1+2. Now, that a precise definition of initial and final conditions or subsystems is necessary for defining probability may have – in hindsight – something obvious about it. But if that is true, our model would show nothing more than that analytically defining concepts, making implicit notions explicit, is an efficient method for addressing problems also in the philosophy of physics. 17 We could, for that matter, not make sense of the phrase before we did the present study on the notion of probability. 5. Conclusion.
We have argued here that maybe not all efforts have been made to save the frequency interpretation of probability as applied to physics. The philosophy literature abounds with criticisms of the more basic frequency interpretations (cf. e.g. Fine (1973), von Plato (1994), Gillies (2000)), to the point that von Mises' work might have unduly suffered from them. We have argued that frequency models à la von Mises, if reworked along the lines indicated here, might be a viable interpretation of probability in physics. To make our point we proposed an analytic definition, and showed that it has a certain unifying capacity and that it allows one to address problems of the philosophy of probability. We do not claim that all aspects of what 'probability' is are captured by the model. For instance, as exposed in Section 4 (R3), we are very sympathetic to the idea that there are also subjective elements related to the notion of probabilistic system – which we bracketed here, for the moment. We leave it to the reader to judge whether our model is still close to von Mises' theory or not: we retained von Mises' notion of frequency stabilization, but rejected the infinite collective and did not use his calculus. In some detail, we proposed a model in which it only makes sense to define probability for systems or events (or better, experiments) that exhibit frequency stabilization. This is according to our interpretation (DEF1+2) the essential characteristic of (physical) probabilistic systems; frequency stabilization ultimately provides the only empirical criterion to judge whether a system is probabilistic or not – in agreement with the practice of physics. Next we argued that it is useful, if not necessary, to partition probabilistic systems: if natural, into object and environment; if artificial, into object, initiating subsystem, and probing subsystem.
In particular, we claimed that including the probing subsystem in the probabilistic system allows one to define physical probability in an objective manner. Only if the probing is defined is the probability defined. By the same token we argued that there is an essential parallel between quantum and classical systems: in order to be able to define a probability for an event, in both cases one needs to specify the 'observer', or rather the probing subsystem. Including the initiating subsystem in the probabilistic system also allows one to solve paradoxes, such as Bertrand's paradox. We thus hope to spark a renewed debate on the frequency interpretation in physics. We would welcome questions aiming at challenging the model proposed here. Ultimately the most convincing way to challenge the model would be to expose typical cases in which physicists use 'probability' in a sense that is not captured by the model. Such cases might exist; maybe the model can be amended to include them. Acknowledgements. For detailed discussion of the issues presented here I would like to thank Yvon Gauthier, Yves Gingras, Henry E. Fischer, Andrei Khrennikov, and Jean-Pierre Marquis. I also thank Guido Bacciagaluppi for discussion of specific points. Funding was provided by a scholarship of the Philosophy Department of the University of Montreal. Appendix 1. Alleged mathematical problems of von Mises' theory. Let us have a look at some criticisms of von Mises' mathematical theory, which is based on the concept of 'collective'. One criticism, stating that von Mises' probability as a limit for n → ∞ is not always well defined (see e.g. Richter (1978)), was dealt with in the main text. It may further become transparent when one realizes that von Mises' theory is a physical theory, not a strictly mathematical one (see von Mises' clear arguments in his (1928/1981), e.g. p. 85). A tougher critique concerns the 'condition of randomness' that von Mises imposes on collectives.
According to him, the randomness of a collective can be characterized as 'invariance under place selection' (roughly, the limiting frequencies of a collective should remain invariant in subsequences obtained under 'place selections', certain functions defined on the original collective). But which and how many place selections are required? – von Mises' critics ask. An important result was obtained by A. Wald, who showed that for any denumerable set of functions performing the subsequence selection, there exist infinitely many collectives à la von Mises: a result that seems amply satisfying for proponents of von Mises' theory (see the reviews in von Plato (1994), Gillies (2000) and Khrennikov (2008) p. 25). Even the famous objection by J. Ville can be shown not to be a real problem (see Khrennikov (2008) p. 27 and also Ville's own favorable conclusion reproduced in von Plato (1994) p. 197). As a side remark, let us note that von Mises' attempts to mathematically describe randomness led to interesting developments in the mathematics of string complexity, as produced by no one else than his 'competitor' Kolmogorov, and mathematicians such as Martin-Löf. A general result of these developments can broadly be stated as follows: real physical randomness seems to defy full mathematical characterization (Khrennikov (2008) p. 28). In our view, this is not really surprising: if a series of experimental results can be generated by an algorithm, one would think it is not random by definition (it may of course look random). A real series of outcomes of coin tosses cannot be generated (predicted) by an algorithm; but – and this is close to magic – it does show frequency stabilization. This observation allows one to counter a curious critique by Fine (1973), who derives a theorem (p. 93) that is interpreted by the author as showing that frequency stabilization is nothing objective, but "the outcome of our approach to data".
However, closer inspection shows that Fine's argument only applies to complex mathematical sequences (of 0's and 1's) that can be generated by computer programs. But if a sequence can be generated by a computer algorithm, it is by definition not random, but deterministic, even if it looks random. The essential point of real series of coin tosses is that they cannot be predicted by numerical algorithms... and that they show frequency stabilization, as can easily be shown by experimentation. From this perspective, mathematical randomness or complexity, as discussed in relation to the complexity of number strings, has little or nothing to do with physical randomness 18 . To end our succinct review of critiques of von Mises' theory, the decisive point is the following. It is of course possible to withhold essential elements of von Mises' interpretation (I) of probability (as a frequency), even if his calculus (C) would have shortcomings in mathematical strictness – the present article only needs the interpretational part, as often emphasized. On top of that, the sometimes criticized randomness condition appears to be not necessary for the theory, in the sense that the equivalence with Kolmogorov's measure-theoretic approach can be proven without using that condition (see an explicit proof in Gillies (2000) p. 112). This is again, we believe, a clear indication of the fact that the essential feature of probabilistic randomness is frequency stabilization, not invariance under place selections. Appendix 2. Probability of the velocity of molecules, and other 'theoretical' probabilities. 18 Except if the number strings are generated by a real physical number generator, which is, as far as I know, never or almost never the case. To further clarify DEF1+2, it is instructive to verify these definitions in the case of a natural probabilistic property such as the velocity of gas molecules. This is a case to which von Mises devotes attention himself (von Mises 1928/1981 p.
20), since it is slightly subtle. First, recall that in (mathematical) physics a probabilistic 'event' formally corresponds to a stochastic physical variable assuming a certain value. Then, the probability that a gas molecule has a certain velocity 'exists' according to DEF1+2. Indeed, the system (the gas molecule in its environment), or more precisely the event consisting in the molecule having a certain velocity in the given environment, exhibits frequency stabilization: physics has shown that it is – at least in principle – possible to repeat velocity measurements on the same or on similar molecules, and that in long enough series of such experiments the relative frequencies of the measured velocities converge to fixed values. In actuality these values are for many types of gases given by the Maxwell-Boltzmann distribution, stating that the probability P(v) that the gas molecule has a velocity in an interval dv around v is: P(v) = (m / 2πkT)^(3/2) . 4πv² . exp(−mv² / 2kT) . dv, (2) where m is the mass of the molecule, k Boltzmann's constant and T the temperature of the environment. (This of course implies that the probability density ρ(v) = P(v)/dv.) This case is a bit subtle because the probabilities (2) have been known to be a good description of reality for more than a century, even if in practice they were not directly determined by velocity measurements (at least not until recently). So due to technical limitations in instrumentation, in von Mises' time it was practically impossible to measure velocities of individual molecules and verify frequency stabilization for these velocities. How then did physicists come to accept (2) as the right formula? First Maxwell and Boltzmann derived (2) based on theoretical arguments.
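Although in von Mises' day formula (2) could not be checked by direct velocity measurements, its frequentist reading is easy to illustrate in silico. In the toy sketch below (our own illustration, in reduced units with m = k = T = 1), each Cartesian velocity component is drawn from the appropriate Gaussian, and the relative frequencies of the resulting speeds are compared, bin by bin, with ρ(v)·dv from formula (2):

```python
import math
import random

rng = random.Random(3)
m = k = T = 1.0                    # reduced units, so kT/m = 1
n, dv = 400_000, 0.1               # number of 'molecules' and bin width

def mb_density(v):
    """Maxwell-Boltzmann probability density rho(v) of formula (2)."""
    return ((m / (2 * math.pi * k * T)) ** 1.5
            * 4 * math.pi * v * v
            * math.exp(-m * v * v / (2 * k * T)))

# Each velocity component is Gaussian with variance kT/m; the speed is
# the norm of the three components.
sigma = math.sqrt(k * T / m)
speeds = [math.sqrt(rng.gauss(0, sigma) ** 2
                    + rng.gauss(0, sigma) ** 2
                    + rng.gauss(0, sigma) ** 2)
          for _ in range(n)]

# Frequency stabilization: the relative frequency of 'speed in [v, v+dv)'
# approaches rho(v + dv/2) * dv as n grows.
for v in (0.5, 1.0, 1.5, 2.0):
    freq = sum(v <= s < v + dv for s in speeds) / n
    print(f"v = {v}: frequency {freq:.4f} vs rho*dv {mb_density(v + dv / 2) * dv:.4f}")
```

The 'environment' enters exactly as the text says: changing T changes ρ(v), and with it every measured frequency.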
But the essential point is that (2) could subsequently be tested in numerous manners in the following indirect way: if one assumes (2) as the correct probability for velocities, one can use physical theory to derive various predictions for other variables (say kinetic energy) that are functionally related to velocity (kinetic energy = mv²/2) – and it is these predictions which were tested so often that the starting hypothesis (i.e. (2)) got the status of 'scientifically generally accepted', and indeed one of the pillars of statistical physics. So when we say in DEF1 (i) "it is possible to repeat n identical experiments", we should maybe add the proviso "direct or indirect"; or better, where we say in DEF1 (ii) "the relative frequencies of the outcomes Rj (j = 1,..., J) of the corresponding experimental events [converge]", we might say "the relative frequencies of the outcomes Rj (j = 1,..., J) of the corresponding experimental events, or of functionally related events, [converge]". The above is a somewhat more detailed analysis of von Mises' words (1928/1981 p. 20): "It is true that nobody has yet tried to measure the actual velocities of all the single molecules in a gas, and to calculate in this way the relative frequencies with which the different values occur. Instead, the physicist makes certain theoretical assumptions concerning these frequencies (or, more generally, their limiting values), and tests experimentally certain consequences, derived on the basis of these assumptions. Although the possibility of a direct determination of the probability does not exist in this case, there is nevertheless no fundamental difference between it and the other [...] examples treated. The main point is that in this case too, all considerations are based on the existence of constant limiting values of relative frequencies [...]". Note that nowadays we likely have means to measure individual atomic speeds.
And if one would use these to directly verify (2) one would determine relative frequencies, again in agreement with DEF1+2. But for our concern the most relevant point illustrated by this example is that probability (2) heavily depends on the 'environment' – the essential parameter describing the environment being the temperature T of the gas. It seems that, mutatis mutandis, the same can be said for the interesting cases of 'theoretical' probabilities that are considered by Beisbart (Beisbart and Hartmann 2011, Ch. 6). Beisbart investigates the interpretation of probabilistic properties as they are predicted by physical models, notably in the case of Brownian motion and the spatial distribution of galaxies. Without entering in any detail, it seems DEF1+2 can well be understood to also apply to these cases, as the reader will easily verify. (For instance, a Brownian particle / system can be subject to repeated tests; what is more, on usual physical intuition, experimental verification of stochastic properties of Brownian particles, say dwell-time or displacement radius, could be done (and doubtlessly is done) via the determination of frequencies as in DEF1+2.) Somewhat more subtle are probabilistic parameters physicists use for describing spatial patterns of e.g. galaxies. If such a model describes the spatial distribution of galaxies in the probabilistic sense, via e.g. the probability P_R(N) that a given radius R contains N galaxies, then it should be possible to apply the recipe of DEF1+2. One can pick 'many' (say n) different regions with radius R on the map, and count within each of these regions the number of galaxies. Then the experimental ratio "(number of regions containing N galaxies) / n" should increasingly approximate P_R(N) if one increases n; if not, physicists would not accept the model as probabilistic.
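The counting recipe just described can be sketched numerically. The homogeneous Poisson model, the map size, and the galaxy density below are illustrative assumptions, not claims about actual galaxy surveys:

```python
import numpy as np
from math import exp, factorial, pi

rng = np.random.default_rng(1)
density, L, R = 2.0, 50.0, 1.0                 # galaxies per unit area, map side, region radius

# Scatter 'galaxies' over the map as a homogeneous Poisson point process.
n_gal = rng.poisson(density * L * L)
galaxies = rng.uniform(0.0, L, size=(n_gal, 2))

# Pick n random circular regions (kept fully inside the map) and count galaxies in each.
n = 5_000
centers = rng.uniform(R, L - R, size=(n, 2))
counts = np.array([np.sum(np.linalg.norm(galaxies - c, axis=1) < R) for c in centers])

# For this model, P_R(N) is the Poisson pmf with mean density * pi * R^2.
lam = density * pi * R**2
for N in range(10):
    p_emp = float(np.mean(counts == N))        # '(regions containing N galaxies) / n'
    p_model = exp(-lam) * lam**N / factorial(N)
    print(N, round(p_emp, 4), round(p_model, 4))
```

If the empirical ratios failed to track P_R(N) as n grows, the probabilistic model would be rejected, exactly as the text describes.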
Now here the same remark as we already made for meteorological 'probabilities' might apply: it does not necessarily make sense to consider the limit for infinite n (cf. footnote 8).

References

BELL, John S., 1981, 'Bertlmann's Socks and the Nature of Reality', Journal de Physique, 42, Complément C2, pp. C2-41 – C2-62.
BEISBART, Claus and HARTMANN, Stephan, 2011, Probabilities in Physics (Editors), Oxford University Press, Oxford.
BEISBART, Claus, 2011, 'Probabilistic modeling in physics', in Beisbart and Hartmann (Eds.), Probabilities in Physics, Oxford University Press, Oxford, pp. 143 – 167.
BOHR, Niels, 1935, 'Quantum Mechanics and Physical Reality', Nature, 136, pp. 1025-1026.
COUDER, Y., S. PROTIÈRE, E. FORT, A. BOUDAOUD, 2005, Nature 437, p. 208.
COUDER, Y., E. FORT, 2006, Physical Review Lett. 97, p. 154101.
BUNGE, Mario, 2006, Chasing Reality: Strife over Realism, Univ. Toronto Press, Toronto.
EDDI, A. et al., 2011, J. Fluid Mechanics, vol. 674, pp. 433–463.
FINE, Terrence, 1973, Theories of Probability, Academic Press, New York.
GAUTHIER, Yvon, 1983, 'Quantum mechanics and the local observer', Intl. J. Theor. Phys., 22 (12), pp. 1141-1152.
GILLIES, Donald, 2000, Philosophical Theories of Probability, Routledge, London.
GNEDENKO, Boris, 1967, Theory of Probability, Chelsea Publishing Co., New York.
KHRENNIKOV, Andrei, 2008, Interpretations of Probability, de Gruyter, Berlin.
KOLMOGOROV, Andrei, 1956, Foundations of the Theory of Probability (2nd ed.), Chelsea, New York.
MARINOFF, Louis, 1994, 'A resolution of Bertrand's paradox', Phil. of Science, 61, pp. 1 – 24.
POPPER, Karl, 1957, 'The Propensity Interpretation of the Calculus of Probability, and Quantum Mechanics', pp. 65-70, in S. Körner (ed.), Observation and Interpretation, Academic Press, New York.
RÉDEI, M., 2010, 'Kolmogorovian Censorship Hypothesis for general quantum probability theories', Manuscrito – Revista Internacional de Filosofia, 33, pp. 365-380.
RICHTER, Egon, 1978, Höhere Mathematik für den Praktiker, Johann Ambrosius Barth, Leipzig.
SZABÓ, L. E., 1995, 'Is quantum mechanics compatible with a deterministic universe? Two interpretations of quantum probabilities', Foundations of Physics Letters 8, p. 421.
SZABÓ, L. E., 2000, 'Attempt to resolve the EPR–Bell paradox via Reichenbachian concept of common cause', International Journal of Theoretical Physics 39, p. 901.
SZABÓ, L. E., 2001, 'Critical reflections on quantum probability theory', in M. Rédei, M. Stoeltzner (Eds.), John von Neumann and the Foundations of Quantum Physics, Vienna Circle Institute Yearbook, Vol. 8, Kluwer.
TIJMS, Henk, 2004, Understanding Probability: Chance Rules in Everyday Life, Cambridge University Press, Cambridge.
VAN FRAASSEN, Bas, 1980, The Scientific Image, Clarendon Press, Oxford.
VERVOORT, Louis, 2013, 'Bell's Theorem: Two Neglected Solutions', Foundations of Physics 43, pp. 769-791.
VERVOORT, Louis, 2012, 'The instrumentalist aspects of quantum mechanics stem from probability theory', American Institute of Physics Conf. Proc., FPP6 (Foundations of Probability and Physics 6, June 2011, Vaxjo, Sweden), Ed. M. D'Ariano et al., p. 348.
VON MISES, Richard, 1928, Probability, Statistics and Truth, 2nd revised English edition, Dover Publications, New York (1981).
VON MISES, Richard, 1964, Mathematical Theory of Probability and Statistics, Academic Press, New York.
VON PLATO, Jan, 1994, Creating Modern Probability, Cambridge University Press, Cambridge.
I recently had the pleasure to hang out with some Sci-Fi fans, people who are very much full of what the genre calls “Sensawunda”, or the sense of wonder. This happened at the Albacon Science Fiction Convention in Albany, NY, an event I attended to flog my new book, Cyberchild (available as a free eBook on my personal site at www.smartalix.com).
Do you still think technology is cool? Did you ever?
Like many of you, I got into the tech field because I was enthralled with the wide vistas of the future, the potentialities, places, and achievements that science and technology would eventually bring into reality (although I am still waiting for my flying car). A freewheeling discussion about the future with people who still passionately think that way reminded me how I got into this business.
When was the last time you were excited about electronics? I still get goose bumps when I see a cool new device or technology, and I know many of you still feel the same way. The floodgates of imagination fly open when creative people are confronted with challenging ideas that force their minds to travel down new and interesting paths.
That “sensawunda” ranges in intensity from open-mouthed awe to a simple sensation of “Gee, that's cool!” The key aspects are an appreciation of the new vistas provided by the revelation, recognition of the application areas affected, and pleasure at having discovered it.
Does a loss of wonder represent a reduction in creativity? If a person isn't excited by the new, does it mean they can't be an effective innovator?
Many would say that this fascination with technology is no longer as strong among our youth. Depending on whom you ask, this lack of “sensawunda” is either a fault of our success in technology, or the current issues of science and religion in our society.
Are our kids so spoiled by the widespread realization of many geek dreams that they have a hard time mustering emotion about a mature field of endeavor? Do we need to rekindle that feeling in order to save engineering as a field of development in the USA? How should we go about it?
Our society's current debate on religion and science stems from what I would call misapplied wonder, the attempt to inject religious spirituality into scientific issues. By confusing the nature of the mystery and de-emphasizing science in the process of understanding the world around us, are we preventing our children from being able to function in a rapidly developing technological world?
The possibility also exists that the loss of wonder is a perception based on popular-culture stereotypes. Maybe most of us still look at the new and interesting in the world around us and think “Cool!” but are just afraid of saying so, for fear of looking like neophytes.
How about you? Do you still think tech is cool? Do you feel that “sensawunda” is a requirement for creativity? Do you think that a person needs to have it to be a good design engineer? Drop me a line and let me know.
Job Description:
- Analysing consumer trends and information within different customer segments in order to identify appropriate marketing and sales strategy.
- Understanding brands target market perception, needs and demands and aligning brand value to deliver a strong brand experience for users.
- Conducting analysis and periodical reviews of the brand, competition, product category, customer and consumer trends to enhance the brand’s equity and marketplace performance.
- Developing growth targets, business objectives, brand strategies and marketing plans for the brand that align with the global business plan.
Requirements:
- Maximum 35 years old
- Bachelor Degree (all disciplines/majors considered)
- Minimum of 1 year working experience as Manager/Assistant Manager Marketing
- Able to operate Microsoft Office
- Leadership, communication and collaboration skills, creativity and strategic analysis abilities.
CONCORD, MA- Dorothy M. Ball, 88, slipped away into Heaven peacefully at home, surrounded by her loving children and her dogs, after a brief illness.
She was the wife of the late Robert A. Ball and is survived by her 3 children and their husbands: Cynthia R. Oulighan and her late husband, Steven J. Oulighan of Concord Ma; Susan and John Bland of Derry, NH; and Jonathan Ball and Robert Cook of Pembroke, NH. She was the loving Grandmother/Great Grandmother of Susan and Matt Dunham and their children Ethan, Josiah, Levi, Asher and Clover Mae of Wappingers Falls, NY; Robyn and Edwin Hamel and their children Alex, Max, Mia and Natally of Concord, Ma; Kristen and Anthony Pagano and their daughters Abigail, Layna and Paige of Mars, PA; Katy and Brian Wuoti and their sons Everett, Griffin, Rook and Rowan of Wilmington, VT; Megan and David Ash and their sons Oliver and Archer of Idaho Falls, Idaho, Trevor and Kristen Bland of Exeter, NH and Wesley and Katy Bland and their daughter, Addison Rose of Newmarket, NH. She is also survived by her sister, Faith Johnson and her 4 children of Georgetown, Ma.
Born on September 29, 1930, she was the daughter of the late Robert E. Haley and the late Edith Ferguson of Springfield, Ma. She graduated from American International College in 1952 and went on to work for a prestigious architectural firm in Boston, Ma. She then met her husband and started on the incredible journey of motherhood. As a stay-at-home mother, she found ways to help provide for the family by teaching private piano lessons and babysitting for others' children. She also introduced her love for animals to her children as they grew; her care for dogs, cats, and horses was an everyday occurrence and passion for her children as well, as she supported their endeavors by providing riding lessons and attending countless horse shows. She also owned her own successful hair salon in Hooksett, NH, as well as Manchester, NH, after she raised her children. She was a successful business owner, avid horsewoman and deeply involved with the "Fast Friends Greyhound Rescue" out of Swanzey, NH.
Her greatest passion in life was to faithfully love and serve her family well. She will be greatly missed by everyone her life touched throughout the years.
A service honoring and celebrating her life will be held on December 8th, 2018 at 1:00 PM at West Parish, Andover, in the beautiful stone chapel within the cemetery. This is located across from the main church building on 129 Reservation Rd. Andover, Ma.
In lieu of flowers, please send a donation to the Fast Friends Greyhound Rescue at 14 W. Swanzey Rd., Swanzey, NH 03446. The Cremation Society of New Hampshire is assisting the family with arrangements.
What is the meaning of life? This is one of those questions often asked in both in a serious and well-meaning way, as well as in a more humorous way. Rather than ask that exact question today, we are asking a more direct question: How fruitful is your life? This is one of those “Oh no, I have to look in the mirror!” moments, where we need to really figure out what we are about. Fortunately, the Bible gives us a lot of guidance as to what kind of fruit we should look for and what kind of results we should avoid. Stay with us as we get up close and personal with our lives.
If you do not have a password, please subscribe to our FREE Premium Content for the Full Edition version of CQ Rewind. The welcome message will contain your password, and a reminder will be sent each week when the CQ Rewind is available online for you to read, print, or download.
Renowned internationally for its cheesesteaks and home to the Declaration of Independence, Philadelphia has that small-town charm with all the perks of a big city. Plus, this sixth-largest metro area in the U.S. is only a few hours from New York City and Washington, D.C., so neighborhoods in Philly are just as diverse as you’d expect them to be.
To help out, we put together a list of the most googled neighborhoods in the city by other renters like you. Check them out below:
- Manayunk: Philly’s Vibrant and Charming Neighborhood. Manayunk is a charming, historical neighborhood in Northwest Philadelphia peppered with small boutiques and lively restaurants. Conveniently located just 15 minutes from downtown...
- East Falls, Philadelphia: Historic, Suburban Appeal Minutes from Center City. East Falls is a quiet little gem on the Schuylkill River. It’s the beginning of a much more residential, almost suburban, section of Philadelphia, which spreads out into neighboring Manayunk, Germantown and...
- Rittenhouse Square, Philadelphia: A Historic Neighborhood Layered With Elegance. For many Philadelphians, Rittenhouse Square is the heart of Center City. It’s where the city comes to gather in nicer weather, picnicking on the immaculately kept lawns while dogs...
- Philly’s Old City is Immersed in American History & Local Culture. Tucked into the tiny cobblestone streets of Old City lies Philadelphia’s most historic neighborhood. A part of the original city of Philadelphia, Old City is where some of the nation’s earliest and most...
- South Philadelphia Is One of the Most Diverse and Expansive Areas in Philly. No one calls the neighborhood that covers the bottom portion of the city “South Philadelphia”. Down here, it’s South Philly. This is the no-nonsense neighborhood, diverse and expansive. It’s where Rocky...
- Center City: The Original City of Philadelphia is Still its Hub. Center City is the bustling business hub of Philadelphia right in the center of the city. It’s the grid of streets that spans from the Schuylkill River on the west side to the Delaware River...
- Enjoy Some Victorian Charm in Philadelphia’s Chestnut Hill Neighborhood. Dubbed the “Garden District of Philadelphia”, Chestnut Hill is the old, beautiful neighborhood that somehow feels both comfortable and luxurious. Victorian houses with shaded, wrap-around porches...
- Northern Liberties is Philly’s Most Booming Neighborhood. When it comes to a hip, up-and-coming neighborhood, it does not get much trendier than Philadelphia’s Northern Liberties. With young professionals and families...
- Abundant in Entertainment and Great Food, Fishtown Is Philly’s Hottest Neighborhood. The Fishtown neighborhood has been named a lot of things: New York City’s 6th borough, the hottest neighborhood in the country, the hipster heaven. Whatever you call it, one thing is true, Fishtown...
- West Philly Is Philadelphia’s Most Eclectic Neighborhood. A diverse culture mashup, West Philadelphia is one of the most interesting and eclectic of Philly’s neighborhoods. It’s more an area than a neighborhood, a combination of many...
Philadelphia’s average rent reached $1,685 in April
Philadelphia’s average rent reached $1,685 in April, after a 1.5% increase since last year. A thriving economic and cultural hub with a 44% renter share, Philadelphia is a desirable place to live and work in, with apartment prices above the national average of $1,417.
The average rent for an apartment in Philadelphia rose slower than in other surrounding cities, such as Upper Darby ($946), where prices went up by 4.5%. Meanwhile, apartment rates in Bensalem increased by 8.2%, reaching a $1,372 average.
What it costs to rent in the largest renter hubs in the Philadelphia area
West Chester, the city with the largest renter share (63%) in the Philadelphia area, has a $1,637 average rent, after an 4.7% annual increase. The second largest renter hub, Norristown, has a 62% renter share, with its apartments going for $1,375 on average (up by 1.8% since last year). Exton and its 52% share of renters follows, with a $1,829 average apartment price.
Pottstown also has a 52% renter share, with apartments going for $1,241 after a 7.3% yearly increase. Last but not least, Bryn Mawr's 51% share of renters places it fifth among the area’s renter hubs. The city’s average rent reached $1,764 in April, going up by 3.8% in the past year.
The priciest and cheapest cities for renters in the Philadelphia area
Exton is the priciest city for renters in the Philadelphia area, with apartments renting for $1,829 per month. Malvern’s $1,807 rate is the second most expensive, while apartments in Bryn Mawr ($1,764) come in third.
For renters in search of budget-friendly apartments, Upper Darby's $946 average rent is the cheapest in the Philadelphia area, followed by Drexel Hill's $1,178 rate. Morrisville rentals are the third least pricey on the list, with a $1,209 average rent as of April.
Philadelphia apartment rents in popular ZIP codes for renters
Among Philadelphia's most popular renter ZIP codes, 19103 has a $2,206 average rent, above the city average ($1,685). At the same time, rentals in ZIP 19104 go for $2,551, while prices in ZIP 19144 circle around $1,302. Apartment rents in ZIP 19131 clock in at around $1,561, lower than the Philadelphia average, while rates in ZIP 19102 average at $2,238.
Methodology
RENTCafe.com is a nationwide apartment search website that enables renters to easily find apartments and houses for rent throughout the United States.
The data on average rents included in our reports comes directly from competitively-rented (market-rate) large-scale multifamily properties (50+ units in size), via telephone survey. The data is compiled and reported by our sister company Yardi Matrix, a business development and asset management tool for brokers, sponsors, banks and equity sources underwriting investments in the multifamily, office, industrial and self-storage sectors. Fully-affordable properties are not included in the survey and are not reported in rental rate averages. Local rent reports include only cities with a statistically-relevant stock of large-scale multifamily properties of 50+ units.
Look for houses for rent near me
Do you want a house with a backyard, a grill area, that offers you privacy and enough space for your family? RENTCafé offers you the possibility to look for a house for rent near you, to find the one that best suits your family. Browsing through our listings, you can filter by size, price, pet policy, check out the location, to find the best house in your area. Whether you are looking to move to a particular neighborhood, or a specific ZIP code, may it be for its safety, beautiful buildings, or quiet atmosphere, you can find the rental house that your whole family will enjoy.
Your next home is out there, whether you are looking for a house for rent near you or anywhere in the U.S. Easily browse houses for rent with availability updated daily, and let's find the one that is perfect for you!
Frequently Asked Questions
How much is the rent for a house in Philadelphia, PA?
The price range of a house for rent in Philadelphia, PA is between $900 and $2,995.
How many houses are available in Philadelphia, PA on RENTCafé?
There are 9 house rentals available on RENTCafé. Prices and availability in Philadelphia, PA were last updated on 23 Jun 2021.
What is the average size of a rental house in Philadelphia, PA?
The average Philadelphia, PA rental house size is 1,188 sq. ft.
What are the advantages of renting a house in Philadelphia, PA?
Renting a house in Philadelphia, PA comes with more space, both indoor and outdoor.
What are the downsides of renting a house in Philadelphia, PA?
Houses for rent in Philadelphia, PA are usually located in non-central areas. You might be interested in studio apartments, 1 bedroom apartments, 2 bedroom apartments or 3 bedroom apartments, or browse all RENTCafé apartments for rent in Philadelphia, PA.
Philadelphia, PA Demographics
- Total Population: 1,579,075
- Female: 747,479 / Male: 831,596
- Median Age: 34.4
Average Rent in Philadelphia, PA
- Philadelphia, PA Average Rental Price, May 2021: $1,688/mo
Philadelphia, PA Apartment Rent Ranges
- $701-$1,000: 12%
- $1,001-$1,500: 40%
- $1,501-$2,000: 24%
- > $2,000: 24%
Philadelphia, PA Rent Trends
|Average Rent|Mar / 2018|Jul / 2018|Nov / 2018|Mar / 2019|Jul / 2019|Nov / 2019|Mar / 2020|Jul / 2020|Nov / 2020|Mar / 2021|May / 2021|
|Philadelphia, PA|$1,525|$1,545|$1,572|$1,590|$1,624|$1,640|$1,664|$1,663|$1,643|$1,672|$1,688|
|National average|$1,314|$1,345|$1,354|$1,358|$1,390|$1,397|$1,400|$1,392|$1,396|$1,407|$1,428|
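The year-over-year percentages quoted in this report follow from the standard percent-change formula applied to trend values like those in the table above; the specific pairing of months below is an illustrative assumption:

```python
def yoy_change(new, old):
    """Year-over-year percent change, rounded to one decimal as in the report."""
    return round((new - old) / old * 100, 1)

# Example: March 2020 vs March 2021 values from the first data row above.
print(yoy_change(1672, 1664))   # 0.5 (percent)
```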
Philadelphia, PA Households
- Total Number of Households: 601,337
- Family: 325,916 / Non-family: 275,421
- Children: 163,078 / No Children: 438,259
- Average People Per Household: 2.55
- Median Household Income: $45,927
- Median Housing Costs Per Month: $994
Philadelphia, PA Education Statistics
- No High School: 5%
- Some High School: 43%
- Some College: 19%
- Associate Degree: 5%
- Bachelor Degree: 17%
- Graduate Degree: 11%
- Most Expensive Rental: 237 Krams Ave ($2,995)
- Least Expensive Rental: 5731 Commerce St ($900)
Renting a house in Philadelphia, PA
If you are interested in living in a serene house, with no noise restrictions and a backyard, you might want to search through rental homes. More and more people nowadays choose to rent houses because the advantages are unlimited. Social butterflies can throw parties without worrying about Mrs. Jenny calling the police and sunbathers can soak up some vitamin D in the intimacy of their lovely patio. With a large variety of options to choose from, you are sure to find a house out of the 6 homes for rent in Philadelphia, PA that will satisfy your need for comfort and privacy. Regardless of the size you’re searching for, we have your covered with plenty of duplexes or single family homes for rent. Because your money is important and you want to choose wisely, there are also cheap houses for rent in Philadelphia, PA starting from $985. If you’re open to spending a little more, there are many wonderful rental houses situated in some of the best residential areas in Philadelphia, PA.
Pick your house based on how much living space you require
If you want to rent but are not willing to cut down on your size requirements, rental homes are for you. While large apartments are also great, houses have their advantages. The average house in Philadelphia, PA provides plenty of living and storage space for you and your family, including pets. Whether you have little children or you dream of being able to accommodate your out-of-town guests, you can choose from houses up to 3 bedroom and 4 bedroom in size. If you and your loved one are newly-weds and need something no larger than a 1 bedroom or 2 bedroom home, select the number of beds from the top menu. The right home is waiting to be discovered.
# Rectangular function
The rectangular function (also known as the rectangle function, rect function, Pi function, Heaviside Pi function, gate function, unit pulse, or the normalized boxcar function) is defined as

rect(t) = 1 for |t| < 1/2, 1/2 for |t| = 1/2, and 0 for |t| > 1/2.

Alternative definitions of the function define rect(±1/2) to be 0, 1, or undefined.
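A direct NumPy transcription of the piecewise definition, using the rect(±1/2) = 1/2 convention:

```python
import numpy as np

def rect(t):
    """Rectangular function: 1 for |t| < 1/2, 1/2 at |t| = 1/2, 0 for |t| > 1/2."""
    a = np.abs(np.asarray(t, dtype=float))
    return np.where(a < 0.5, 1.0, np.where(a == 0.5, 0.5, 0.0))

# Boundary samples: the values at t = -1, -1/2, 0, 1/2, 1 are 0, 0.5, 1, 0.5, 0.
print(rect([-1.0, -0.5, 0.0, 0.5, 1.0]))
```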
## History
The rect function was introduced by Woodward as an ideal cutout operator, together with the sinc function as an ideal interpolation operator, and their counter operations, which are sampling (the comb operator) and replicating (the rep operator), respectively.
## Relation to the boxcar function
The rectangular function is a special case of the more general boxcar function:

rect((t − X)/Y) = u(t − X + Y/2) − u(t − X − Y/2),

where u is the Heaviside step function; the function is centered at X and has duration Y, from X − Y/2 to X + Y/2.
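A quick numerical check of this boxcar identity, using the u(0) = 1/2 convention so that the edge values match rect's:

```python
import numpy as np

def boxcar(t, X, Y):
    """u(t - X + Y/2) - u(t - X - Y/2), u being the Heaviside step with u(0) = 1/2."""
    u = lambda x: np.heaviside(x, 0.5)
    return u(t - X + Y / 2.0) - u(t - X - Y / 2.0)

def rect(t):
    a = np.abs(np.asarray(t, dtype=float))
    return np.where(a < 0.5, 1.0, np.where(a == 0.5, 0.5, 0.0))

t = np.linspace(-3.0, 3.0, 601)
X, Y = 1.0, 2.0   # a pulse centered at 1 with duration 2
assert np.allclose(boxcar(t, X, Y), rect((t - X) / Y))
print("boxcar(t, X, Y) matches rect((t - X)/Y) on the grid")
```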
## Fourier transform of the rectangular function
The unitary Fourier transforms of the rectangular function are

∫ rect(t) · e^(−i2πft) dt = sinc(f) = sin(πf)/(πf)

in ordinary frequency f, and

(1/√(2π)) ∫ rect(t) · e^(−iωt) dt = (1/√(2π)) · sinc(ω/2π)

in angular frequency ω, where sinc denotes the normalized sinc function. Note that as long as the definition of the pulse function is only motivated by its behavior in the time domain, there is no reason to believe that the oscillatory interpretation (i.e. the Fourier transform function) should be intuitive, or directly understood by humans. However, some aspects of the theoretical result may be understood intuitively, as finiteness in the time domain corresponds to an infinite frequency response. (Vice versa, a finite Fourier transform will correspond to an infinite time-domain response.)
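The ordinary-frequency transform can be checked by direct numerical integration over rect's support:

```python
import numpy as np

# rect equals 1 on (-1/2, 1/2) and 0 elsewhere, so the Fourier integral
# reduces to integrating e^{-2*pi*i*f*t} over [-1/2, 1/2].
dt = 1e-5
t = np.arange(-0.5 + dt / 2, 0.5, dt)           # midpoint grid over the support

def ft_rect(f):
    return float(np.sum(np.exp(-2j * np.pi * f * t)).real * dt)

for f in (0.25, 1.0, 2.5):
    print(f, ft_rect(f), np.sinc(f))             # np.sinc(f) = sin(pi f)/(pi f)
```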
## Relation to the triangular function
We can define the triangular function as the convolution of two rectangular functions:

tri(t) = (rect ∗ rect)(t) = ∫ rect(τ) · rect(t − τ) dτ.
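Discretizing the convolution integral reproduces the triangle tri(t) = max(0, 1 − |t|) up to grid error:

```python
import numpy as np

dt = 0.001
t = np.arange(-1.0, 1.0, dt)                       # grid containing the support of rect
r = np.where(np.abs(t) < 0.5, 1.0, np.where(np.abs(t) == 0.5, 0.5, 0.0))

full = np.convolve(r, r) * dt                      # discrete approximation of the integral
s = 2 * t[0] + dt * np.arange(len(full))           # abscissa of each convolution sample
tri = np.maximum(0.0, 1.0 - np.abs(s))             # expected triangular function

print(float(np.max(np.abs(full - tri))))           # small discretization error
```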
## Use in probability
Viewing the rectangular function as a probability density function, it is a special case of the continuous uniform distribution with a = −1/2, b = 1/2. The characteristic function is

φ(k) = sin(k/2) / (k/2),

and its moment-generating function is

M(t) = sinh(t/2) / (t/2),

where sinh(t) is the hyperbolic sine function.
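Both closed forms can be verified by integrating against the uniform density on (−1/2, 1/2); here the moment-generating function:

```python
import numpy as np

def mgf_numeric(tval, n=100_000):
    """E[e^{t X}] for X ~ Uniform(-1/2, 1/2), via a midpoint Riemann sum."""
    dx = 1.0 / n
    x = -0.5 + (np.arange(n) + 0.5) * dx
    return float(np.sum(np.exp(tval * x)) * dx)

def mgf_closed(tval):
    return float(np.sinh(tval / 2.0) / (tval / 2.0))

for tv in (0.5, 1.0, 3.0):
    print(tv, mgf_numeric(tv), mgf_closed(tv))
```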
## Rational approximation
The pulse function may also be expressed as a limit of a rational function:

rect(t) = lim_(n→∞, n∈ℤ) 1 / ((2t)^(2n) + 1).
### Demonstration of validity
First, we consider the case where |t| < 1/2. Notice that the term (2t)^(2n) is always positive for integer n. However, |2t| < 1 and hence (2t)^(2n) approaches zero for large n.

It follows that:

lim_(n→∞) 1 / ((2t)^(2n) + 1) = 1 / (0 + 1) = 1 for |t| < 1/2.
Second, we consider the case where |t| > 1/2. Notice that the term (2t)^(2n) is always positive for integer n. However, |2t| > 1 and hence (2t)^(2n) grows very large for large n.

It follows that:

lim_(n→∞) 1 / ((2t)^(2n) + 1) = 0 for |t| > 1/2.
Third, we consider the case where |t| = 1/2. We may simply substitute into our equation:

1 / ((2t)^(2n) + 1) = 1 / (1^(2n) + 1) = 1/2 for |t| = 1/2.
We see that it satisfies the definition of the pulse function.
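The convergence can be seen numerically by evaluating the rational approximant at a few sample points for growing n:

```python
def rect_approx(t, n):
    """Rational approximation 1 / ((2t)^(2n) + 1) of the rectangular function."""
    return 1.0 / ((2.0 * t) ** (2 * n) + 1.0)

# Inside the pulse the values approach 1, outside they approach 0,
# and at |t| = 1/2 the value is exactly 1/2 for every n.
for t in (0.0, 0.25, 0.5, 0.75):
    print(t, [round(rect_approx(t, n), 6) for n in (1, 5, 50)])
```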
The Identification Unit employs a civilian Identification Technician who processes crime scenes to document, collect and preserve physical evidence such as fingerprints, footwear impressions, blood and other biological fluids, as well as trace evidence such as hairs or fibers. The Identification Unit assists the Fire Department with photographic documentation and collection of evidence during arson investigations. The Identification Unit is also responsible for crime scene photography. The Newton Police Department utilizes digital photography, which lessens the need for costly film development. The Unit lends its expertise and assists other city departments with their photographic needs, and assists when needed with videotaping and broadcasting the city council meetings.
The Identification Unit maintains the evidence facility, logs in and secures evidence until final disposition of cases, then returns or purges evidence as necessary. The Identification Technician also testifies in court on the chain of custody for evidence, the collecting and processing of crime scenes, the processing of the evidence in the lab and the results of the examinations made by the Division of Criminal Investigation (DCI) Laboratory.
Each entry describes one UNI / NNI. Note that the table is indexed by 'atm' interfaces, rather than 'aal5' entities.
An arbitrary integer index that can be used to distinguish among multiple signalling entities for the same (physical) interface.
Indicates the mode in which the port is configured to run.
Describes the number of call/connection processing messages sent and received on each UNI or NNI.
Each entry contains signalling statistics for one UNI / NNI. Note that the table is indexed by ATM ports (ifType = 'atm') as opposed to AAL5 entities (ifType = 'aal5').
The number of CALL PROCEEDING messages transmitted on this interface.
The number of CALL PROCEEDING messages received on this interface.
The number of CONNECT messages transmitted on this interface.
The number of CONNECT messages received on this interface.
The number of CONNECT ACKNOWLEDGE messages transmitted on this interface.
The number of CONNECT ACKNOWLEDGE messages received on this interface.
The number of SETUP messages transmitted on this interface.
The number of SETUP messages received on this interface.
The number of RELEASE messages transmitted on this interface.
The number of RELEASE messages received on this interface.
The number of RELEASE COMPLETE messages transmitted on this interface.
The number of RELEASE COMPLETE messages received on this interface.
The number of RESTART messages transmitted on this interface.
The number of RESTART messages received on this interface.
The number of RESTART ACKNOWLEDGE messages transmitted on this interface.
The number of RESTART ACKNOWLEDGE messages received on this interface.
The number of STATUS messages transmitted on this interface.
The number of STATUS messages received on this interface.
The number of STATUS ENQUIRY messages transmitted on this interface.
The number of STATUS ENQUIRY messages received on this interface.
The number of ADD PARTY messages transmitted on this interface.
The number of ADD PARTY messages received on this interface.
The number of ADD PARTY ACKNOWLEDGE messages transmitted on this interface.
The number of ADD PARTY ACKNOWLEDGE messages received on this interface.
The number of ADD PARTY REJECT messages transmitted on this interface.
The number of ADD PARTY REJECT messages received on this interface.
The number of DROP PARTY messages transmitted on this interface.
The number of DROP PARTY messages received on this interface.
The number of DROP PARTY ACKNOWLEDGE messages transmitted on this interface.
The number of DROP PARTY ACKNOWLEDGE messages received on this interface.
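The per-message counters above are Counter32-style objects, so a monitoring script would typically poll them twice and work with deltas, allowing for 32-bit wraparound. A sketch follows; the key names are illustrative stand-ins for the 'SETUP messages transmitted' and 'CONNECT messages received' objects, not actual MIB identifiers:

```python
def delta(curr, prev):
    """Difference of two Counter32-style readings, allowing one 32-bit wraparound."""
    d = curr - prev
    return d if d >= 0 else d + 2**32

def setup_completion_ratio(poll_prev, poll_curr):
    """Fraction of transmitted SETUPs answered by received CONNECTs between two polls."""
    setups = delta(poll_curr["setupTx"], poll_prev["setupTx"])
    connects = delta(poll_curr["connectRx"], poll_prev["connectRx"])
    return connects / setups if setups else None

prev = {"setupTx": 4_294_967_290, "connectRx": 1_200}
curr = {"setupTx": 14, "connectRx": 1_218}      # setupTx wrapped past 2^32
print(setup_completion_ratio(prev, curr))        # 18 CONNECTs for 20 SETUPs -> 0.9
```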
Contains additional Q2931 signalling statistics.
The total number of connections established so far.
The number of currently-active connections.
The most recently transmitted cause code.
The most recently transmitted diagnostic code.
The most recently received cause code.
The most recently received diagnostic code.
Allows network managers to examine and configure the timers used for Q.2931 call processing.
Each entry contains timers for one UNI / NNI. Note that the table is indexed by ATM ports (ifType = 'atm') as opposed to AAL5 entities (ifType = 'aal5'). Sorry about the cryptic timer names, but that's what the ATM Forum calls these timers in the UNI V3.0 Specification.
SETUP message timeout, in seconds.
RELEASE message timeout, in seconds.
SAAL disconnection timeout, in seconds.
CALL PROCEEDING timeout, in seconds.
CONNECT timeout, in seconds. This timer is only used on the User side of a UNI.
RESTART reply timeout, in seconds. This should be less than the value for timer 't316'.
STATUS ENQUIRY timeout, in seconds.
DROP PARTY timeout, in seconds.
ADD PARTY timeout, in seconds.
The number of outgoing SDUs which were discarded.
The number of incoming PDUs which could not be received due to errors.
The number of transmission errors for outgoing PDUs.
The number of incoming PDUs which were discarded.
The number of outgoing PDUs which were discarded.
The number of BGN (Request Initialization) messages transmitted over the interface.
The number of BGN (Request Initialization) messages received over the interface.
The number of BGAK (Request Acknowledgement) messages transmitted over the interface.
The number of BGAK (Request Acknowledgement) messages received over the interface.
The number of END (Disconnect Command) messages transmitted over the interface.
The number of END (Disconnect Command) messages received over the interface.
The number of ENDAK (Disconnect Acknowledgement) messages transmitted over the interface.
The number of ENDAK (Disconnect Acknowledgement) messages received over the interface.
The number of RS (Resynchronization Command) messages transmitted over the interface.
The number of RS (Resynchronization Command) messages received over the interface.
The number of RSAK (Resynchronization Acknowledgement) messages transmitted over the interface.
The number of RSAK (Resynchronization Acknowledgement) messages received over the interface.
The number of BGREJ (Connection Reject) messages transmitted over the interface.
The number of BGREJ (Connection Reject) messages received over the interface.
The number of SD (Sequenced Connection-Mode Data) messages transmitted over the interface.
The number of SD (Sequenced Connection-Mode Data) messages received over the interface.
The number of SDP (Sequenced Connection-Mode Data with request for Receive State Information) messages transmitted over the interface. This object only applies to UNI 3.0.
The number of SDP (Sequenced Connection-Mode Data with request for Receive State Information) messages received over the interface. This object only applies to UNI 3.0.
The number of ER (Recovery Command) messages transmitted over the interface. This object is not applicable to UNI 3.0.
The number of ER (Recovery Command) messages received over the interface. This object is not applicable to UNI 3.0.
The number of POLL (Transmitter State Information with request for Receive State Information) messages transmitted over the interface.
The number of POLL (Transmitter State Information with request for Receive State Information) messages received over the interface.
The number of STAT (Solicited Receiver State Information) messages transmitted over the interface.
The number of STAT (Solicited Receiver State Information) messages received over the interface.
The number of USTAT (Unsolicited Receiver State Info) messages transmitted over the interface.
The number of USTAT (Unsolicited Receiver State Info) messages received over the interface.
The number of UD (Unnumbered User Data) messages transmitted over the interface.
The number of UD (Unnumbered User Data) messages received over the interface.
The number of MD (Unnumbered Management Data) messages transmitted over the interface.
The number of MD (Unnumbered Management Data) messages received over the interface.
The number of ERAK (Recovery Acknowledgement) messages transmitted over the interface. This object is not applicable to UNI 3.0.
The number of ERAK (Recovery Acknowledgement) messages received over the interface. This object is not applicable to UNI 3.0. | http://www.circitor.fr/Mibs/Html/D/DEC-ATM-SIGNALLING-MIB.php |
In this week’s issue of the journal Science, MIT researchers report that just four fairly vague pieces of information — the dates and locations of four purchases — are enough to identify 90 percent of the people in a data set recording three months of credit-card transactions by 1.1 million users.
When the researchers also considered coarse-grained information about the prices of purchases, just three data points were enough to identify an even larger percentage of people in the data set. That means that someone with copies of just three of your recent receipts — or one receipt, one Instagram photo of you having coffee with friends, and one tweet about the phone you just bought — would have a 94 percent chance of extracting your credit card records from those of a million other people. This is true, the researchers say, even in cases where no one in the data set is identified by name, address, credit card number, or anything else that we typically think of as personal information.
The paper comes roughly two years after an earlier analysis of mobile-phone records that yielded very similar results.
De Montjoye is joined on the new paper by his advisor, Alex “Sandy” Pentland, the Toshiba Professor of Media Arts and Science; Vivek Singh, a former postdoc in Pentland’s group who is now an assistant professor at Rutgers University; and Laura Radaelli, a postdoc at Tel Aviv University.
The data set the researchers analyzed included the names and locations of the shops at which purchases took place, the days on which they took place, and the purchase amounts. Purchases made with the same credit card were all tagged with the same random identification number.
For each identification number — each customer in the data set — the researchers selected purchases at random, then determined how many other customers’ purchase histories contained the same data points. In separate analyses, the researchers varied the number of data points per customer from two to five. Without price information, two data points were still sufficient to identify more than 40 percent of the people in the data set. At the other extreme, five points with price information was enough to identify almost everyone.
The researchers characterized price very coarsely, treating all prices that fell within a few fixed ranges as functionally equivalent. So, for instance, a purchase of $20 at some store on some day in one person’s history would count as a match with a purchase of $40 by someone else at the same store on the same day, since both purchases fell within the range $16 to $49. This was an attempt to represent the uncertainty of someone estimating purchase amounts from secondary information, such as an Instagram photo of the food on someone’s plate. The limits of each range were based on a fixed percentage of its median value: The range $16 to $49, for instance, is the median value of purchases ($32.50) plus or minus 50 percent, rounded to the nearest dollar.
Preserving anonymity in large data sets is a pressing concern because public and private entities alike see aggregated digital data as a source of novel insights. Retailers studying anonymized credit-card histories could certainly learn something about the tastes of their customers, but economists might also learn something about the relationship of, say, inflation or consumer spending to other economic factors.
So the MIT researchers also examined the effects of coarsening the data — intentionally making it less precise, in the hope of preserving privacy while still enabling useful analysis. That makes identifying individuals more difficult, but not at a very encouraging rate. Even if the data set characterized each purchase as having taken place sometime in the span of a week at one of 150 stores in the same general areas, four purchases (with 50 percent uncertainty about price) would still be enough to identify more than 70 percent of users.
In separate work, de Montjoye, Pentland, and other members of Pentland’s group have begun developing a system that would enable people to store the data generated by their mobile devices on secure servers of their own choosing. Researchers looking for useful patterns in aggregate data would send queries through the system, which would return only the pertinent data — such as, for instance, the average amount spent on gasoline during different time periods.
Doesn't sound easy to get access to 1 million users' three transactions each without already knowing who they are. As I understand this explanation, you have to have information about all the users, not just the ones you want to identify.
What records are they searching? Who has access to the records they are searching?
I think what they are saying is that with three or four credit card purchases, the receipts themselves, even without the credit card number on them, could reveal the user when compared against a database of 1 million users, even if the data was coarse. Coarse data means the data lies in a range, like 1PM to 3PM rather than 2:39PM, or $1 to $5 instead of, say, $2.50. Time is more likely to be coarse because the credit card company might keep records down to the last second while the receipt was printed only to the minute.
They are searching through a databank of credit card transactions. So, the next time I find a misplaced billfold with three receipts, I can hire a hacker that has the credit card transactions of 1.1 million people to identify the owner. Big deal.
In this video, Robert Lee Hotz of The Wall Street Journal discusses how MIT researchers have found that individuals in an anonymous data set can be identified using just a few pieces of information about their shopping habits. “We're really being shadowed by our credit cards,” Lee Hotz explains.
A new MIT study examining anonymous credit card data shows that individuals can be identified using just a few pieces of information, writes Wall Street Journal reporter Robert Lee Hotz. “This touches on the fundamental limit of anonymizing data,” explains Yves-Alexandre de Montjoye.
In a piece for Scientific American, Larry Greenemeier writes about new MIT research showing how easy it is to identify individuals in anonymous data sets. “We have to think harder and reform how we approach data protection and go beyond anonymity, which is very difficult to achieve given the trail of information we all leave digitally,” says Yves-Alexandre de Montjoye.
Seth Borenstein and Jack Gillum write for the Associated Press about how MIT researchers have found individuals can be identified by examining a few purchases from anonymous credit card data. "We are showing that the privacy we are told that we have isn't real," explains Pentland.
MIT researchers have found that anonymous individuals in a data set can be identified using a few pieces of information, reports Natasha Singer for The New York Times. “We ought to rethink and reformulate the way we think about data protection,” explains Yves-Alexandre de Montjoye. | https://news.mit.edu/2015/identify-from-credit-card-metadata-0129 |
Abstract: A difficult result to interpret in Computerized Adaptive Tests (CATs) occurs when an ability estimate initially drops and then ascends continuously until the test ends, suggesting that the true ability may be higher than implied by the final estimate. This study explains why this asymmetry occurs and shows that early mistakes by high ability students can lead to considerable underestimation, even in tests with 45 items. The opposite response pattern, where low-ability students start with lucky guesses, leads to much less bias. The authors show that using Barton and Lord’s four-parameter model (4PM) and a less informative prior can lower bias and root mean square error (RMSE) for high-ability students with a poor start, as the CAT algorithm ascends more quickly after initial underperformance. Results also show that the 4PM slightly outperforms a CAT in which less discriminating items are initially used. The practical implications and relevance for psychological measurement more generally are discussed.
Q:
Solution of : SPOJ : DSUBSEQ
My code is giving wrong answer for the following question. I have tried many test cases but can't find the error.
http://www.spoj.com/problems/DSUBSEQ/
# include<bits/stdc++.h>
# define lli long long int
# define pb push_back
# define loop(i,a,b) for(int i=a;i<b;i++)
# define loopl(i,a,b) for(lli i=a;i<b;i++)
# define MAXN 1000
#define INF 1000000000
# define mod 1000000007
using namespace std;
int main()
{
int t;
cin>>t;
while(t--)
{
string s;
cin>>s;
int n=s.length();
int visited[26],dp[n+1];
memset(visited,-1,26);
loop(i,0,26) visited[i]=-1;
dp[0]=1;
loop(i,1,n+1)
{
dp[i]=2*dp[i-1];
if(visited[s[i-1]-'A']!=-1) dp[i]=(dp[i]%mod-dp[visited[s[i-1]-'A']]%mod + mod)%mod ;
visited[s[i-1]-'A'] = i-1 ;
}
cout<<dp[n]<<endl;
}
}
Sample test Cases:
INPUT:
3
AAA
ABCDEFG
CODECRAFT
OUTPUT:
4
128
496
But I am getting: wrong answer #1
The language is C++. I am new to dynamic programming.
A:
The input contains lowercase characters as well.
It would be a good idea to convert every character to uppercase.
Update: Another thing you are doing wrong is not taking the mod in this line: dp[i]=2*dp[i-1]
Imagine a test with 26 distinct letters: the answer will be 2^26, and for longer inputs the repeated doubling will certainly overflow.
Make it dp[i]=(2*dp[i-1])%mod
Photo by Kristin Teig
One of my favorite crops from my husband’s farm are his fall carrots. I prefer the fall carrots because as the weather gets colder the vegetable sugars concentrate, yielding the sweetest carrots of the year. We use lots of carrots in this recipe, so that it’s more about the carrots than anything else. For the best flavor, serve it cold the day after you make it. You can substitute chickpeas for the black-eyed peas, if you prefer to use another type of bean.
- Yield
- Serves 6–8
Ingredients
- 1 cup dried black-eyed peas
- 1/2 teaspoon kosher salt, plus more to taste
- Pinch of saffron
- 2 tablespoons warm tap water
- 2 tablespoons extra-virgin olive oil
- 1 small onion, finely chopped
- 4 cups thin carrot rounds
- 1 red bell pepper, stemmed, seeded, and finely chopped
- 1 1/2 teaspoons Persian Spice Mix
- 1 teaspoon finely chopped garlic
- Freshly ground black pepper
- 1 teaspoon freshly squeezed lemon juice
- 2 teaspoons honey
- 2 tablespoons chopped fresh flat-leaf parsley leaves
Preparation
- In a medium-sized saucepan, combine the black-eyed peas and 4 cups water and bring to a boil over high heat. Lower the heat and simmer until tender, about 25 minutes. Off the heat, add the salt and set aside for 10 minutes while the peas absorb some of the salt. Drain.
- Meanwhile, put the saffron and water in a small bowl. Mix and set aside for at least 15 minutes and as long as overnight.
- Place a large sauté pan over medium-low heat and add the olive oil, onion, carrots, and red bell pepper, stirring until the peppers start to soften and the onion is translucent, about 10 minutes. Add the saffron (and its blooming water), Persian Spice Mix, garlic, and black-eyed peas. Season the carrots and peas with salt and pepper to taste and stew until the carrots and peppers are tender and the black-eyed peas are glazed.
- Remove from the heat and set aside to cool. Stir in the lemon juice, honey, and parsley. Serve cold or at room temperature. | https://www.epicurious.com/recipes/food/views/persian-style-carrots-and-black-eyed-peas |
Tri☆Stars とらい☆すたーず [Clock Up team Lilac]
Foreword: Fighting heroines are inspiring, but I also wanted to pursue possible connections with Zwei Worter.
Synopsis:
Luinstellar High School is a prestigious high school located in a magic world. One day, Kazuma, a student of the school, accidentally finds three statues of the Magi. He takes off jewels embedded into a chest of the statues. At this very moment.... The jewels go inside his body and three girls appear from the statues. Also, space-time suddenly strains and some shadows come out. One of them attacks Kazuma and he loses his consciousness.... He gets back his consciousness a while later and finds three girls lying nearby. According to them, they need the jewels to keep alive, so Kazuma decides to beat up the shadows and get back the jewels for them....
Youtube:https://www.youtube.com/playlist?list=PLs4Gp5VU4Fv8C_f1kq2IJFkWcXvpoXekn
Game type: Fantasy action story
Character Design rating: 7/10
Protagonist rating: 5/10
Story rating: 7/10
Game quality: 8/10
Overall rating: 7/10
That's a severe split in scores, which basically tells you that there are both good and rough points. But first, let's see if there is something in common with Zwei Worter, the previous work of another Clock Up side team. The style is very similar. There are also 10 episodes, with a small preview before each episode. The BGM and art are very similar (the CGs are surprisingly beautiful, including HCGs). The engine seems to be the same. We're also shown a huge red robot-like enemy, the antagonist Diablo, in the beginning. But the similarities end there.
Tri☆Stars does not want to repeat Zwei Worter's mistake of being compared to Evangelion, so the game tries to create a unique and original setting that would not be confused with anything else. The main character studies at a seemingly normal school in the year 2008, but he stumbles upon and re-animates the statues of three girls, who turn out to be magical girls sealed in stone in the year 996. And it's not the A.D. chronology, but a timeline starting with some ancient conflict between the magical church and the devil. The three awakened girls are actually the Tri☆Stars, and all of them become main heroines, along with another magical girl and a childhood friend. The heroines are varied personality-wise: clingy Myulius, tsundere Victoria, serious and smart Liselle, genki childhood friend Tifaria, pragmatic and mysterious Zero. We soon realize that we have attracted the attention of the seven servants of Diablo (First, Second, Third, Fourth, Fifth, Sixth, Seventh, naturally), and they become our main antagonists for the time being. The story is rather interesting, as it's unclear why the principal uses the powers of Diablo, what power the protagonist acquires, and what the deal is with the eternal confrontation. By the end of the game things escalate rather direly, so the final conflict is interesting to watch.
Of course, there are a lot of "buts". The story is the same for all the heroines. The battles are poorly done. Everyday life scenes dominate from the start almost to the end, and these scenes are sleepy, with bad tempo and not nearly enough good humor. The mid-game is rather painful because of that. The protagonist actually acquires serious power, but he becomes useful only in the childhood friend Tifaria's route, since the other heroines are magical girls and can fight on their own just fine. He's pretty weak in the other routes; plus, he hooks up with Myulius in the common route, so going for the other girls already feels like cheating. There are two writers, and some heroine routes feel somewhat underdeveloped. For example, Liselle gets pretty much zero bonding before her H event, and Zero has only one H scene in her route. The routes of the three Tri☆Stars feel similar, and only the additional heroines Tifaria and Zero actually add some twists.
So, I still see enough merit in this game to call it a masterpiece, since I even called Zwei Worter one. But its main problem is the failed mid-game and the lack of twists or surprising moves. Some feel the game would benefit from violation scenes and bad endings, since it's a magical girl story; others feel sorry that one of the antagonists with a particularly well-developed personality, Fifth The Tseito, can't be captured. Tri☆Stars plays it safe, and even if the individual traits are good, there aren't outstanding elements to remember the game by after the passage of time.
Reveal Similar Circumstances
When you have to give a customer bad news, consider revealing your own similar circumstances. A claims adjuster attending one of my seminars explained that she used to have a hard time telling some clients whose cars had been stolen and damaged that they still had to pay the deductible – even though they weren’t to blame. Ironically, she’s had much better impact since the same thing happened to her aunt. Now, she shares her personal experience with clients and they feel less like they’ve been singled out and victimized by the decision. Sometimes it’s true that misery loves company.
Today’s chuckle:
Laugh and the world laughs with you. Cry, and the world looks sheepish and suddenly remembers it had other plans.
Was this helpful? You’ll find more Tips on How to Break Bad News or subscribe to receive a new Business Building Tip every two weeks and stay up to date on all our upcoming events. | https://jeffmowatt.com/customer-service-training-tips/reveal-similar-circumstances/ |
Zinnia Growth Stages: Understanding Zinnia Life Cycle
If you’re looking to understand the different growth stages of a zinnia, you’ve come to the right place! In this article, we’ll walk you through the life cycle of a zinnia, from seed to seedling to flower to mature plant. We’ll discuss the different growth stages, what they mean for your zinnia, and how you can help it reach its full potential. So whether you’re a beginner or an experienced grower, read on to learn more about the zinnia growth stages!
Table of Contents
Plant Growth Stage 1: Germination
Germination, which occurs when the seed’s outer shell splits open and roots form, marks the beginning of the first stage of a zinnia’s lifecycle, just like it does for other plants. Zinnia seeds need 7 to 10 days to germinate after being planted.
If you plan to plant zinnia seeds in an outdoor garden, we advise clearing the area of weeds and other plants that might hinder the plant’s development. Additionally, a day with little to no wind is ideal for sowing the seeds.
To guarantee that at least one zinnia plant germinates, plant 1–2 seeds. Keep the soil moist but not soggy. Add peat moss to the area where you’ve planted seeds to preserve moisture levels. Plant seeds when the temperature is between 69-76°F to help ensure germination.
Plant Growth Stage 2: Growth
As the zinnia sprouts appear, keep the soil damp. Apply just enough water twice daily to rehydrate dry soil, preferably in the morning and late in the day. Since frequent watering can wash away the soil’s nutrients, fertilizer addition may be a good idea. For the sprout to grow, it also needs at least 6 to 8 hours of direct sunlight.
Once your zinnias have grown to a height of at least 6 inches in a planter or flowerpot, move them to your garden or a bigger container. By doing this, the roots can mature fully. After being moved, the sprouting zinnia should be placed in a sunny area with at least 6 hours of direct sunlight.
Plant Growth Stage 3: Flowering
At this point, your zinnias should be in full bloom. To prolong the blooming process and encourage sturdier petals, fertilize with a liquid fertilizer solution once every two weeks at half strength. Watering can become less frequent as the flowers wilt. However, they still need to be watered twice daily when the soil is dry but don’t let the pots or planters sit in water for more than an hour to prevent root rot.
On the stem of each zinnia plant, there will be one flower. The flower can be one to seven inches in diameter and come in various colors, including white, yellow, orange, pink, red, lilac, purple, and multicolored. Zinnia blooms can have a single row of pointed, curvy, or twisty petals or multiple rows of these petals. From the middle of summer until the first freeze, zinnias bloom. After the first freeze, zinnias will not bloom again until next summer.
Zinnias are well-known for their capacity to continue blooming for an exceptionally long period. Estimates of how long the blooms will remain on a zinnia vary, just as experiences differ from plant to plant and gardener to gardener.
Depending on conditions, the blooms of a zinnia can last anywhere from about sixty days to approximately five months.
Plant Growth Stage 4: Pollination
Hummingbirds and butterflies can pollinate zinnias because they are attracted to the blooms. Since it has both reproductive organs (stamens and pistils), this plant can also self-pollinate. Once the zinnias have finished blooming, you can trim the spent flowers off the stem. This encourages the plant to produce flowers again, resulting in a bushier plant. Remove any weeds or other debris from around the zinnia plant before planting. Then, prepare a potting soil mix by adding 2 parts garden soil and 1 part sand. Give the zinnia a good watering before placing it in the pot.
Plant Growth Stage 5: Retrieving Seeds
When the zinnia flowers have finished blooming, the petals will start to fall off. Once all the petals have fallen off, detach one or two pistils from the stem and place them in a jar of water. Within a week, you should see seeds forming on these pistils. Then let them dry for another 2-3 weeks until they become brown and brittle. A bloom harvested too soon may contain immature seeds that won’t sprout. When the flower turns dark brown and brittle to the touch, it has dried out completely. Before storing seeds for future planting, spread them out on a mesh screen to ensure they are dry on both sides.
Select images represented by thumbnails as listed below. Thumbnails are indicative only and not to be reproduced. The images are formatted as 300 dpi RGB JPEG files.
To download files please click on the download button.
Each image will be downloaded with the full caption and conditions of use guidelines. By downloading these images, you agree to accept these conditions.
Courtesy Nahodka Arts & Pace London.
Elena Kovylina, Still from the Video of her Performance Equality, 2009.
Collage with gold leaf, 11.5 x 13 cm, courtesy of the artist and David Zwirner Gallery.
2 images, Nature Does The Easiest Thing, 2011 (detail).
Plaster powder, powder paint, cellophane, sellotape, paint, polythene, thread, 210 x 1580 x 500 cm. Installation view, Before the law (group exhibition), Museum Ludwig, Cologne, 2011.
3 images, 2014. Ink on paper, 44 x 35 cm.
Copyright and courtesy: Marlene Dumas.
Commissioned by Manifesta 10 Saint Petersburg, Russia.
Drawing of So I Only Want To Love Yours, 2014.
Felt pen on paper, 21 x 29.7 cm (A4 size).
Courtesy the artist. Commissioned by Manifesta 10 Saint Petersburg, Russia.
HD video, color, sound, 60:00:53 min. Video still.
2 images of Marc Camille Chaimowicz, Photomontage no. 1 for The Hermitage, Room 305, London, winter 2013. Photomontage on paper, 21 x 29.7 cm. Courtesy the artist and Cabinet, London.
, 200 x 130 cm. Museum Ludwig (ML 01116, Cologne).
Sketch model for Surplus Leisure, 2014.
Amsterdam gel medium matt glue, 210 x 1,000 cm.
It is so. 2014. Oil on Canvas. 65 x 82 in.
Pack of 3 images of Tragic Love. 1993. Black & white photo, hand-colored. | http://manifesta10.org/ru/media/media-centre/ |
---
abstract: 'We define and study generalizations of simplicial volume over arbitrary seminormed rings with a focus on $p$-adic simplicial volumes. We investigate the dependence on the prime and establish homology bounds in terms of $p$-adic simplicial volumes. As the main examples we compute the weightless and $p$-adic simplicial volumes of surfaces. This gives a way to calculate classical simplicial volume of surfaces without hyperbolic straightening and shows that surfaces satisfy mod $p$ and $p$-adic approximation of simplicial volume.'
address:
- 'Fakultät für Mathematik und Informatik, FernUniversität in Hagen, 58084 Hagen, Germany'
- 'Fakultät für Mathematik, Universität Regensburg, 93040 Regensburg, Germany'
author:
- Steffen Kionke
- Clara Löh
bibliography:
- 'literatur.bib'
title: 'A note on $p$-adic simplicial volumes'
---
Introduction
============
The simplicial volume of an oriented compact connected manifold is the $\ell^1$-seminorm of the fundamental class in singular homology with ${\mathbb{R}}$-coefficients, which encodes topological information related to the Riemannian volume [@Gromov-vbc]. A number of variations of simplicial volume such as the *integral simplicial volume* or *weightless simplicial volume* over finite fields proved to be useful in Betti number, rank gradient, and torsion homology estimates [@FFM; @sauervolgrowth; @loeh-odd; @loehrg; @loehfp].
In the present article, we will focus on $p$-adic simplicial volumes. The basic setup is as follows: If $M$ is an oriented compact connected manifold and $(R, |\cdot|)$ is a seminormed ring (see Section \[sec:simplicial-volume-def\]), then the *simplicial volume of $M$ with $R$-coefficients* is defined as the infimum $$\sv{M ,\partial M}_R
:= \inf\biggl\{ \sum_{j=1}^k |a_j|
\biggm| \sum_{j=1}^k a_j \cdot \sigma_j \in Z(M,\partial M;R)
\biggr\}
\in {\mathbb{R}}_{\geq 0}$$ over the “$\ell^1$-norms” of all relative fundamental cycles of $M$. For ${\mathbb{R}}$ or ${\mathbb{Z}}$ with the ordinary absolute value one obtains the classical simplicial volume $\sv{M}$ and the integral simplicial volume $\sv{M}_{\mathbb{Z}}$. For a ring $R$ with the trivial seminorm this gives rise to the weightless simplicial volume $\sv{M}_{(R)}$ [@loehfp]. For other seminormed rings one obtains new, unexplored invariants. We prove a number of fundamental results that describe how these simplicial volumes for different seminormed rings are related.
Using the ring ${\mathbb{Z}}_p$ of $p$-adic integers or the field ${\mathbb{Q}}_p$ of $p$-adic numbers with the $p$-adic absolute value as underlying seminormed rings leads to $p$-adic simplicial volumes. The long-term hope is that $\sv{M,\partial M}_{{\mathbb{Z}}_p}$ and $\sv{M,\partial M}_{{\mathbb{Q}}_p}$ might contain refined information on $p$-torsion in the homology of $M$.
Dependence on the prime
-----------------------
Extending the corresponding result for ${\mathbb{F}}_p$-simplicial volumes [@loehfp Theorem 1.2], we show that the $p$-adic simplicial volumes contain new information only for a finite number of primes:
\[thm:equality-for-aa-primes\] Let $M$ be an oriented compact connected manifold. Then, for almost all primes $p$, $$\sv{M,\partial M}_{({\mathbb{F}}_p)} = \sv{M,\partial M}_{{\mathbb{Z}}_p} = \sv{M,\partial M}_{{\mathbb{Q}}_p} = \sv{M,\partial M}_{({\mathbb{Q}})}.$$
We prove this in Section \[subsec:aaprimes\]. While the inequalities $\sv{M,\partial M}_{({\mathbb{F}}_p)} \leq
\sv{M,\partial M}_{{\mathbb{Z}}_p}$ and $\sv{M,\partial M}_{{\mathbb{Q}}_p} \leq
\sv{M,\partial M}_{{\mathbb{Z}}_p}$ hold for all prime numbers $p$ (see Corollary \[cor:sandwichp\]) and $\sv{\args}_{({\mathbb{F}}_p)}$ and $\sv{\args}_{{\mathbb{Z}}_p}$ exhibit similar behaviour, we are currently not aware of a single example where one of these inequalities is strict.
Homology estimates
------------------
The $p$-adic simplicial volumes provide upper bounds for the Betti numbers. The following result is given in Corollary \[cor:bettiZp\] and Corollary \[cor:bettiQp\].
Let $M$ be an oriented compact connected manifold. Then for all primes $p$ and all $n \in {\mathbb{N}}$ the Betti numbers satisfy $$\begin{aligned}
b_n(M;{\mathbb{F}}_p) &\leq \sv{M,\partial M}_{{\mathbb{Z}}_p},\\
b_n(M;{\mathbb{Q}}) &\leq \sv{M,\partial M}_{{\mathbb{Q}}_p}.\end{aligned}$$
The first inequality uses the well-known Poincaré duality argument [@lueckl2 Example 14.28][@loehfp Proposition 2.6]. The second inequality is based on the additional torsion estimate (Proposition \[prop:rel-Betti-bounds-2\]) $$\dim_{{\mathbb{F}}_p} p^{m} H_n(M;{\mathbb{Z}}/p^{m+1}{\mathbb{Z}}) \leq p^m \sv{ p^{m} \cdot [M,\partial M]}_{{\mathbb{Z}}_p}$$ and the observation that the right hand side converges to $\sv{M,
\partial M}_{{\mathbb{Q}}_p}$ as $m$ tends to infinity. This suggests the following question:
For which oriented compact connected manifolds $M$ and which primes $p$ is there a strict inequality $$\sv{M,\partial M}_{{\mathbb{Q}}_p} < \sv{M,\partial M}_{{\mathbb{Z}}_p}
\qor
\sv{M,\partial M}_{({\mathbb{F}}_p)} < \sv{M,\partial M}_{{\mathbb{Z}}_p}?$$ Is a strict inequality related to $p$-torsion in the homology of $M$?
Surfaces and approximation
--------------------------
In Section \[sec:examples\], we compute the $p$-adic simplicial volumes for some examples. In particular, we compute the weightless simplicial volume of surfaces. Let $\Sigma_g$ be the oriented closed connected surface of genus $g$. For $b \geq
1$ we write $\Sigma_{g,b}$ to denote the surface of genus $g$ with $b$ boundary components.
\[thm:surfaces\] Let $R$ be an integral domain, equipped with the trivial absolute value. Then
1. $\sv{\Sigma_g}_{(R)} = 4g -2$ for all $g \in {\mathbb{N}}_{\geq 1}$ and
2. $\sv{\Sigma_{0,1}}_{(R)} = 1$ and $\sv{\Sigma_{g,b}}_{(R)} = 3b + 4g - 4$ for all $g \in {\mathbb{N}}$ and all $b \in {\mathbb{N}}_{\geq 1}$ with $(g,b) \neq (0,1)$.
Using this result we compute the ${\mathbb{Z}}_p$-simplicial volume of all surfaces (Corollary \[cor:surfaces-p-adic\]) and give a new way to compute the classical simplicial volume of surfaces that avoids the use of hyperbolic straightening (Remark \[rem:new-computation\]). Moreover, Theorem \[thm:surfaces\] also shows that surfaces satisfy mod $p$ and $p$-adic approximation of simplicial volume (Remark \[rem:stable\]).
Non-values
----------
Recent results show that classical simplicial volumes are right computable [@heuerloeh_trans]; in particular, this allows one to give explicit examples of real numbers that cannot occur as the simplicial volume of a manifold. Based on the same methods, we establish that the $p$-adic simplicial volumes $\sv{M}_{{\mathbb{Z}}_p}$ and $\sv{M}_{{\mathbb{Q}}_p}$ are also right computable; see Proposition \[prop:rightcomp\].
Acknowledgements {#acknowledgements .unnumbered}
----------------
C.L. was supported by the CRC 1085 *Higher Invariants* (Universität Regensburg, funded by the DFG).
Foundations {#sec:foundations}
===========
We introduce simplicial volumes with coefficients in rings with a submultiplicative seminorm, e.g., an absolute value. In particular, we obtain $p$-adic versions of simplicial volume. Moreover, we establish some basic inheritance and comparison properties of such simplicial volumes similar to those already known in the classical or weightless case.
Simplicial volume {#sec:simplicial-volume-def}
-----------------
Let $R$ be a commutative ring with unit. A *seminorm* on $R$ is a function $|\cdot|\colon R \to {\mathbb{R}}_{\geq 0}$ with $|1| = 1$ that is submultiplicative $$|st| \leq |s| |t|$$ and satisfies the triangle inequality $$|s+t| \leq |s|+ |t|$$ for all $s,t \in R$. If the seminorm is multiplicative, it is called an *absolute value*. A *seminormed ring* is a pair $(R,
|\cdot|)$, consisting of a commutative ring $R$ with unit and a seminorm $|\cdot|$ on $R$. A seminormed ring $(R, |\cdot|)$ is a *normed ring* if $|\cdot|$ is an absolute value.
Seminormed rings give rise to a notion of simplicial volume:
Let $(R, |\cdot|)$ be a seminormed ring. Let $M$ be an oriented compact connected $d$-manifold, and let $Z(M, \partial M;R)
\subset C_d(M;R)$ be the set of all relative singular $R$-fundamental cycles of $(M,\partial M)$. Then the *simplicial volume of $M$ with $R$-coefficients* is defined as $$\sv{M ,\partial M}_R
:= \inf\biggl\{ \sum_{j=1}^k |a_j|
\biggm| \sum_{j=1}^k a_j \cdot \sigma_j \in Z(M,\partial M;R)
\biggr\}
\in {\mathbb{R}}_{\geq 0}.$$
The usual norm on ${\mathbb{R}}$ is an absolute value on ${\mathbb{R}}$. The corresponding simplicial volume $\sv{\args} := \sv{\args}_{\mathbb{R}}$ is the classical simplicial volume, introduced by Gromov [@munkholm; @Gromov-vbc].
Similarly, the usual norm on ${\mathbb{Z}}$ is an absolute value on ${\mathbb{Z}}$. The corresponding simplicial volume is denoted by $\sv{\args}_{\mathbb{Z}}$, the so-called *integral simplicial volume*. Integral simplicial volume admits lower bounds in terms of Betti numbers [@lueckl2 Example 14.28], logarithmic homology torsion [@sauervolgrowth], and the rank gradient of the fundamental group [@loehrg].
\[ex:weightless\] Every non-trivial commutative unital ring $R$ can be equipped with the *trivial* seminorm $$\begin{aligned}
|\cdot|_{\text{triv}} \colon R & \longrightarrow {\mathbb{R}}_{\geq 0} \\
x & \longmapsto 1- \delta_{x,0}.
\end{aligned}$$ The simplicial volume corresponding to the trivial seminorm will be called *weightless* and will be denoted by $\sv{M ,\partial
M}_{(R)}$. The weightless simplicial volume over finite fields $R =
{\mathbb{F}}_p$ has been studied before [@loehfp].
Let $p$ be a prime number. The ring of $p$-adic integers is denoted by ${\mathbb{Z}}_p$ and the field of $p$-adic numbers by ${\mathbb{Q}}_p$. The usual $p$-adic absolute value gives rise to two (possibly distinct) notions of $p$-adic simplicial volume, namely $\sv{\args}_{{\mathbb{Z}}_p}$ and $\sv{\args}_{{\mathbb{Q}}_p}$.
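The $p$-adic absolute value is easy to experiment with numerically. The following standalone Python sketch (an illustration only, not part of the mathematical development) computes $|x|_p$ on ${\mathbb{Q}}$ and checks multiplicativity and the ultrametric inequality on a sample:

```python
from fractions import Fraction

def abs_p(x, p):
    """p-adic absolute value |x|_p = p^(-v_p(x)) on Q; |0|_p = 0."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:   # count factors of p in the numerator
        num //= p
        v += 1
    while den % p == 0:   # factors of p in the denominator count negatively
        den //= p
        v -= 1
    return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v), 1)

p = 5
s, t = Fraction(50), Fraction(3, 25)
# multiplicativity: |.|_p is an absolute value, not merely a seminorm
assert abs_p(s * t, p) == abs_p(s, p) * abs_p(t, p)
# ultrametric inequality, which is stronger than the triangle inequality
assert abs_p(s + t, p) <= max(abs_p(s, p), abs_p(t, p))
```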
\[lem:quotient-seminorm\] Let $(R,|\cdot|)$ be a seminormed ring and let $I \subset R$ be a proper ideal. We define $$|s+I|_{R/I} := \inf\{ |s+i | \mid i \in I \}$$ for all $s + I \in R/I$. Then $|\cdot|_{R/I}$ is a seminorm on the quotient ring $R/I$.
We verify the submultiplicativity. Indeed, for all $s,t\in R$ one obtains $$\begin{aligned}
|(s+I)(t+I)|_{R/I} &= \inf\{ |st+i| \mid i\in I\} \leq \inf\{ |(s+i)(t+j)| \mid i,j\in I\} \\
&\leq\inf\{ |(s+i)| |(t+j)| \mid i,j\in I\} \leq |s+I|_{R/I} |t+I|_{R/I}.\end{aligned}$$ The triangle inequality follows from a similar argument.
There are two distinct seminorms on the rings ${\mathbb{Z}}/p^m{\mathbb{Z}}$ that will play a role in this article. Using Lemma \[lem:quotient-seminorm\], the $p$-adic absolute value $|\cdot|_p$ on ${\mathbb{Z}}_p$ induces a seminorm on ${\mathbb{Z}}/p^m{\mathbb{Z}}$. We will also denote this seminorm by $|\cdot|_p$; for $x \neq 0$ it is given by $$|x|_p = p^{-r}$$ if $x$ lies in $p^r {\mathbb{Z}}/p^m{\mathbb{Z}}$ but not in $p^{r+1}{\mathbb{Z}}/p^m{\mathbb{Z}}$. The corresponding simplicial volume will be denoted by $\|\cdot\|_{{\mathbb{Z}}/p^m{\mathbb{Z}}}$.
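As a concrete illustration (again not part of the mathematical development), the induced seminorm on ${\mathbb{Z}}/p^m{\mathbb{Z}}$ and the trivial seminorm can be compared numerically; the following Python sketch verifies the inequalities $|\cdot|_p \leq |\cdot|_{\text{triv}} \leq p^{m-1}|\cdot|_p$ used in Corollary \[cor:sandwichp\] by exhaustive check for small $p$, $m$:

```python
from fractions import Fraction

def abs_p_mod(x, p, m):
    """Seminorm on Z/p^m Z induced by |.|_p: p^(-r) if p^r exactly
    divides x (for x nonzero in Z/p^m Z), and 0 for x = 0."""
    x %= p**m
    if x == 0:
        return Fraction(0)
    r = 0
    while x % p == 0:
        x //= p
        r += 1
    return Fraction(1, p**r)

def abs_triv_mod(x, p, m):
    """Trivial seminorm on Z/p^m Z."""
    return 0 if x % p**m == 0 else 1

p, m = 3, 4
for x in range(p**m):
    a = abs_p_mod(x, p, m)
    t = abs_triv_mod(x, p, m)
    # |.|_p <= |.|_triv <= p^(m-1) |.|_p
    assert a <= t <= p**(m - 1) * a
```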
As in Example \[ex:weightless\] the rings ${\mathbb{Z}}/p^m{\mathbb{Z}}$ can be equipped with the trivial seminorm, which induces the weightless simplicial volume $\|
\cdot\|_{({\mathbb{Z}}/p^m{\mathbb{Z}})}$.
Changing the seminorm
---------------------
Let $R$ be a commutative ring with unit. We denote by $\mathcal{S}(R)$ the set of all seminorms on $R$. We equip the space of all seminorms with the topology of pointwise convergence, for which a basis of open neighbourhoods of a seminorm $\alpha$ is given by the sets $$U_{\text{pw}}(\varepsilon, F)
:= \bigl\{ \beta \in \mathcal{S}(R)
\bigm| \fa{x \in F} |\beta(x)-\alpha(x)| < \varepsilon
\bigr\},$$ where $\varepsilon \in {\mathbb{R}}_{>0}$ and $F$ is a finite subset of $R$. For every oriented compact connected manifold $M$, the simplicial volume defines a function $$\Vert M, \partial M \Vert_{\bullet} \colon \mathcal{S}(R) \to {\mathbb{R}}_{\geq 0}.$$
Let $M$ be an oriented compact connected manifold. The simplicial volume function $\Vert M, \partial M \Vert_{\bullet}$ is upper semi-continuous with respect to the topology of pointwise convergence.
Let $\alpha \in \mathcal{S}(R)$ and let $\varepsilon > 0$. Take a relative fundamental cycle $c = \sum_{j = 1}^k a_j \sigma_j \in Z(M,\partial M;R)$ with $|c|_{\alpha,1} < \Vert M, \partial M \Vert_{\alpha} + \varepsilon/2$. Now every seminorm $\beta \in U_{\text{pw}}(\varepsilon/2k, \{a_1, \dots, a_k\})$ satisfies $$\Vert M, \partial M \Vert_{\beta} \leq |c|_{\beta,1} \leq |c|_{\alpha,1} + \varepsilon/2 < \Vert M, \partial M \Vert_{\alpha} + \varepsilon$$ and we deduce that the simplicial volume is upper semi-continuous with respect to the topology of pointwise convergence.
Changing the coefficients
-------------------------
\[prop:monotonicity\] Let $(R, |\cdot|_R)$ and $(S,|\cdot|_S)$ be seminormed rings and let $f \colon R \longrightarrow S$ be a unital ring homomorphism that for some $\lambda > 0$ satisfies $|f(x)|_S \leq \lambda \cdot |x|_R$ for all $x \in R$. Then $$\sv {M,\partial M}_S \leq \lambda \cdot \sv{M,\partial M}_R$$ holds for all oriented compact connected manifolds $M$.
As $f$ is unital, the chain map $C_*(\operatorname{Id}_M;f) \colon
C_*(M;R) \longrightarrow C_*(M;S)$ induced by $f$ maps relative $R$-fundamental cycles to relative $S$-fundamental cycles of $(M,\partial M)$. Moreover, $\|C_*(\operatorname{Id}_M;f)\|\leq \lambda$ (whence $\|H_*(\operatorname{Id}_M;f)\|\leq \lambda$), because $\|f\| \leq \lambda$. Therefore, $$\sv{M,\partial M}_S
= \bigl\| [M,\partial M]_S \bigr\|_S
= \bigl\| H_*(\operatorname{Id}_M;f)([M,\partial M]_R)\bigr\|_S
\leq \lambda \sv{M,\partial M}_R,$$ as claimed.
\[cor:universalZ\] Let $R$ be a seminormed ring and let $M$ be an oriented compact connected manifold. Then $$\sv{M,\partial M}_R \leq \sv{M,\partial M}_{{\mathbb{Z}}}.$$
It follows from the triangle inequality that the canonical unital ring homomorphism ${\mathbb{Z}}\longrightarrow R$ satisfies the hypotheses of Proposition \[prop:monotonicity\] with the factor $\lambda=1$.
\[cor:sandwichp\] Let $p$ be a prime number and let $M$ be an oriented compact connected manifold. Then the following inequalities hold for all $m \geq 1$:
1. $ \sv {M,\partial M}_{({\mathbb{F}}_p)}
\leq \sv{M,\partial M}_{{\mathbb{Z}}/p^m{\mathbb{Z}}}
\leq \sv {M,\partial M}_{{\mathbb{Z}}_p}
\leq \sv {M,\partial M}_{{\mathbb{Z}}}$
2. $ \sv {M,\partial M}_{{\mathbb{Q}}_p}
\leq \sv {M,\partial M}_{{\mathbb{Z}}_p}
\leq \sv {M,\partial M}_{{\mathbb{Z}}}$.
3. $ \sv{M,\partial M}_{{\mathbb{Z}}/p^{m}{\mathbb{Z}}}
\leq \sv{M,\partial M}_{{\mathbb{Z}}/p^{m+1}{\mathbb{Z}}}$.
4. \[it:modpm-sandwich\] $ \sv{M,\partial M}_{{\mathbb{Z}}/p^{m}{\mathbb{Z}}}
\leq \sv{M,\partial M}_{({\mathbb{Z}}/p^{m}{\mathbb{Z}})}
\leq p^{m-1}\sv{M, \partial M}_{{\mathbb{Z}}/p^m{\mathbb{Z}}} $
For the first three assertions we only need to apply Proposition \[prop:monotonicity\] with $\lambda = 1$ (and Corollary \[cor:universalZ\]) to the canonical projections $${\mathbb{Z}}_p \longrightarrow {\mathbb{Z}}/p^{m+1}{\mathbb{Z}}\longrightarrow {\mathbb{Z}}/p^m{\mathbb{Z}}\longrightarrow {\mathbb{F}}_p$$ and to the canonical inclusion ${\mathbb{Z}}_p \longrightarrow {\mathbb{Q}}_p$. The last assertion follows from Proposition \[prop:monotonicity\] and the inequalities $$|\cdot|_p \leq |\cdot|_{\text{triv}} \leq p^{m-1}|\cdot|_p$$ between the $p$-adic and the trivial seminorm on the ring ${\mathbb{Z}}/p^m{\mathbb{Z}}$.
\[prop:density\] Let $(R,|\cdot|_R)$ and $(S,|\cdot|_S)$ be seminormed rings and let $f\colon R \longrightarrow S$ be a unital ring homomorphism with $|\cdot|_S$-dense image. If $|f(x)|_S = |x|_R$ for all $x \in R$, then $$\sv{M,\partial M}_R = \sv{M,\partial M}_S$$ holds for all oriented compact connected manifolds $M$.
The inequality $\sv{M,\partial M}_S \leq \sv{M,\partial M}_R$ follows from Proposition \[prop:monotonicity\]. The converse inequality works as in the classical case, by approximating boundaries of chains [@mschmidt Lemma 2.9].
We briefly recall the argument. Let $d = \dim(M)$ and let $\varepsilon >0$. Take a fundamental cycle $c \in Z(M,\partial
M;S)$ with $|c|_{1,S} \leq \sv{M, \partial M}_S + \varepsilon$ and some fundamental cycle $c' \in Z(M,\partial M;R)$. Then $b = c -
C_d(\operatorname{Id}_M;f)(c')$ is a boundary, i.e., $b = \partial_{d+1}(x)$ for some $x \in C_{d+1}(M;S)$. As the image of $f$ is dense, we find an element $x' \in C_{d+1}(M;R)$ that satisfies $$|C_{d+1}(\operatorname{Id}_M;f)(x')-x|_{1,S} \leq \varepsilon.$$ Then $c' + \partial_{d+1}(x')$ is a fundamental cycle in $Z(M,\partial M;R)$, which satisfies $$\begin{aligned}
\sv{M, \partial M}_R \leq |c' + \partial_{d+1}(x')|_{1,R} &= |C_d(\operatorname{Id}_M;f)(c' + \partial_{d+1}(x'))|_{1,S}\\
&= \left|c - b + \partial_{d+1}C_{d+1}(\operatorname{Id}_M;f)(x') \right|_{1,S}\\
&= \left| c - \partial_{d+1}\left(x - C_{d+1}(\operatorname{Id}_M;f)(x')\right) \right|_{1,S}\\
&\leq \sv{M, \partial M}_S + \varepsilon + (d+2)\varepsilon.
\end{aligned}$$ Taking $\varepsilon \rightarrow 0$ proves the claim.
\[cor:densityp\] Let $p$ be a prime number, let $|\cdot|_p$ denote the $p$-adic absolute value on ${\mathbb{Z}}$ and ${\mathbb{Q}}$, and let $M$ be an oriented compact connected manifold. Then $$\begin{aligned}
\sv{M,\partial M}_{{\mathbb{Z}}_p} & = \sv{M,\partial M}_{{\mathbb{Z}}, |\cdot|_p}
\\
\sv{M,\partial M}_{{\mathbb{Q}}_p} & = \sv{M,\partial M}_{{\mathbb{Q}}, |\cdot|_p}.
\end{aligned}$$
By definition ${\mathbb{Z}}$ is $|\cdot|_p$-dense in ${\mathbb{Z}}_p$ and ${\mathbb{Q}}$ is $|\cdot|_p$-dense in ${\mathbb{Q}}_p$. Therefore, we can apply Proposition \[prop:density\].
In fact, we have the following simultaneous approximation result:
\[cor:approximation-at-finitely-many-primes\] Let $M$ be an oriented compact connected manifold and let $T$ be a finite set of prime numbers.
1. For every ${\varepsilon}> 0$, there is a $c
\in Z(M,\partial M;{\mathbb{Z}})$ such that for all $p \in T$ $$|c|_{1,p} \leq \sv{M, \partial M}_{{\mathbb{Z}}_p} + {\varepsilon}.$$
2. For every ${\varepsilon}> 0$, there is a $c
\in Z(M,\partial M;{\mathbb{Q}})$ such that for all $p \in T$ $$|c|_{1,p} \leq \sv{M, \partial M}_{{\mathbb{Q}}_p} + {\varepsilon}\quad\text{and} \quad
|c|_{1,{\mathbb{R}}} \leq \sv{M, \partial M}_{{\mathbb{R}}} + {\varepsilon}.$$
For every prime $p\in T$ we pick (using Corollary \[cor:densityp\]) a relative fundamental cycle $c_p \in Z(M,\partial M;{\mathbb{Z}})$ that almost realizes the $p$-adic simplicial volume. The density of ${\mathbb{Z}}$ in $\prod_{p \in T} {\mathbb{Z}}_p$ [@Neukirch (3.4)] allows us to find integers $a_p \in {\mathbb{Z}}$ (for $p \in T$) with $\sum_{p\in T} a_p =
1$ and such that $a_p$ is close to $1$ in the $p$-adic absolute value, but close to $0$ in the $q$-adic absolute value for all $q \in T \setminus \{p\}$. Then $$c = \sum_{p\in T} a_pc_p$$ is a relative fundamental cycle and approximates the $p$-adic simplicial volumes for all $p \in T$.
Assertion (2) follows from the same argument using the density of ${\mathbb{Q}}$ in the ring ${\mathbb{R}}\times \prod_{p\in T} {\mathbb{Q}}_p$.
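The integers $a_p$ in the proof can be produced explicitly via the Chinese remainder theorem. The following Python sketch illustrates the density argument (the further adjustment that makes the sum exactly $1$ is omitted; here the sum is only congruent to $1$ modulo each $p^N$):

```python
def partition_of_unity(T, N):
    """For each prime p in T, return an integer a_p with
    a_p = 1 mod p^N and a_p = 0 mod q^N for q in T other than p."""
    a = {}
    for p in T:
        M = 1
        for q in T:
            if q != p:
                M *= q**N
        # pow(M, -1, p**N) is the inverse of M modulo p^N (Python >= 3.8)
        a[p] = M * pow(M, -1, p**N)
    return a

T, N = [2, 3, 5], 4
a = partition_of_unity(T, N)
s = sum(a.values())
for p in T:
    assert a[p] % p**N == 1                              # close to 1 p-adically
    assert all(a[p] % q**N == 0 for q in T if q != p)    # close to 0 q-adically
    assert s % p**N == 1        # the sum is 1 up to p-adic error p^(-N)
```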
\[prop:ZpQp\] Let $M$ be an oriented compact connected manifold. If $p$ is a prime number such that $\sv{M,\partial M}_{{\mathbb{Q}}_p} < p$ then $$\sv{M,\partial M}_{{\mathbb{Q}}_p} = \sv{M,\partial M}_{{\mathbb{Z}}_p}.$$ If $M$ is closed, $\dim M$ is even, and $\sv{M}_{{\mathbb{Q}}_p} < 2p$, then $\sv{M}_{{\mathbb{Q}}_p} = \sv{M}_{{\mathbb{Z}}_p}$.
In particular: For almost all primes $p$, we have $$\sv{M,\partial M}_{{\mathbb{Q}}_p} = \sv{M,\partial M}_{{\mathbb{Z}}_p}.$$
By Corollary \[cor:sandwichp\], we only need to take care of the estimate $$\sv{M,\partial M}_{{\mathbb{Q}}_p} \geq \sv{M,\partial M}_{{\mathbb{Z}}_p}.$$ If $\Vert M,\partial M \Vert_{{\mathbb{Q}}_p} < p$, then every relative fundamental cycle $\sum_{j=1}^k a_j \cdot \sigma_j \in
C_*(M;{\mathbb{Q}}_p)$ with norm less than $p$ satisfies $|a_j|_p < p$ for all $j \in \{1,\dots, k\}$ and so $a_j \in {\mathbb{Z}}_p$.
Suppose that $M$ is closed, $\dim M$ is even and that $\sv{M}_{{\mathbb{Q}}_p} < 2p$. We claim that every fundamental cycle $c =
\sum_{j=1}^k a_j \cdot \sigma_j \in C_*(M;{\mathbb{Q}}_p)$ with $|c|_{1,p} <
2p$ lies in $C_*(M;{\mathbb{Z}}_p)$. Indeed, suppose that, say, $a_1
\not\in {\mathbb{Z}}_p$; then $|a_1|_p = p$ and $|a_j|_p \leq 1$ for all $j
> 1$. We multiply $c$ by $p$ to observe that the simplex $\sigma_1$ is a cycle modulo $p$; this is impossible, since an even-dimensional simplex has an odd number of faces, which are summed up with alternating signs.
For all primes $p$, we have $\sv{M,\partial M}_{{\mathbb{Q}}_p} \leq
\sv{M,\partial M}_{{\mathbb{Z}}}$ (Corollary \[cor:sandwichp\]). Therefore, each prime $p >
\sv{M,\partial M}_{{\mathbb{Z}}}$ satisfies the hypothesis of the first part.
We will continue the investigation of the relation between the ${\mathbb{Z}}_p$-version and the ${\mathbb{Q}}_p$-version with slightly different methods in Section \[subsec:aaprimes\].
Scaling the fundamental class
-----------------------------
The definition of the simplicial volume $\sv{\args}_R$ with coefficients in a seminormed ring $R$ can clearly be extended to all homology classes in singular homology $H_*(\args;R)$ with $R$-coefficients.
\[prop:scalingp\] Let $p$ be a prime number and let $M$ be an oriented compact connected manifold. Then the sequence $(p^m \cdot \sv{p^m \cdot
[M,\partial M]_{{\mathbb{Z}}_p}}_{{\mathbb{Z}}_p})_{m \in {\mathbb{N}}}$ is monotonically decreasing and $$\sv{M,\partial M}_{{\mathbb{Q}}_p}
= \lim_{m \rightarrow \infty} \;
p^m \cdot \sv{p^m \cdot [M,\partial M]_{{\mathbb{Z}}_p}}_{{\mathbb{Z}}_p}.$$
If $m \in {\mathbb{N}}_{>0}$ and $c \in C_{\dim M}(M;{\mathbb{Z}}_p)$ is a relative cycle that represents $p^m \cdot [M,\partial M]_{{\mathbb{Z}}_p}$, then $p
\cdot c$ represents $p^{m+1} \cdot [M,\partial M]_{{\mathbb{Z}}_p}$ and hence $$p^{m+1} \cdot \sv{p^{m+1}\cdot [M,\partial M]}_{{\mathbb{Z}}_p} \leq p^{m+1} \cdot |p\cdot c|_1 = p^{m} \cdot |c|_1.$$ In addition, the chain $p^{-m} \cdot c$ is a relative ${\mathbb{Q}}_p$-fundamental cycle of $(M,\partial M)$ and so $\sv{M,\partial M}_{{\mathbb{Q}}_p}
\leq p^m \cdot |c|_{1,p}.
$ Taking the infimum over all such $c$ implies monotonicity of the sequence and shows that $$\sv{M,\partial M}_{{\mathbb{Q}}_p} \leq p^m \cdot \sv{ p^m \cdot [M,\partial M]_{{\mathbb{Z}}_p}}_{{\mathbb{Z}}_p}.$$ Conversely, let $\varepsilon \in {\mathbb{R}}_{>0}$ and let $c \in
Z(M,\partial M;{\mathbb{Q}}_p)$ with $|c|_{1,p} \leq \sv{M,\partial
M}_{{\mathbb{Q}}_p} + \varepsilon$. The relative cycle $c$ has only finitely many coefficients; hence, there exists an $r \in {\mathbb{N}}$ such that all coefficients of $c$ lie in $p^{-r} \cdot {\mathbb{Z}}_p$. Thus, for all $m \in {\mathbb{N}}_{\geq r}$, the relative cycle $p^m \cdot c$ represents $p^m \cdot [M,\partial M]_{{\mathbb{Z}}_p}$, which yields $$\sv{p^m \cdot [M,\partial M]_{{\mathbb{Z}}_p}}_{{\mathbb{Z}}_p}
\leq p^{-m} \cdot |c|_{1,p}
\leq p^{-m} \cdot \sv{M,\partial M}_{{\mathbb{Q}}_p} + p^{-m} \cdot \varepsilon$$ for all $m \in {\mathbb{N}}_{\geq r}$. Taking $\varepsilon \rightarrow 0$ then proves the claim.
The degree estimate
-------------------
\[prop:degestimate\] Let $(R,|\cdot|_R)$ be a seminormed ring and let $f \colon
M\longrightarrow N$ be a continuous map between oriented compact connected manifolds of the same dimension. If $\deg f \in R^\times
\cup \{0\}$, then $$\lvert\deg f\rvert_R \cdot \sv{N, \partial N}_R
\leq \sv{M,\partial M}_R.$$
If $\deg f = 0$, the assertion is obvious. Therefore, we may assume that $\deg f \in R^\times$. If $c \in Z(M, \partial M;R)$, then, by definition of the mapping degree, $1/\deg f \cdot C_*(f;R)(c) \in
Z(N, \partial N;R)$. Therefore, $$\lvert \deg f\rvert_R \cdot \sv{N,\partial N}_R
\leq \lvert \deg f \rvert_R \cdot \frac1{\lvert \deg f \rvert_R} \cdot \bigl|C_*(f;R)(c)\bigr|_{1,R}
\leq |c|_{1,R}.$$ We can now take the infimum over all $c$.
\[cor:degestimatep\] Let $p$ be a prime number and let $f \colon M \longrightarrow N$ be a continuous map between oriented compact connected manifolds of the same dimension.
1. Then $\lvert\deg f \rvert_p \cdot \sv{N,\partial N}_{{\mathbb{Q}}_p}
\leq \sv{M,\partial M}_{{\mathbb{Q}}_p}$.
2. If $p$ is coprime to $\deg f$, then $\sv{N,\partial N}_{{\mathbb{Z}}_p} \leq \sv{M,\partial M}_{{\mathbb{Z}}_p}$.
This is an immediate consequence of Proposition \[prop:degestimate\].
Let $R$ be a seminormed ring and let $f \colon M \longrightarrow N$ be a finite $\ell$-sheeted covering map of oriented compact connected manifolds of the same dimension. Then $$\sv{M,\partial M}_R \leq \ell \cdot \sv{N,\partial N}_R.$$
Let $c \in Z(N,\partial N;R)$, say $c = \sum_{j=1}^k a_j \cdot \sigma_j$. Then the transfer $$\tau(c) := \sum_{j=1}^k a_j \cdot \tau(\sigma_j) \in C_{\dim M}(M;R)$$ is a relative $R$-fundamental cycle of $(M,\partial M)$. Since $\tau(\sigma_j)$ is a sum of $\ell$ distinct singular simplices, the triangle inequality implies the claim.
Poincaré duality and homological estimates
==========================================
We will now use Poincaré duality to establish Betti number estimates, use the semi-simplicial sets associated with fundamental cycles to study the dependence on the prime, and derive a simple product estimate. Variations of these arguments have been used before [@gromovmetric p. 301][@sauervolgrowth Section 3.2] in related situations.
Poincaré duality
----------------
Let $M$ be an oriented compact connected $d$-manifold, let $R$ be a ring with unit and let $c = \sum_{j=1}^k a_j\sigma_j \in Z(M,\partial
M;R)$. By Poincaré-Lefschetz duality [@Hatcher 3.43], for each $n \in {\mathbb{N}}$, the cap product map $$\begin{aligned}
H^{d - n}(M,\partial M;R) &\longrightarrow H_n(M;R) \nonumber
\\
[f] & \longmapsto
[f] \cap [c]
= \pm \biggl[\sum_{j=1}^k a_j \cdot f(\sigma_j|_{[n,\dots,d]}) \cdot \sigma_j|_{[0,\dots,n]}\biggr] \label{eq:duality-cap-formula}\end{aligned}$$ is an $R$-isomorphism. There is an analogous duality between $H^{d-n}(M;R)$ and $H_n(M,\partial M;R)$.
The semi-simplicial set generated by a fundamental cycle
--------------------------------------------------------
Let $M$ be an oriented compact connected $d$-manifold and let $c =
\sum_{j=1}^k a_j \sigma_j \in C_d(M;R)$ be a relative fundamental cycle in reduced form. For each $n$ we define $X_n$ to be the set of all $n$-dimensional faces of the simplices $\sigma_1,\dots, \sigma_k$. The ordinary face maps endow $X = (X_n)_{n \in {\mathbb{N}}}$ with the structure of a semi-simplicial set, which will be called the *semi-simplicial set generated by $c$*. The semi-simplicial subset of simplices of $X$ contained in the boundary $\partial M$ will be denoted $\partial X$. There is a canonical continuous map of pairs $$\rho_c \colon (X^{\rm{top}},\partial X^{\rm{top}}) \to (M, \partial M)$$ from the geometric realization $X^{\rm{top}}$ of $X$ into $M$. The cycle $c$ defines a relative homology class in $H_d(X,\partial X; R)$, which will be denoted by $[X,\partial X]$.
\[lem:comparison-simplicial\] Let $X$ be the semi-simplicial set generated by a relative fundamental cycle $c$ of an oriented compact connected manifold $M$. The maps $$\begin{aligned}
&H_n(\rho_c;R) \colon H_n(X;R) \to H_n(M;R), \\
&H_n(\rho_c; R) \colon H_n(X,\partial X;R) \to H_n(M, \partial M;R)\end{aligned}$$ are surjective for all $n$. The maps $$\begin{aligned}
&H^n(\rho_c;R) \colon H^n(M;R) \to H^n(X;R),\\
&H^n(\rho_c;R) \colon H^n(M,\partial M;R) \to H^n(X, \partial X;R)\end{aligned}$$ are injective for all $n \in {\mathbb{N}}$.
We write $c = \sum_{j=1}^k a_j \sigma_j \in C_d(M;R)$. In view of Poincaré duality (as in ), every class in $H_n(M;R)$ (respectively in $H_n(M,\partial M;R)$) can be represented by a cycle supported on the faces $\sigma_1|_{[0,\dots,n]}, \dots, \sigma_k|_{[0,\dots,n]}$. This proves surjectivity, since all these cycles lie in the image of $C_n(\rho_c;R)$.
The injectivity of $H^n(\rho_c;R) \colon H^n(M;R) \to H^n(X;R)$ follows from the surjectivity statement using the commutative diagram $$\begin{tikzcd}
H^n(M;R) \arrow{rr}{[M,\partial M]\cap } \arrow[d, "H^n(\rho_c;R)"]& & H_{d-n}(M, \partial M; R) \\
H^n(X;R) \arrow{rr}{ [X,\partial X]\cap }& & H_{d-n}(X,\partial X;R) \arrow[twoheadrightarrow]{u}{H_{d-n}(\rho_c;R)}
\end{tikzcd}$$ where the cap product with $[M,\partial M]$ is an isomorphism by duality. The last assertion follows similarly, interchanging relative and absolute homology.
Dependence on the prime {#subsec:aaprimes}
-----------------------
Let $M$ be an oriented compact connected $d$-manifold. We want to show that $$\sv{M,\partial M}_{({\mathbb{F}}_p)} = \sv{M,\partial M}_{{\mathbb{Z}}_p} = \sv{M,\partial M}_{{\mathbb{Q}}_p} = \sv{M,\partial M}_{({\mathbb{Q}})}$$ holds for almost all prime numbers $p$. The argument given here is based on the idea of the corresponding result for weightless simplicial volumes [@loehfp Theorem 1.2], reformulated in the language of semi-simplicial sets. We recall that the equality $\sv{M,\partial M}_{{\mathbb{Z}}_p} = \sv{M,\partial M}_{{\mathbb{Q}}_p}$ holds for almost all primes by Proposition \[prop:ZpQp\].
By Corollary \[cor:universalZ\], the inequality $\sv{M,\partial M}_{({\mathbb{F}}_p)} \leq \sv{M,\partial M}_{{\mathbb{Z}}}$ holds for all primes. In particular, the weightless ${\mathbb{F}}_p$-simplicial volume is always attained on a relative cycle with at most $k := \sv{M,\partial M}_{{\mathbb{Z}}}$ simplices.
Say $d := \dim M$. There are only finitely many distinct isomorphism classes of pairs $(X,\partial X)$ consisting of a $d$-dimensional semi-simplicial set $X$ generated by at most $k$ simplices of dimension $d$ and a semi-simplicial subset $\partial X$; we write $S^d_k$ for a set of representatives of these isomorphism classes. Let $(X,\partial X) \in
S^d_k$. The boundary map $\partial_d \colon C_d(X, \partial X;{\mathbb{Z}})
\to C_{d-1}(X,\partial X; {\mathbb{Z}})$ is a linear map between free ${\mathbb{Z}}$-modules of finite rank and, as such, has a finite number of elementary divisors. In particular, there is a cofinite set $W$ of primes that do not divide any elementary divisor of a boundary map $\partial_d$ of an $(X, \partial X) \in S^d_k$.
Let $p \in W$ and let ${\mathbb{Z}}_{(p)}$ denote the localization of ${\mathbb{Z}}$ at the prime ideal $(p) \subseteq {\mathbb{Z}}$. Let $c$ be a relative fundamental cycle in $C_d(M;{\mathbb{F}}_p)$ that realizes the mod $p$ simplicial volume. We will show that $c$ lifts to a relative cycle $\tilde{c} \in C_d(M;{\mathbb{Z}}_{(p)})$ supported on the same set of simplices; by Proposition \[prop:monotonicity\] this yields $$\sv{M,\partial M}_{({\mathbb{Q}})} \leq \sv{M,\partial M}_{({\mathbb{Z}}_{(p)})} \leq \sv{M, \partial M}_{({\mathbb{F}}_p)}.$$ Using that $\sv{M,\partial M}_{({\mathbb{F}}_p)} \leq \sv{M,\partial
M}_{({\mathbb{Q}})}$ holds for almost all primes [@loehfp Proof of Theorem 1.2], the equality of the two weightless terms for almost all primes follows. Moreover, ${\mathbb{Z}}_{(p)}$ is a dense subring of ${\mathbb{Z}}_p$ and therefore $\sv{M,\partial M}_{{\mathbb{Z}}_p} \leq \sv{M,
\partial M}_{({\mathbb{F}}_p)}$; by Corollary \[cor:sandwichp\] this implies equality.
Thus it remains to show that $c$ admits a lift: Consider the semi-simplicial set $X$ generated by $c$. Since $p\in W$, all elementary divisors of the boundary map $\partial_d \colon C_d(X, \partial
X;{\mathbb{Z}}_{(p)}) \to C_{d-1}(X,\partial X; {\mathbb{Z}}_{(p)})$ are $1$. In other words, after a suitable choice of bases the ${\mathbb{Z}}_{(p)}$-linear map $\partial_d$ can be represented by a diagonal matrix with entries $0$ and $1$. Based on this description it is an elementary observation that every element in the kernel of $\bar{\partial}_d\colon C_d(X,
\partial X;{\mathbb{F}}_p) \to C_{d-1}(X,\partial X; {\mathbb{F}}_p)$ lifts to an element in the kernel of $\partial_d$ [@loehfp]. To conclude we note that a class in $H_d(M,\partial M;{\mathbb{Z}}_{(p)})$ is a fundamental class if and only if it reduces to a fundamental class in $H_d(M,\partial M; {\mathbb{F}}_p)$.
Betti number estimates {#subsec:betti}
----------------------
\[prop:rel-Betti-bounds\] Let $M$ be an oriented compact connected manifold, let $(R,|\cdot|_R)$ be a seminormed principal ideal domain with $|x|_R
\geq 1$ for all $x \in R \setminus \{0\}$, and let $n \in {\mathbb{N}}$. Then $$\operatorname{rk}_R H_n(M;R) \leq \sv {M,\partial M}_R.$$
We proceed as in the closed case [@FLPS Lemma 4.1]: Let $d := \dim M$ and let $c \in Z(M,\partial M;R)$, say $c = \sum_{j=1}^k a_j \cdot \sigma_j$. By Poincaré-Lefschetz duality, the cap product map given in is an $R$-isomorphism. Hence, $H_n(M;R)$ is a subquotient of an $R$-module that is generated by $k$ elements. Therefore, $\operatorname{rk}_R H_n(M;R) \leq
k$. Moreover, the condition on $|\cdot|_R$ implies that $k \leq
|c|_{1,R}$. Taking the infimum over all $c$ gives the desired estimate.
\[cor:bettiZp\] If $p \in {\mathbb{N}}$ is prime and $M$ is an oriented compact connected manifold, then, for all $n \in {\mathbb{N}}$, we have $$b_n(M; {\mathbb{F}}_p)
\leq \sv{M,\partial M}_{({\mathbb{F}}_p)}
\leq \sv{M,\partial M}_{{\mathbb{Z}}_p}.$$
The first inequality follows from Proposition \[prop:rel-Betti-bounds\], the second inequality is contained in Corollary \[cor:sandwichp\].
Here is a refined $p$-torsion estimate of the same spirit:
\[prop:rel-Betti-bounds-2\] Let $M$ be an oriented compact connected manifold and let $n,m \in {\mathbb{N}}$. Then $$\dim_{{\mathbb{F}}_p} p^{m} H_n(M;{\mathbb{Z}}/p^{m+1}{\mathbb{Z}}) \leq \sv{ p^{m} \cdot [M,\partial M]}_{({\mathbb{Z}}/p^{m+1}{\mathbb{Z}})}.$$
Let us first note that $p^m H_n(M;{\mathbb{Z}}/p^{m+1}{\mathbb{Z}})$ indeed carries a canonical ${\mathbb{F}}_p$-vector space structure.
We now proceed as in the proof of Proposition \[prop:rel-Betti-bounds\] and pick a cycle $c = \sum_{j=1}^k a_j \cdot \sigma_j$ that represents $p^m[M,\partial M]$. We may assume that $k = \sv{ p^{m}[M,\partial
M]}_{({\mathbb{Z}}/p^{m+1}{\mathbb{Z}})}$.
By Poincaré-Lefschetz duality (see ), the cap product with $c$ yields a surjection $$H^{d - n}(M,\partial M;{\mathbb{Z}}/p^{m+1}{\mathbb{Z}}) \longrightarrow p^m H_n(M;{\mathbb{Z}}/p^{m+1}{\mathbb{Z}})$$ and thus every homology class on the right hand side can be represented by a chain on the simplices $\sigma_1|_{[0,\dots,n]},
\dots, \sigma_k|_{[0,\dots,n]}$; i.e., the right hand side is isomorphic to a subquotient of $({\mathbb{Z}}/p^{m+1}{\mathbb{Z}})^k$. Since every subquotient of $({\mathbb{Z}}/p^{m+1}{\mathbb{Z}})^k$ can be generated by at most $k$ elements, we deduce that $$\dim_{{\mathbb{F}}_p} p^m H_n(M;{\mathbb{Z}}/p^{m+1}{\mathbb{Z}}) \leq k = \sv{ p^{m}\cdot [M,\partial M]}_{({\mathbb{Z}}/p^{m+1}{\mathbb{Z}})}. \qedhere$$
\[cor:bettiQp\] If $p \in {\mathbb{N}}$ is prime and $M$ is an oriented compact connected manifold, then, for all $n \in {\mathbb{N}}$, we have $$b_n(M; {\mathbb{Q}})
\leq \sv{M,\partial M}_{{\mathbb{Q}}_p}.$$
It follows from the universal coefficient theorem that $$H_n (M;{\mathbb{Z}}/p^{m+1}{\mathbb{Z}}) \cong ({\mathbb{Z}}/p^{m+1}{\mathbb{Z}})^{b_n(M;{\mathbb{Q}})} \oplus
T_{m+1}$$ for a finite abelian group $T_{m+1}$ of exponent at most $p^{m+1}$. In particular, we observe that $\dim_{{\mathbb{F}}_p} p^{m}H_n
(M;{\mathbb{Z}}/p^{m+1}{\mathbb{Z}}) \geq b_n(M;{\mathbb{Q}})$.
Now Proposition \[prop:rel-Betti-bounds-2\] and Corollary \[cor:sandwichp\] show that $$\begin{aligned}
b_n(M;{\mathbb{Q}}) &\leq \sv{ p^{m}[M,\partial M]}_{({\mathbb{Z}}/p^{m+1}{\mathbb{Z}})}\\
&\leq p^{m} \sv{ p^{m}[M,\partial M]}_{{\mathbb{Z}}/p^{m+1}{\mathbb{Z}}}\\
&\leq p^{m} \sv{ p^{m}[M,\partial M]}_{{\mathbb{Z}}_p}.\end{aligned}$$ As $m$ tends to $\infty$ we apply Proposition \[prop:scalingp\] to complete the proof.
Maximality of the fundamental class
-----------------------------------
Similarly to the weightless case [@loehfp Proposition 2.6, Proposition 2.10], also in the $p$-adic case the fundamental class has maximal norm, which in particular leads to a basic estimate for products.
\[prop:fclmax\] Let $(R,|\cdot|)$ be a seminormed ring that satisfies $|x| \leq 1$ for all $x \in R$, let $M$ be an oriented compact connected manifold, let $n \in {\mathbb{N}}$, and let $\alpha \in H_n(M;R)$ or $\alpha \in H_n(M, \partial M;R)$. Then $$\| \alpha \|_{1,R} \leq \sv{M,\partial M}_R.$$
The proof works as in the weightless case [@loehfp Proposition 2.6]: Let $\alpha \in H_n(M;R)$ (the other case works in the same way); moreover, let $\varphi \in H^{d-n}(M,\partial M;R)$ be Poincaré dual to $\alpha$, i.e., $\varphi \cap [M,\partial M]_R = \alpha$.
Let $f \in C^{d-n}(M,\partial M;R)$ be a relative cocycle representing $\varphi$ and let $c = \sum_{j=1}^k a_j \sigma_j \in Z(M,\partial M;R)$. Then the explicit Poincaré duality formula shows that $$z := \pm\sum_{j=1}^k a_j \cdot f(\sigma_j|_{[n,\dots,d]}) \cdot \sigma_j|_{[0,\dots, n]}$$ is a cycle representing $\alpha$. In particular, the hypothesis on the seminorm on $R$ implies that $$\|\alpha\|_{1,R} \leq |z|_{1,R}
\leq \sum_{j=1}^k |a_j| \cdot \bigl|f(\sigma_j|_{[n,\dots,d]})\bigr|
\leq \sum_{j=1}^k |a_j| = |c|_{1,R}.$$ Taking the infimum over all $c$ proves that $\|\alpha\|_{1,R} \leq \sv{M,\partial M}_R$.
\[cor:prod\] Let $(R,|\cdot|)$ be a seminormed ring that satisfies $|x| \leq 1$ for all $x \in R$, and let $M$ and $N$ be oriented closed connected manifolds. Then $$\max\bigl( \sv M_R, \sv N_R \bigr)
\leq \sv {M \times N}_R
\leq {{\dim M + \dim N} \choose {\dim M}} \cdot \sv M _R \cdot \sv N_R.$$
The upper bound is the usual homological cross product argument [@bp Theorem F.2.5]. Let $x \in N$. Then the inclusion $M \longrightarrow M \times \{x\} \longrightarrow M \times
N$ and the projection $M \times N \longrightarrow M$ show that $H_*(M;R)$ embeds isometrically into $H_*(M \times N;R)$ [@loehfp proof of Proposition 2.10]. We can then apply Proposition \[prop:fclmax\] to $M$ and $N$.
In particular, Proposition \[prop:fclmax\] and Corollary \[cor:prod\] apply to ${\mathbb{Z}}_p$:
Let $p$ be a prime and let $M$ and $N$ be oriented closed connected manifolds. Then $$\max\bigl( \sv M_{{\mathbb{Z}}_p}, \sv N_{{\mathbb{Z}}_p} \bigr)
\leq \sv {M \times N}_{{\mathbb{Z}}_p}
\leq {{\dim M + \dim N} \choose {\dim M}} \cdot \sv M _{{\mathbb{Z}}_p} \cdot \sv N_{{\mathbb{Z}}_p}.$$
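For instance, combined with the sphere computations of Example \[exa:sphere\] below, this yields explicit (if rough) bounds in the simplest even-dimensional case:

```latex
\max\bigl( \sv{S^2}_{{\mathbb{Z}}_p}, \sv{S^2}_{{\mathbb{Z}}_p} \bigr) = 2
  \leq \sv{S^2 \times S^2}_{{\mathbb{Z}}_p}
  \leq \binom{4}{2} \cdot \sv{S^2}_{{\mathbb{Z}}_p} \cdot \sv{S^2}_{{\mathbb{Z}}_p}
  = 6 \cdot 2 \cdot 2
  = 24.
```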
It might be tempting to go for a duality principle between singular homology with ${\mathbb{Q}}_p$-coefficients and bounded cohomology with ${\mathbb{Q}}_p$-coefficients. However, one should be aware that the considered $\ell^1$-norm on the singular chain complex is an archimedean construction (the $\ell^1$-norm) of a non-archimedean norm (the norm on ${\mathbb{Q}}_p$). In this mixed situation, no suitable version of the Hahn-Banach theorem can hold.
Basic examples {#sec:examples}
==============
Spheres, tori, projective spaces
--------------------------------
\[exa:sphere\] Let $d \in {\mathbb{N}}$. It is known that [@loeh-odd] $$\sv{S^d}_{{\mathbb{Z}}}
=
\begin{cases}
1 & \text{if $d$ is odd}\\
2 & \text{if $d$ is even}.
\end{cases}$$ Now let $p$ be a prime number.
- If $d$ is odd, then $\sv{S^d}_{{\mathbb{Z}}_p} = \sv{S^d}_{{\mathbb{Q}}_p} = 1$: The Betti number estimate gives $1 = b_0(S^d;{\mathbb{Q}}) \leq \sv{S^d}_{{\mathbb{Q}}_p}
\leq \sv{S^d}_{{\mathbb{Z}}_p}$ (Corollary \[cor:bettiQp\], Corollary \[cor:sandwichp\]). Moreover, we also have $\sv{S^d}_{{\mathbb{Z}}_p} \leq \sv{S^d}_{\mathbb{Z}}= 1$.
- If $d$ is even, then $\sv{S^d}_{{\mathbb{Z}}_p} = \sv{S^d}_{{\mathbb{Q}}_p} = 2$: We have $$\sv{S^d}_{{\mathbb{Q}}_p} \leq \sv{S^d}_{{\mathbb{Z}}_p} \leq \sv{S^d}_{\mathbb{Z}}= 2$$ Thus, it suffices to show that $\sv{S^d}_{{\mathbb{Q}}_p} \geq 2$. In view of Corollary \[cor:densityp\], we only need to show that $\sv{S^d}_{{\mathbb{Q}},|\cdot|_p} \geq 2$. Let $c \in Z(S^d; {\mathbb{Q}})$. Then we can write $c$ in the form $$c = \sum_{j \in J} \frac{a_j}m \cdot \sigma_j
+ \sum_{k \in K} p \cdot \frac{b_k}m \cdot \tau_k,$$ where $J$ and $K$ are finite sets, the coefficients $a_j$, $b_k$ and $m$ are integral, and where $p$ does *not* divide $m$ or the $a_j$ with $j \in J$. Then $$m \cdot c = \sum_{j \in J} a_j \cdot \sigma_j + p \cdot \sum_{k \in K} b_k \cdot \tau_k$$ is a cycle in $C_d(S^d;{\mathbb{Z}})$, representing $m \cdot [S^d]_{\mathbb{Z}}$. Because $p$ does not divide $m$, we obtain that $J\neq \emptyset$. (If $J$ were empty, then $\sum_{k\in K} b_k \cdot \tau_k$ would also be a cycle, and hence $m\cdot [S^d]_{\mathbb{Z}}$ would be divisible by $p$, which is impossible.)
Let $\varrho$ denote the constant singular $(d-1)$-simplex on the one-point space $\bullet$. Applying the chain map induced by the constant map $S^d \longrightarrow \bullet$ to the equation $\partial (m \cdot c) = 0$ shows that $$\begin{aligned}
0 & = \sum_{j \in J} a_j \cdot \varrho + p \cdot \sum_{k \in K} b_k \cdot \varrho
\\
& = \biggl( \sum_{j \in J} a_j + p \cdot \sum_{k \in K} b_k
\biggr) \cdot \varrho
\end{aligned}$$ holds in $C_{d-1}(\bullet;{\mathbb{Z}})$. Therefore, we obtain $$0 = \sum_{j \in J} a_j + p \cdot \sum_{k \in K} b_k.$$ Because $J \neq \emptyset$ and $p$ does not divide any of the $a_j$ with $j \in J$, we see that $J$ contains at least two elements $i,j$. In particular, $$|c|_{1,p} \geq |a_i|_p + |a_j|_p \geq 1+ 1 = 2.$$ Therefore, $\|S^d\|_{{\mathbb{Q}},|\cdot|_p} \geq 2$.
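To illustrate the $\ell^1$-count in this argument with concrete numbers (a toy coefficient pattern, not an actual cycle on $S^d$): for $p = 2$, a chain of the shape

```latex
c = \tfrac35\,\sigma_1 - \tfrac35\,\sigma_2 + 2 \cdot \tfrac75\,\tau
\qquad\text{has}\qquad
|c|_{1,2}
  = \bigl|\tfrac35\bigr|_2 + \bigl|{-\tfrac35}\bigr|_2 + \bigl|\tfrac{14}5\bigr|_2
  = 1 + 1 + \tfrac12
  \geq 2.
```

Only the simplices indexed by $J$ (coefficients with $p$-adic absolute value $1$) can contribute a full $1$ each; this is why $|J| \geq 2$ forces the lower bound $2$.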
\[cor:lowerboundp\] Let $p$ be a prime number and let $M$ be an oriented closed connected (non-empty) manifold.
1. Then $\sv{M,\partial M}_{{\mathbb{Q}}_p} \geq 1$.
2. If $\dim M$ is even, then $\sv{M,\partial M}_{{\mathbb{Q}}_p} \geq 2$.
As $M$ is non-empty and closed, there exists a map $M
\longrightarrow S^{\dim M}$ of degree $1$. We can now apply the degree estimate (Corollary \[cor:degestimatep\]) and the computation for spheres (Example \[exa:sphere\]).
\[exa:torus\] Let $p$ be a prime number. Then the $2$-torus $T^2$ satisfies $$\sv{T^2}_{{\mathbb{Z}}_p} = \sv{T^2}_{{\mathbb{Q}}_p} = 2.$$ On the one hand, we can easily represent the fundamental class of $T^2$ by two singular triangles; on the other hand, Corollary \[cor:lowerboundp\] gives the lower bound.
\[exa:proj\] Let $d \in {\mathbb{N}}$ be odd and let $p$ be a prime number.
*Case $p > 2$:* If $p > 2$, then $\sv{{\mathbb{R}}P^d}_{{\mathbb{Z}}_p} =
\sv{{\mathbb{R}}P^d}_{{\mathbb{Q}}_p} = 1$: From Corollary \[cor:lowerboundp\], we obtain $\sv{{\mathbb{R}}P^d}_{{\mathbb{Z}}_p} \geq \sv{{\mathbb{R}}P^d}_{{\mathbb{Q}}_p} \geq
1$. Furthermore, the double covering $S^d \longrightarrow {\mathbb{R}}P^d$ and the computation for spheres (Example \[exa:sphere\]) show that $$\sv{{\mathbb{R}}P^d}_{{\mathbb{Z}}_p} \leq \sv{S^d}_{{\mathbb{Z}}_p} = 1
\qand
\sv{{\mathbb{R}}P^d}_{{\mathbb{Q}}_p} \leq \sv{S^d}_{{\mathbb{Q}}_p} =1$$ using $p > 2$ in Corollary \[cor:degestimatep\].
It should be noted that $\sv{{\mathbb{R}}P^d}_{\mathbb{Z}}=
2$ [@loeh-odd Proposition 4.4][@loehfp Example 2.7].
*Case $p=2$:* We have $\sv{{\mathbb{R}}P^d}_{{\mathbb{Z}}_2} = 2$, because $\sv{{\mathbb{R}}P^d}_{({\mathbb{F}}_2)} = 2$ [@loehfp Example 2.7] and $\sv{{\mathbb{R}}P^d}_{{\mathbb{Z}}} = 2$ [@loeh-odd Proposition 4.4] as well as $\sv{\args}_{({\mathbb{F}}_2)} \leq \sv{\args}_{{\mathbb{Z}}_2} \leq \sv{\args}_{\mathbb{Z}}$ (Corollary \[cor:sandwichp\]).
In addition, we claim that $\sv{{\mathbb{R}}P^d}_{{\mathbb{Q}}_2} = 2$. We know that $\sv{{\mathbb{R}}P^d}_{{\mathbb{Q}}_2} \leq \sv{{\mathbb{R}}P^d}_{{\mathbb{Z}}_2} =
2$. *Assume* for a contradiction that $\sv{{\mathbb{R}}P^d}_{{\mathbb{Q}}_2}
< 2$. Then Proposition \[prop:ZpQp\] implies that $\sv{{\mathbb{R}}P^d}_{{\mathbb{Q}}_2} = \sv{{\mathbb{R}}P^d}_{{\mathbb{Z}}_2} = 2$, which yields a contradiction.
Surfaces
--------
Recall that $\Sigma_g$ denotes the oriented closed connected surface of genus $g$ and $\Sigma_{g,b}$ denotes the surface of genus $g$ with $b \geq 1$ boundary components.
The case $\Sigma_0 \cong S^2$ is already contained in Example \[exa:sphere\].
We first prove the inequalities “$\geq$”. Let $M$ be $\Sigma_g$ or $\Sigma_{g,b}$ and let $K$ denote the field of fractions of $R$. We endow $K$ with the trivial absolute value. Using the inequality $\sv{M,\partial M}_{(R)} \geq \sv{M,\partial M}_{(K)}$ from Proposition \[prop:monotonicity\], we see that it is sufficient to establish the lower bound for the field $K$.
Let $c = \sum_{j=1}^k a_j \sigma_j \in C_2(M;K)$ be a fundamental cycle of minimal norm, i.e., $| c |_1 = k$ is minimal. Consider the semi-simplicial set $X$ generated by $c$ and its chain complex $$C_0(X,\partial X;K) \stackrel{\partial_1}{\longleftarrow} C_1(X,\partial X;K) \stackrel{\partial_2}{\longleftarrow} C_2(X,\partial X;K) = C_2(X;K).$$ We observe that $\dim_K C_2(X;K) = k$ and we claim that the kernel of $\partial_2$ is $1$-dimensional; i.e., it is the line spanned by $c$. Assume for a contradiction that $\dim_K \ker(\partial_2) \geq
2$. In this case the relative fundamental cycles supported on $\{\sigma_1,\dots,\sigma_k\}$ form an affine subspace of dimension at least $1$ in $C_2(M;K)$. Using elementary linear algebra we deduce that there is a fundamental cycle supported on a proper subset of $\{\sigma_1,\dots,\sigma_k\}$, which contradicts the minimality of $k$.
The $k$ different simplices in $X_2$ have at most $3 k$ distinct faces. Moreover, since $H_2(M,\partial M; K) \to H_{1}(\partial M;K)$ maps the relative fundamental class of $M$ to a fundamental class of $\partial M$, the boundary of $c$ touches every connected component of $\partial M$ at least once. In other words, at most $3k - b$ faces of the simplices in $X_2$ are not contained in $\partial M$. As $c$ is a relative cycle, every face of $X_2$ that is not contained in the boundary occurs at least twice. We conclude that $2 \dim_K
C_1(X,\partial X; K) \leq 3k - b$.
By Lemma \[lem:comparison-simplicial\] we have the inequality $$\dim_K H_1(X,\partial X; K) \geq \dim_K H_1(M, \partial M; K) = 2g + b - 1 + \delta_{b,0}$$ and the following calculation completes the first part of the proof: $$\begin{aligned}
k -b +2 &= 3k -b - 2(k -1) \\
&\geq 2\dim_K C_1(X,\partial X;K) - 2 \dim_K B_1(X,\partial X; K)\\
&\geq 2 \dim_K H_1(X,\partial X; K) \geq 4g + 2b -2 + 2\delta_{b,0}.\end{aligned}$$ Moreover, in the pathological case of $\Sigma_{0,1}$, we have $\sv{\Sigma_{0,1}}_{(R)} \geq 1$ by Proposition \[prop:rel-Betti-bounds\].
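Solving the resulting inequality $k - b + 2 \geq 4g + 2b - 2 + 2\delta_{b,0}$ for the number $k$ of $2$-simplices recovers exactly the claimed lower bounds:

```latex
k \geq 4g + 3b - 4 + 2\delta_{b,0}
  = \begin{cases}
      4g - 2 & \text{if } b = 0,\\
      4g + 3b - 4 & \text{if } b \geq 1,
    \end{cases}
```

with the pathological case $(g,b) = (0,1)$ covered by the separate estimate above.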
In order to show that the lower bound is sharp, we construct explicit relative fundamental cycles with the desired number of $2$-simplices; this is done in Proposition \[prop:surfacesupper\] below for the integral simplicial volume. As integral simplicial volume is an upper bound for $\sv{\cdot}_{(R)}$ (Proposition \[prop:monotonicity\]), this suffices to complete the proof.
[TikZ macro definitions and figure code omitted: building blocks for the figures referenced in the proofs below.]
\[prop:surfacesupper\] Let $g \in {\mathbb{N}}$ and $b \in {\mathbb{N}}$. Then (with respect to the standard archimedean absolute value on ${\mathbb{Z}}$)
1. $\sv{\Sigma_g}_{{\mathbb{Z}}} = 4g - 2$ if $g \geq 1$ and
2. $\sv{\Sigma_{0,1}}_{{\mathbb{Z}}} = 1$ and $\sv{\Sigma_{g,b}}_{{\mathbb{Z}}} = 3b + 4g - 4$ for all $g \in {\mathbb{N}}$ and all $b \in {\mathbb{N}}_{\geq 1}$ with $(g,b) \neq (0,1)$.
The lower bounds follow from Theorem \[thm:surfaces\] and Proposition \[prop:monotonicity\]. Therefore, it suffices to establish the upper bounds: Let $g,b \in {\mathbb{N}}$. In the following pictures, boundary components are dashed and holes are shaded with stripes.
- In the case $g =0$, $b=1$, a single $2$-simplex suffices (Figure \[fig:g0b12\], left).
- In the case $g=0$ and $b=2$, two $2$-simplices suffice (Figure \[fig:g0b12\], right).
- If $g=0$ and $b\geq 3$, then Figure \[fig:g0b3\] shows how to construct a fundamental cycle consisting of $$1+ 1 + 3 (b-2) = 3 b -4$$ $2$-simplices. Here, we use $(b-2)$ triple building blocks of Figure \[fig:triplebuildingblock\], each consisting of three $2$-simplices (Figure \[fig:multitriple\]).
- If $g \geq 1$ and $b=0$, we use the classical decomposition of the $4g$-gon (whose edges will be identified according to the labels) into $4g -2$ simplices with the signs and orientations indicated in Figure \[fig:g1b0\].
- If $g \geq 1$ and $b \geq 1$, we can use the construction of Figure \[fig:g1b1\] with $(b-1)$ triple building blocks, where the upper part of the polygon is decomposed as in the closed higher genus case (Figure \[fig:g1b0\]). Hence, $$4 g - 2 + 1 + 3 (b-1)
= 4 g + 3 b - 4$$ $2$-simplices suffice.
[TikZ figure code omitted: Figures \[fig:g0b12\]–\[fig:g1b1\], i.e., the decompositions of $\Sigma_{0,1}$, $\Sigma_{0,2}$, the triple building block, $\Sigma_{0,b}$, the $4g$-gon, and $\Sigma_{g,b}$ used in the proof above.]
\[cor:surfaces-p-adic\] Let $g \in {\mathbb{N}}_{\geq 1}$ and let $p \in {\mathbb{N}}$ be prime.
1. Then $\sv{\Sigma_g}_{{\mathbb{Z}}_p} = 4g -2$.
2. If $p > 2g-1$, then $\sv{\Sigma_g}_{{\mathbb{Q}}_p} = 4g -2$.
*Ad 1.* From Theorem \[thm:surfaces\] and Proposition \[prop:surfacesupper\], we obtain $$4g -2 \leq \sv{\Sigma_g}_{({\mathbb{F}}_p)} \leq \sv{\Sigma_g}_{{\mathbb{Z}}_p} \leq \sv{\Sigma_g}_{{\mathbb{Z}}} \leq 4g-2$$ and thus the claimed equality.
*Ad 2.* This follows from the first part and Proposition \[prop:ZpQp\].
\[rem:new-computation\] Let $g \in {\mathbb{N}}_{\geq 2}$. Then the arguments above show that we can prove the identity $\sv{\Sigma_g} = 4g - 4$ for the classical simplicial volume without hyperbolic straightening: From Proposition \[prop:surfacesupper\] we know that $\sv{\Sigma_g}_{\mathbb{Z}}= 4 g -2$ (we proved this via semi-simplicial sets of cycles, without using hyperbolic straightening).
\(a) We have $\sv{\Sigma_g} \leq 4g - 4$: For the sake of completeness, we recall Gromov’s argument [@Gromov-vbc]. For each $k \in {\mathbb{N}}$, there exists a $k$-sheeted covering $\Sigma_{g_k} \longrightarrow \Sigma_g$, where $g_k = k
\cdot g -k +1$. Hence, we obtain (Proposition \[prop:degestimate\]) $$\sv{\Sigma_g} \leq \inf_{k \in {\mathbb{N}}} \frac{\sv{\Sigma_{g_k}}_{{\mathbb{Z}}}}k
= \inf_{k \in {\mathbb{N}}} \frac{4 \cdot (k \cdot g - k + 1) - 2}k
= \inf_{k \in {\mathbb{N}}} \Bigl( 4 \cdot (g-1) + \frac 2k \Bigr)
= 4 g - 4.$$
\(b) We have $\sv{\Sigma_g} \geq 4g - 4$: Because ${\mathbb{Q}}$ is dense in ${\mathbb{R}}$, we have (Proposition \[prop:density\]) $$\sv{\Sigma_g }
= \sv{\Sigma_g}_{\mathbb{Q}}= \inf \Bigl\{ \frac{\sv{m \cdot [\Sigma_g]_{\mathbb{Z}}}_{{\mathbb{Z}}}}{m} \Bigm| m \in {\mathbb{N}}_{>0} \Bigr\}.$$ Let $m \in {\mathbb{N}}_{>0}$ and let $c = \sum_{j=1}^k a_j\sigma_j \in
C_2(\Sigma_g;{\mathbb{Z}})$ be a cycle with $[c] = m \cdot
[\Sigma_g]_{\mathbb{Z}}$ and $a_1, \dots, a_k \in \{-1,1\}$ as well as $|c|_1 = k$. Because $c$ is a cycle, we can find a matching of the edges (and their signs) in the simplices of $c$ such that the associated semi-simplicial set is a two-dimensional pseudo-manifold. As no proper singularities at the vertices can occur in dimension $2$, this pseudo-manifold leads to a manifold, hence a surface. In other words, there is an oriented compact surface $\Sigma$ (which we may assume to be connected) and a continuous map $f \colon \Sigma \longrightarrow \Sigma_g$ with $$H_2(f;{\mathbb{Z}}) ([\Sigma]_{\mathbb{Z}}) = [c] = m \cdot [\Sigma_g]_{\mathbb{Z}}\in H_2(\Sigma_g;{\mathbb{Z}})
\qand
\sv \Sigma_{\mathbb{Z}}\leq k = |c|_1.$$ Smoothly approximating $f$ and looking at the corresponding harmonic representative shows that [@eellswood p. 264] $$m = |\deg f|
\leq \frac{g(\Sigma) - 1}{g - 1}.$$ Therefore, we obtain $$\frac{|c|_1}{m}
\geq \frac{\sv{\Sigma}_{\mathbb{Z}}}m
\geq \frac{\bigl(4 g(\Sigma) - 2\bigr) \cdot (g - 1)}{g(\Sigma) - 1}
\geq 4 g - 4.$$ Taking the infimum over all such cycles $c$ proves the estimate.
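The final inequality in this chain is elementary: writing $h := g(\Sigma)$ (note that $h - 1 \geq m \cdot (g-1) \geq 1$ by the degree bound), it reads

```latex
\frac{\bigl(4h - 2\bigr) \cdot (g-1)}{h - 1} \geq 4g - 4
\quad\Longleftrightarrow\quad
4h - 2 \geq 4 \cdot (h - 1),
```

which always holds.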
On mod $p$ and $p$-adic approximation of simplicial volume
----------------------------------------------------------
Let $M$ be an oriented compact connected manifold and let $F(M)$ denote the set of all (isomorphism classes of) finite connected coverings of $M$. Moreover, let $R$ be a seminormed ring. Then the *stable $R$-simplicial volume* of $M$ is defined by $$\| M,\partial M \|^\infty_R := \inf_{(p \colon N \rightarrow M) \in F(M)} \frac{\sv{N,\partial N}_R}{|\deg p|}.$$ Similarly to the case of Betti numbers or logarithmic torsion, one might wonder for which manifolds $M$ and which coefficients $R$ we have $\|M,\partial M\| = \|M,\partial M\|^\infty_R$. This question has been studied for ${\mathbb{Z}}$-coefficients [@FFM; @FLPS; @loehrg; @fauser; @ffl; @flmq] and to a much lesser degree for ${\mathbb{F}}_p$-coefficients [@loehfp]. However, it was, for instance, not even known whether the simplicial volume of surfaces satisfies mod $p$ approximation. With the methods developed in the previous section, we can solve this problem for surfaces:
\[rem:stable\] The simplicial volume of surfaces satisfies integral, mod $p$, and $p$-adic approximation by the corresponding normalised simplicial volumes of finite coverings: Let $g \in {\mathbb{N}}_{\geq 1}$ and let $p$ be a prime. Then we have $$\begin{aligned}
\| \Sigma_g \|
& = 4g - 4
= \|\Sigma_g\|^\infty_{{\mathbb{Z}}}
= \|\Sigma_g\|^\infty_{({\mathbb{F}}_p)}
= \|\Sigma_g\|^\infty_{{\mathbb{Z}}_p}.
\end{aligned}$$ The first two equalities are contained in Remark \[rem:new-computation\]. Using Proposition \[prop:surfacesupper\] and Corollary \[cor:surfaces-p-adic\], we can (in the same way) prove the last two equalities.
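For the last two equalities, the computation is the same as in Remark \[rem:new-computation\]: every connected $k$-sheeted covering of $\Sigma_g$ is $\Sigma_{g_k}$ with $g_k = k \cdot (g-1) + 1$, so Corollary \[cor:surfaces-p-adic\] gives

```latex
\frac{\sv{\Sigma_{g_k}}_{{\mathbb{Z}}_p}}{k}
  = \frac{4 g_k - 2}{k}
  = 4 \cdot (g-1) + \frac{2}{k}
  \xrightarrow{\; k \to \infty \;} 4g - 4,
```

while every term of this sequence is at least $4g - 4$; the ${\mathbb{F}}_p$-case follows in the same way from $\sv{\Sigma_{g_k}}_{({\mathbb{F}}_p)} = 4g_k - 2$.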
Let $M$ be an oriented closed connected aspherical $3$-manifold. Then it is known that [@flmq] $$\|M\|^\infty_{{\mathbb{Z}}} = \frac{\mathrm{hypvol}(M)}{v_3}.$$ Hence, by Corollary \[cor:sandwichp\], we also have $$\|M\|^\infty_{({\mathbb{F}}_p)}
\leq \|M\|^\infty_{{\mathbb{Z}}_p}
\leq \|M\|^\infty_{{\mathbb{Z}}}
= \frac{\mathrm{hypvol}(M)}{v_3}$$ for all primes $p$.
In the context of homology torsion growth, it would be interesting to determine whether $\| M\|^\infty_{({\mathbb{F}}_p)} = \|
M\|^\infty_{{\mathbb{Z}}_p} = \|M\|$ holds for all oriented closed connected aspherical $3$-manifolds.
Let $p$ be an odd prime. Avramidi, Okun, Schreve established that the ${\mathbb{F}}_p$-Singer conjecture fails (in all high enough dimensions) [@avramidiokunschreve], i.e., there exist oriented closed connected aspherical $d$-manifolds $M$ with residually finite fundamental group and an $n \neq d/2$ such that the associated ${\mathbb{F}}_p$-Betti number gradient in dimension $n$ is non-zero.
By Corollary \[cor:bettiZp\], also the mod $p$ and the $p$-adic simplicial volume gradients are non-zero. In particular, the stable integral simplicial volume of $M$ is non-zero.
It would be interesting to determine whether the classical simplicial volume of $M$ is zero or not.
The Betti number estimates from Section \[subsec:betti\] give corresponding estimates between the homology gradients and the stable integral simplicial volumes over the given seminormed ring. These can be turned into homology gradient estimates for groups as follows: If $G$ is a (discrete) group that admits a finite model $X$ of the classifying space $BG$, we can embed $X$ into a high-dimensional Euclidean space ${\mathbb{R}}^N$ and then thicken the image of the embedding to a compact manifold $M$ with boundary, which is homotopy equivalent to $X$ and hence has the same homology (gradients) as the group $G$. One can then study the behaviour of simplicial volumes of $M$ to get upper bounds for homology gradients of $G$.
Non-values
==========
Analogously to the case of classical simplicial volume [@heuerloeh_trans], we have:
\[thm:nonvalue\] Let $p \in {\mathbb{N}}$ be prime and let $A \subset {\mathbb{N}}$ be a set that is recursively enumerable but not recursive. Then there is *no* oriented closed connected manifold $M$ whose simplicial volume $\sv{M}_{{\mathbb{Q}}_p}$ equals $$2 - \sum_{n \in {\mathbb{N}}\setminus A} 2^{-n}.$$ The same statement also holds for $\sv{M}_{{\mathbb{Z}}_p}$.
The proof is based on the following notion: A real number $x \in {\mathbb{R}}$ is *right-computable* if the set $\{ a
\in {\mathbb{Q}}\mid x < a\}$ is recursively enumerable [@zhengrettinger]. For example, all algebraic numbers are (right-)computable [@eisermann Section 6] and there are only countably many right-computable real numbers.
The numbers in this theorem are known to be *not* right-computable: This can be easily derived from known properties of (right-)computable numbers and Specker sequences [@weihrauch]. Therefore, this theorem is a direct consequence of the observation in Proposition \[prop:rightcomp\] below.
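The mechanism can also be sketched in code (a hedged illustration: the function and enumerator names below are ours, and a genuine Specker number requires a non-recursive recursively enumerable set $A$, which no terminating program can exhibit). Since $\sum_{n \in {\mathbb{N}}} 2^{-n} = 2$, the number in Theorem \[thm:nonvalue\] equals $\sum_{n \in A} 2^{-n}$; enumerating $A$ yields increasing lower bounds (the number is left-computable), while no algorithm can produce rational upper bounds converging to it.

```python
from fractions import Fraction

def lower_approximations(enumerate_A, steps):
    """Emit increasing lower bounds of sum_{n in A} 2^{-n} by
    enumerating A.  This shows left-computability; for a
    non-recursive r.e. set A the number is not right-computable."""
    total = Fraction(0)
    seen = set()
    bounds = []
    for i, n in enumerate(enumerate_A()):
        if i >= steps:
            break
        if n not in seen:          # each n in A contributes 2^{-n} once
            seen.add(n)
            total += Fraction(1, 2 ** n)
        bounds.append(total)       # monotone lower bounds of the limit
    return bounds

# Toy stand-in enumerator (repetitions allowed, as in a genuine enumeration);
# a real Specker sequence would enumerate a non-recursive r.e. set.
def toy_A():
    yield from [1, 3, 1, 4]

bounds = lower_approximations(toy_A, 4)
# bounds == [1/2, 5/8, 5/8, 11/16]
```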
\[prop:rightcomp\] Let $p \in {\mathbb{N}}$ be prime and let $M$ be an oriented closed connected manifold. Then the real numbers $\sv{M}_{{\mathbb{Q}}_p}$ and $\sv{M}_{{\mathbb{Z}}_p}$ are right-computable.
We proceed as in the corresponding proof for ordinary simplicial volume [@heuerloeh_trans Theorem E]: The same combinatorial argument as for the integral norm $\|\cdot\|_{1,{\mathbb{Z}}}$ [@heuerloeh_trans Lemma 4.4] shows that the set $$S :=
\bigl\{ (m,a) \in {\mathbb{N}}\times {\mathbb{Q}}\bigm| \|m \cdot [M]_{{\mathbb{Z}}}\|_{{\mathbb{Z}},|\cdot|_p} < a
\bigr\}
\subset {\mathbb{N}}\times {\mathbb{Q}}$$ is recursively enumerable. In particular, $$\bigl\{ a \in {\mathbb{Q}}\bigm| \|[M]_{{\mathbb{Z}}}\|_{{\mathbb{Z}},|\cdot|_p} < a
\bigr\}
= \mathrm{pr}_2 \bigl(S \cap \bigl( \{1\} \times {\mathbb{Q}}\bigr)\bigr)$$ is recursively enumerable, which means that $\sv{M}_{{\mathbb{Z}}_p} = \|
[M]_{\mathbb{Z}}\|_{{\mathbb{Z}},|\cdot|_p}$ (Corollary \[cor:densityp\]) is right-computable.
Moreover, in combination with Proposition \[prop:scalingp\] and Corollary \[cor:densityp\], we obtain that $$\begin{aligned}
\bigl\{ a \in {\mathbb{Q}}\bigm| \sv{M}_{{\mathbb{Q}}_p} < a \bigr\}
& = \Bigl\{ a \in {\mathbb{Q}}\Bigm| \exi{m \in {\mathbb{N}}_{>0}}
\|p^m \cdot [M]_{{\mathbb{Z}}}\|_{{\mathbb{Z}},|\cdot|_p} < \frac a{p^m} \Bigr\}
\\
& = \Bigl\{ a \in {\mathbb{Q}}\Bigm| \exi{m \in {\mathbb{N}}_{>0}} \Bigl( p^m, \frac a{p^m}\Bigr) \in S
\Bigr\}.
\end{aligned}$$ Because $S$ is recursively enumerable, this set is also recursively enumerable. Hence, $\sv{M}_{{\mathbb{Q}}_p}$ is right-computable.
The same arguments can also be used to show the corresponding results for oriented compact connected manifolds with boundary.
Walking in circles is not common for dogs. It can indicate a health problem that requires medical attention. Walking in circles and disorientation can be the result of pain, ear infection, or head injuries.
Dogs communicate with their owners via their behaviors and body language. It is troubling for dog owners to understand what their pooch is trying to say when it displays unusual behaviors.
If your dog is walking in circles and is disoriented, there can be an underlying medical condition that requires your attention. There can be neurological, psychological, or even physical issues with your dog.
It is important to note that dogs usually circle before they sit down. This is a common behavior among dogs and is nothing to worry about. Your dog can be suffering from an issue if the spinning is consistent and compulsive.
This article talks about the causes of circling in dogs and what causes them to become disoriented.
Contents
Dog Walks in Circles and is Disoriented
Dogs occasionally walk in circles to remove something from their behinds or chase their tails. Frequent circling and disorientation indicate a sickness or health problem. Pain, injuries, or cognitive dysfunction are probable causes of disorientation and excessive circling.
Every dog owner has seen their dog walking in circles at some point in their lives. Dogs circle themselves when they are chasing their tails or when they want to reach something on their rear end.
Occasional circling is not something to worry about.
If the circling occurs frequently, coupled with disorientation, it can be a sign of an underlying issue. There are several causes of disorientation in dogs. The most common are head injuries.
Check your dog for any signs of injuries and provide proper treatment if that is the case.
Some other symptoms which indicate a medical issue instead of natural behavior include:
- Falling or tripping over
- Blindness
- Panting
- Confusion
- Head tilting
- Vomiting
Your pooch is suffering from a medical condition if it displays any of these signs along with circling.
6 Reasons Why Your Dog Walks in Circles and is Disoriented
There are several reasons why dogs walk in circles and become disoriented. Head injuries, ear infections, pain, and vestibular syndrome are the most common causes of disorientation in dogs. It is crucial to identify the cause behind the problem so proper care and treatment can be provided.
Disorientation is not common among dogs and indicates a serious issue. Dogs usually walk in circles before they lay down or poop. Circling on the ground, sofa, or bed helps make dogs more comfortable lying down.
Obsessive and frequent circling and disorientation can occur for various reasons.
1. Ear Infections
Ear infections are one of the most common causes of uncontrollable circling in dogs. These infections can cause severe discomfort to your dog, making it shake its head and lose its balance.
There are many other symptoms of ear infections:
- Swelling and Redness.
- Discharge from the ears.
- Scratching their ears frequently.
- Shaking their head.
These infections can affect your dog’s inner, outer, or middle ear. Allergies, bacteria, mites, or hypothyroidism can cause ear infections which are painful and frustrating for dogs.
Consult a veterinarian as soon as possible if your dog has contracted an ear infection. These infections can spread to other parts of the ear and can cause severe complications if left untreated.
2. Pain and Injuries
Pain from injuries can make your dog run in circles and exhibit unusual behaviors. Eardrum damage and head trauma can leave your dog disoriented, causing circling.
Dogs are good at hiding their pain, making it difficult to know if they have an injury. Identify the signs of a physical issue if your dog is circling and acting strangely:
- Heavy breathing.
- Tucking its tail.
- Loss of appetite.
- Whimpering when touched or lifted.
If your dog shows any signs of an injury, take it straight to a veterinarian to provide proper treatment.
3. Canine Cognitive Dysfunction Syndrome
Cognitive dysfunctions hinder dogs’ ability to maintain normal bodily functions. Your dog can be feeling disoriented due to Canine Cognitive Dysfunction Syndrome, also known as Dog Dementia.
The effects of this condition include the following:
- Decreased alertness.
- Forgetfulness.
- Confusion.
- Disorientation.
This condition is yet to be understood fully. A lot more research is required to understand the causes and implications of this condition, but a healthy diet with all the required nutrients can help avoid it.
4. Vestibular Syndrome
The vestibular system comprises the middle and inner ear. This system is responsible for maintaining the normal balance of the body. Any dysfunction in the vestibular system can cause your dog to roam in circles and lose its balance.
Observe your dog’s behavior and notice signs of vomiting, drooling, head tilting, or tripping over. Proper veterinary care is required in the case of vestibular disorders to prevent further complications.
5. Stroke
It is possible that your dog is walking in circles due to a stroke. Underlying diseases and medical conditions can lead to strokes in dogs, but the exact causes behind them are not clear. Rapid eye movements, limping, and seizures are also signs of a stroke.
Strokes can be life-threatening if left uncared for. Take your dog to a veterinarian as soon as possible to provide the required care and treatment.
6. Brain Inflammation
Necrotizing Meningoencephalitis is a type of brain disease which causes inflammation in the brain. This disease causes several issues in dogs with circling being one of the most common. It is an autoimmune disease, commonly known as meningitis.
Other symptoms of brain inflammation in dogs include the following:
- Behavioral changes
- Seizures
- Confusion
This disease is rare among dogs and requires prompt medical care. A veterinarian can check your dog for signs of Necrotizing Meningoencephalitis and provide the necessary treatment.
Is It Normal for Dogs to Walk in Circles?
It is normal for dogs to occasionally walk in circles. Dogs exhibit these behaviors when they are getting ready to lie down to make their sleeping place more comfortable. Excessive circling or disorientation is not normal and should be addressed.
Dogs exhibit strange behaviors sometimes. Walking in circles is one such behavior.
Dogs usually circle around to make their sleeping place more comfortable. This is also the reason why dogs scratch the ground before laying down. Chasing their tails or trying to remove something from their backs are other reasons for walking in circles.
If your dog is circling rapidly and obsessively, there is an underlying problem that needs to be rectified. Disorientation is not normal for dogs and indicates an injury to the head or the vestibular system.
How to Stop Dogs from Walking in Circles
To stop your dog from walking in circles, examine it for any signs of physical injuries or ear infections that indicate a medical issue. Contact a veterinarian to provide adequate treatment and avoid further complications.
Watching your dog walk around in circles can be frustrating.
There are several steps you can take to stop your dog from displaying this behavior.
- Observe your dog’s behavior to determine whether the circling is caused by a medical issue.
- Check for any signs of injuries on your dog’s head.
- Ensure nothing is stuck on their rear end.
- Check your dog’s ear for infections and mites.
- Identify whether your dog is in pain.
- Test your dog’s ability to focus.
If you are unable to identify the stressor for your dog’s behavior, book an appointment with a veterinarian to get your pooch checked up for any medical issues.
Conclusion
Walking in circles occasionally is not something to worry about. It is a cause for concern when the circling is frequent, coupled with disorientation.
Observe and identify the probable cause for your canine’s unusual behavior. Occasional circling or disorientation is not a cause for concern. If the condition persists for a couple of days, taking your dog to a vet is the right course of action.
FAQs
Why is My Dog Walking in Circles and Standing in Corners?
Canine Cognitive Dysfunction Syndrome can be a probable cause if your dog walks in circles and stands in corners. This condition is more likely to affect older dogs and can cause disorientation and forgetfulness.
What Causes Dogs to Become Disoriented?
Dogs become disoriented when their vestibular system does not function effectively. If your dog has experienced head trauma or ruptured its eardrums, it can cause disorientation and loss of balance in dogs. Consult a veterinarian to provide proper care and treatment to your pooch. | https://misfitanimals.com/dogs/dog-walks-in-circles-and-is-disoriented/ |
Sixteen-year-old Lil travels to Crete in hopes of connecting with her deceased mother’s mysterious past.
While on Crete attending a Future Leaders International conference, Lil finds herself in the middle of a mystery of mythological proportions. Lil and teammates Sydney, Charlie, and Kat attend various workshops intended to test their intelligence, creativity, and strength. Together they must solve a series of riddles and compete in various challenges in order to win scholarships. But when their mentor, Bente, is killed during a burglary, the four girls must use their combined talents to solve the mystery of the labyrinth hidden beneath the manor house where the conference is held and thwart a group of religious zealots determined to steal the ancient artifacts hidden within. An interesting plot is undermined by forgettable characters, stilted dialogue, and uneven pacing. And while the intermingling of mythology and modern times is alluring, the concept has been attempted before and executed more successfully—it’s unlikely that readers won’t be making comparisons to Percy Jackson. The setting is lush and the labyrinth, suitably spooky, but unnecessary details and confusing descriptions make following the action difficult. | https://www.kirkusreviews.com/book-reviews/erin-e-moulton/keepers-of-the-labyrinth/ |
Everyone from the ACLU to the Koch brothers wants to reduce the number of people in prison and in jail. Liberals view mass incarceration as an unjust result of a racist system. Conservatives view the criminal justice system as an inefficient system in dire need of reform. But both sides agree: Reducing the number of people behind bars is an all-around good idea.
To that end, AI—in particular, so-called predictive technologies—has been deployed to support various parts of our criminal justice system. For instance, predictive policing uses data about previous arrests and neighborhoods to direct police to where they might find more crime, and similar systems are used to assess the risk of recidivism for bail, parole, and even sentencing decisions. Reformers across the political spectrum have touted risk assessment by algorithm as more objective than decision-making by an individual. Take the decision of whether to release someone from jail before their trial. Proponents of risk assessment argue that many more individuals could be released if only judges had access to more reliable and efficient tools to evaluate their risk.
Joi Ito is an Ideas contributor for WIRED, and his association with the magazine goes back to its inception. Ito has been recognized for his work as an activist, entrepreneur, venture capitalist and advocate of emergent democracy, privacy and internet freedom. He is coauthor with Jeff Howe of Whiplash: How to Survive Our Faster Future. As director of the MIT Media Lab and a professor of the practice of media arts and sciences, he is currently exploring how radical new approaches to science and technology can transform society in substantial and positive ways. His biggest challenge now, however, is keeping up with his nine-month-old child.
Yet a 2016 ProPublica investigation revealed that not only were these assessments often inaccurate, the cost of that inaccuracy was borne disproportionately by African American defendants, whom the algorithms were almost twice as likely to label as a high risk for committing subsequent crimes or violating the terms of their parole.
We’re using algorithms as crystal balls to make predictions on behalf of society, when we should be using them as a mirror to examine ourselves and our social systems more critically. Machine learning and data science can help us better understand and address the underlying causes of poverty and crime, as long as we stop using these tools to automate decision-making and reinscribe historical injustice.
Most modern AI requires massive amounts of data to train a machine to more accurately predict the future. When systems are trained to help doctors spot, say, skin cancer, the benefits are clear. But, in a creepy illustration of the importance of the data used to train algorithms, a team at the Media Lab created what is probably the world’s first artificial intelligence psychopath and trained it with a notorious subreddit that documents disturbing, violent death. They named the algorithm Norman and began showing it Rorschach inkblots. They also trained an algorithm with more benign inputs. The standard algorithm saw birds perched on a tree branch, Norman saw a man electrocuted to death.
So when machine-based prediction is used to make decisions affecting the lives of vulnerable people, we run the risk of hurting people who are already disadvantaged—moving more power from the governed to the governing. This is at odds with the fundamental premise of democracy.
States like New Jersey have adopted pretrial risk assessment in an effort to minimize or eliminate the use of cash-based bail, which multiple studies have shown is not only ineffective but also often deeply punitive for those who cannot pay. In many cases, the cash bail requirement is effectively a means of detaining defendants and denying them one of their most basic rights: the right to liberty under the presumption of innocence.
While cash bail reform is an admirable goal, critics of risk assessment are concerned that such efforts might lead to an expansion of punitive nonmonetary conditions, such as electronic monitoring and mandatory drug testing. Right now, assessments provide little to no insight about how a defendant’s risk is connected to the various conditions a judge might set for release. As a result, judges are ill-equipped to ask important questions about how release with conditions such as drug testing or GPS-equipped ankle bracelets actually affect outcomes for the defendants and society. Will, for instance, an ankle bracelet interfere with a defendant’s ability to work while awaiting trial? In light of these concerns, risk assessments may end up simply legitimizing new types of harmful practices. In this, we miss an opportunity: Data scientists should focus more deeply on understanding the underlying causes of crime and poverty, rather than simply using regression models and machine learning to punish people in high-risk situations.
Such issues are not limited to the criminal justice system. In her latest book, Automating Inequality, Virginia Eubanks describes several compelling examples of failed attempts by state and local governments to use algorithms to help make decisions. One heartbreaking example Eubanks offers is the use of data by the Office of Children, Youth, and Families in Allegheny County, Pennsylvania, to screen calls and assign risk scores to families that help decide whether case workers should intervene to ensure the welfare of a child.
To assess a child’s particular risk, the algorithm primarily “learns” from data that comes from public agencies, where a record is created every time someone applies for low-cost or free public services, such as the Supplemental Nutrition Assistance Program. This means that the system essentially judges poor children to be at higher risk than wealthier children who do not access social services. As a result, the symptoms of a high-risk child look a lot like the symptoms of poverty, the result of merely living in a household that has trouble making ends meet. Based on such data, a child could be removed from her home and placed into the custody of the state, where her outcomes look quite bleak, simply because her mother couldn’t afford to buy diapers.
Rather than using predictive algorithms to punish low-income families by removing their children, Eubanks argues we should be using data and algorithms to assess the underlying drivers of poverty that exist in a child’s life and then ask better questions about which interventions will be most effective in stabilizing the home.
This is a topic that my colleague Chelsea Barabas discussed at length at the recent Conference on Fairness, Accountability, and Transparency, where she presented our paper, “Interventions Over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” In the paper, we argue that the technical community has used the wrong yardstick to measure the ethical stakes of AI-enabled technologies. By narrowly framing the risks and benefits of artificial intelligence in terms of bias and accuracy, we’ve overlooked more fundamental questions about how the introduction of automation, profiling software, and predictive models connect to outcomes that benefit society.
To reframe the debate, we must stop striving for “unbiased” prediction and start understanding causal relationships. What caused someone to miss a court date? Why did a mother keep a diaper on her child for so long without changing it? The use of algorithms to help administer public services presents an amazing opportunity to design effective social interventions—and a tremendous risk of locking in existing social inequity. This is the focus of the Humanizing AI in Law (HAL) work that we are doing at the Media Lab, along with a small but growing number of efforts involving the combined efforts of social scientists and computer scientists.
This is not to say that prediction isn’t useful, nor is it to say that understanding causal relationships in itself will fix everything. Addressing our societal problems is hard. My point is that we must use the massive amounts of data available to us to better understand what’s actually going on. This refocus could make the future one of greater equality and opportunity, and less a Minority Report–type nightmare. | https://www.wired.com/story/ideas-ai-as-mirror-not-crystal-ball/ |
This website is currently undergoing some development work, and should any issues arise with the form temporarily being unavailable or out of action, please call 0833 332 8900 to make your payment over the phone instead. We anticipate that it will be fully operational on Thursday 2nd July 2018. Apologies for any inconvenience caused.
Please complete the form below and click Next to submit your payment. Please note that we are experiencing some issues, and once your payment has been made you may see a 'page not found' error. Your payment should have gone through, and a confirmation sent to your email address. Should any issues arise with the form temporarily being unavailable or out of action, please call 0833 332 8900 to make your payment over the phone instead. Apologies for any inconvenience caused. | https://business-watch.co.uk/parking-notice-payment-form/ |
This month begins a series of articles. We will explore what is sometimes called the Life Energy Field or aura. Some people speak of Universal Energy. The aura is a subset of Universal Energy and sometimes called the Human Energy Field.
Have you ever found yourself thinking “I get good vibes from that person” or “Wow, there was some bad energy in that meeting”? Ever sense that someone was present even before you looked up? All of these situations demonstrate what happens when your aura comes into contact with other people’s auras. It happens all the time and we are developing new sensitivities to it. Just as an electrocardiogram measures the electrical current of the heart, there is now technology which measures the electromagnetic field around the body — which is what the aura is. Just as electrical energy fluctuates, so, too, does the energy of an aura.
The aura is actually comprised of seven layers. It is an organized system of points, pulses, lines and spirals. The layers are not sequential, as a layer cake is, but rather interpenetrate one another. The vibrations of guitar strings combine and come together to form sound even though each produces a sound by distinct vibration. The auric fields work similarly. We know that strings vibrate at different rates depending on their thickness and the materials from which they are made.
The different fields of the auric layer also vibrate at different rates and have a different composition. The first, third, fifth and seventh layers are highly structured. The second, fourth and sixth layers are more fluid. The chakras also extend out through all the layers although each layer also has an association with a particular chakra. In some ways, each auric layer has a set of seven chakras because at each layer the chakra is operating at a different vibration. You may think about this like clouds in the sky. At different heights, the clouds have different characteristics. One cloud may extend through different heights and will have the characteristics of that height: vapors or water droplets or ice crystals. The texture might be fluid or hard. They might be thick or thin. Dense or fluffy.
Next week we will begin exploring each layer. | http://homegrownandhealthy.com/exploration-of-the-chakra-system/ |
Everything in the Universe is distributed and scattered around: stars permeate our galaxy, we all live in different parts of the world and furniture in your house is positioned in a seemingly organised fashion throughout your rooms. But it’s the relationship of these distributions that can make them interesting. How are distributions concentrated? By how much are they spread? Why are the distributions positioned in a certain way?
In this post on Displaying Data, I will be looking at graphical representations of ranged and distributed data.
Histograms
While the graph below may look like your ordinary Bar Graph, there’s a distinct difference: Bar Graphs visualise and compare categorical data, while Histograms visualise the distribution of data over a continuous interval or time period. Each bar in a Histogram represents the tabulated frequency at each interval (1, 2, 3…).
Therefore, Histograms are useful for giving an estimate of where values are concentrated in the data. They also help show where the extremes are and whether there are any gaps or anomalies in the data.
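The frequency tabulation a Histogram is built on can be sketched in a few lines of Python; the bin width and sample ages below are invented for illustration:

```python
from collections import Counter

def histogram(values, bin_width):
    """Tabulate frequencies over continuous intervals of size bin_width.

    Each value falls into the interval [k*bin_width, (k+1)*bin_width).
    """
    counts = Counter(int(v // bin_width) for v in values)
    return {(k * bin_width, (k + 1) * bin_width): counts[k]
            for k in sorted(counts)}

# Ages of visitors, binned into 10-year intervals
ages = [3, 8, 12, 15, 21, 22, 24, 27, 33, 38, 41, 67]
for interval, freq in histogram(ages, 10).items():
    print(interval, "#" * freq)
```

The printed "bars" make the concentration (the 20–30 bin) and the gap (the empty 50–60 bin) immediately visible, which is exactly what a Histogram is for.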
Population Pyramids
In this type of chart, two Histograms are paired back-to-back to visualise the sex and age distribution of a population. One side represents male, the other female. The x-axis is used to plot the number of people in a population, while the y-axis divides the graph into age groups.
Population Pyramids are useful for detecting changes or differences in population patterns. Multiple Population Pyramids can be used alongside each other to compare different populations.
The shape of a Population Pyramid can say a lot about the state that particular population is in. For example, a Population Pyramid with a wider top half and narrower base, suggests an ageing population. You can see some recent, real-life examples of this in LSE Cities’ article Urban Age Cities Compared.
Point or Stripe Plots
Here points or stripes are plotted against a single axis. Where the points (or stripes) cluster most densely is where the data is most concentrated. This type of graph is also good for simply showing how the data is distributed along the scale.
Span Charts
You might know this chart under another name: range bar/column graph, difference graph or as a high-low graph. But its function is all the same: to display dataset ranges between a minimum value and a maximum one, which is shown by the start and end of each bar.
This makes Span charts ideal for making comparisons between simple ranges. However, Span charts are limited: they only focus the reader on the extreme values and give no information on the values in between, the averages, or the distribution of the data. This is where the next chart is useful.
Box & Whisker Plots
Typically used in descriptive statistics, Box & Whisker Plots are a convenient way of displaying data through their quartiles. This chart is divided into different ‘parts’ to give a more detailed analysis of the data: the center line (median), the box (upper to lower quartiles) and the whiskers (upper to lower extreme).
Although Box & Whisker are hard to read without training, once you understand them you can see in the data the average, median and each percentile. You will be also able to detect any outliers, if the data is symmetrical, how tightly the data is grouped and if the data is skewed in a particular direction.
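The "parts" of a Box & Whisker Plot can be computed directly with Python's standard `statistics` module. The sample data here is invented, and the whiskers follow the common convention of extending to the most extreme points within 1.5 × IQR of the box:

```python
import statistics

def five_number_summary(data):
    """Median, quartiles and whisker extremes used in a box plot."""
    # method="inclusive" uses the common linear-interpolation definition
    q1, median, q3 = statistics.quantiles(data, n=4, method="inclusive")
    iqr = q3 - q1
    # Whiskers: most extreme points still within 1.5 * IQR of the box
    low = min(x for x in data if x >= q1 - 1.5 * iqr)
    high = max(x for x in data if x <= q3 + 1.5 * iqr)
    outliers = [x for x in data if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]
    return {"low": low, "q1": q1, "median": median, "q3": q3,
            "high": high, "outliers": outliers}

print(five_number_summary([2, 4, 4, 5, 6, 7, 8, 9, 11, 30]))
```

Note how the value 30 is flagged as an outlier rather than stretching the whisker, which is why box plots are so good at exposing extreme points.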
Violin Plots
Similar to a Box & Whisker Plot, except that they also show the probability density of the data at different values by using a pair of conjoined kernel density plots over a box plot.
Stem & Leaf Plots
Stem & Leaf Plots (or Stemplots) are a way of organising and displaying data via their place value to show the data distribution. Stemplots don’t visually encode the data into a graph; instead they display the raw data as a useful reference tool. A good example of this is the public transport schedule below, which displays the train times for both north- and south-bound trains.
So Stemplots help give a quick overview of the data distribution, are useful for highlighting outliers and finding the mode.
The major downside to Stemplots is that they’re limited in the size of dataset they can handle. If there’s too little data, the Stemplot becomes useless. Too much and it becomes over-cluttered.
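As a rough sketch, a stem-and-leaf grouping can be built by splitting each value into a leading-digit stem and a last-digit leaf; the "train times" below are made up:

```python
def stem_and_leaf(values):
    """Group each positive integer's leaf (last digit) under its stem."""
    plot = {}
    for v in sorted(values):
        stem, leaf = divmod(v, 10)  # e.g. 44 -> stem 4, leaf 4
        plot.setdefault(stem, []).append(leaf)
    return plot

times = [32, 35, 41, 44, 44, 47, 52, 58, 63]
for stem, leaves in stem_and_leaf(times).items():
    print(f"{stem} | {' '.join(map(str, leaves))}")
```

Because the leaves stay sorted within each stem, the mode and any outliers can be spotted just by scanning the rows.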
Dot Distribution Maps
This type of map helps show how data is distributed geographically, in order to detect spatial patterns, which might relate to the location. The clustering of points on the map displays where values are concentrated geographically.…
If you are considering visualising distributed or ranged data and are still unsure with the best visualisation to apply, please feel to get in contact with us and we’ll be happy to talk you. | http://visu.al/blog/displaying-data-distribution-ranges/ |
Although pelicans are awkward looking birds, they are very graceful in flight. When flying, they flap their wings only 1.3 times per second while a ruby-throated hummingbird beats its wings 50 to 70 times per second.
The white pelican does not dive for its food. Instead it dips its bill under water to gather fish. Several birds may gather in a circle to herd fish together. | http://www.brzoo.org/animals/meet-our-animals/white-pelican |
China raises speed of bullet train
BEIJING - China yesterday increased the maximum speed of bullet trains on the Beijing-Shanghai line to 350 kilometers per hour, six years after a fatal accident led to a speed cap.
The speed limit had been reduced to 300 kilometers per hour after 40 people died in a high-speed train crash near Wenzhou in July 2011.
The acceleration cuts the 1,318-kilometre (819-mile) Beijing-Shanghai journey to 4 hours and 28 minutes, saving passengers nearly an hour.
Starting on Sept. 21, a total of 14 trains were running between Beijing and Shanghai at the higher speed daily, the official Xinhua news agency reported.
“These trains are so popular that the tickets for today already sold out a week ago,” Xinhua cited Huang Xin, an official with the China Railway Corporation, as saying.
The connection between the two metropolises is one of the country’s busiest, carrying more than 100 million passengers a year. China’s high-speed rail network is the largest in the world. | https://www.hurriyetdailynews.com/china-raises-speed-of-bullet-train--118230 |
Below, you'll see any text that was highlighted with comments from the reviewer.
Message to Readers
just changed the title, it's actually draft #4
Peer Review
Definitely the characters and the imagery. They were both so interesting!
I really liked the way you described Tag's need to read. And the descriptions of the book store were just *chef's kiss*. I think that you did a great job with both his inner and exterior worlds.
All the scenes flowed really well together. I know you're trying to cut more, so the only scene that I think you could shorten is maybe the part where Brooks is talking about Elly's brothers since it doesn't really seem that it has much relevance. Also I liked the bit about Tag getting his nickname from playing Tag with Brooks, but I think it's a bit unnecessary since you can make the contention that Tag is short for Montag without having a story behind it, and this could also help you shorten the story a bit.
I think you created the setting really well by focusing on the most important details, and I especially liked the part where you're describing the smells.
I really enjoyed reading this! I loved Tag and Brooks' friendship and I gasped when Tag saw the girl at the end. Your writing style is so nice as well!
Reviewer Comments
I saw that pin too! It is an interesting idea, and there is actually a book that follows that same premise (I think that the pin mentioned it?) It's called Between the Lines by Jodi Picoult and I believe there is a sequel as well. I honestly thought that it was pretty average, but you might be interested in it. Back to the story though, good job! | https://writetheworld.com/groups/1/shared/200097/version/411017/peer_review/show/71708 |
Foreign Portfolio Investment Transactions Yield Net Inflows in August
Registered investments for the month of August 2018 amounted to $1.1 billion, reflecting a 16.9 percent improvement from the $959 million figure in July 2018. Likewise, a 19.7 percent year-on-year growth may be noted from the $936 million level recorded during the previous year.
The United Kingdom, the United States, Singapore, Hong Kong, and Luxembourg were the top five investor economies for the month, with combined share to total at 80.5 percent. About 79.9 percent of investments registered during the month were in listed securities in the Philippine Stock Exchange (PSE) (pertaining mainly to property companies, holding firms, banks, food, beverage and tobacco firms, and telecommunication companies). The balance (19.4 percent) went mostly to Peso government securities (GS), while the remaining amount (less than 1 percent) went to both other Peso denominated debt instruments (OPDIs) and Peso time deposits (Peso TD). Net inflows were noted for transactions in PSE-listed securities ($39 million), Peso GS ($180 million), OPDIs ($6 million), and Peso TD (less than $1 million).
Outflows for the month ($895 million) were lower by 1.2 percent and 9.9 percent, respectively, compared to those recorded in July 2018 ($906 million) and August 2017 ($994 million). The US continued to be the main destination of outflows, receiving 81.5 percent of total remittances.
Overall, transactions for the month yielded net inflows of $226 million, a significant increase from net inflows of $53 million in July 2018 and in contrast with the net outflows of $58 million recorded for August 2017. This may be attributed to investors’ reaction to: (i) good second quarter corporate earnings results, (ii) the forthcoming infrastructure initiatives of the government, and (iii) the recent resumption of trade talks between the US and China, which all lifted market sentiments.
Registration of inward foreign investments with the Bangko Sentral ng Pilipinas (BSP) is optional under the liberalized rules on foreign exchange transactions. The issuance of a BSP registration document entitles the investor or his representative to buy foreign exchange from authorized agent banks and/or their subsidiary/affiliate foreign exchange corporations for repatriation of capital and remittance of earnings that accrue on the registered investment. Without such registration, the foreign investor can still repatriate capital and remit earnings on his investment but the foreign exchange will have to be sourced outside the banking system. | http://www.phstocks.com/foreign-portfolio-investment-transactions-yield-net-inflows-in-august/ |
Famous space landers that left Earth's surface to explore the Moon's mysteries
Soviet Union's Luna 9 was the first spacecraft to make a soft landing on the Moon and send images to Earth. It was launched by PAO S.P. Korolev Rocket and Space Corporation and landed on the Moon on 31 January 1966. The probe proved that the Moon's surface can support the weight of a lander and that an object would not sink into a loose layer of dust as some models predicted. Image: NASA
NASA's Surveyor 1 is the first in a series of seven robotic spacecraft that were sent to the Moon to gather data in preparation for the Apollo missions. It was launched on 30 May 1966 and used three thrusters to slow its descent. Image: NASA
Luna 13 was the Soviet Union's second spacecraft to land successfully on the Moon. It landed on 21 December 1966 and measured the soil's physical and mechanical properties and its radiation characteristics. The mission was carried out by the Lavochkin Association, a Russian aerospace company. Image: NASA
NASA's second spacecraft, Surveyor 3, made it to the Moon's surface on 17 April 1967. Using a surface sampler to study the soil on the Moon, it conducted experiments to see how the lunar surface would react against the weight of an Apollo lunar module. Image: NASA
Surveyor 5 reached the Moon on 8 September 1967. But the mission nearly failed due to a leak in the spacecraft's thruster system. Quick thinking on the part of the engineers salvaged the mission. It discovered that the surface of the Moon is likely basaltic rather than powdery. Image: NASA
Surveyor 6 that landed on 7 January 1968 was the only spacecraft of the series to land in the lunar highland region. It had the most extensive set of instruments, with which it conducted a number of scientific experiments on the lunar soil. Image: NASA
Luna 21 was launched in 1973 by the Soviet Union' Lavochkin Association. It successfully delivered the Lunokhod 2 rover to the surface of the Moon. Luna 21 was sent to the Moon to observe solar X-rays, measure local magnetic fields, and study the mechanical properties of the lunar soil. Image: NASA/Russian Academy of Science
Beresheet, which means "In the Beginning" in Hebrew, was the first attempt made by Israel to land on the Moon's surface. It was carried out by a private Israeli organisation called SpaceIL. This mission was a failure because during the landing, in April 2019, the lander had a massive crash. Image: SpaceIL | |
Saving $30,000 in a year is a lot. It takes a well-thought-out plan and some hard decisions to reach that level of saving. However, it’s within your reach, as I’ll show later in this article.
Here’s the short answer:
To save $30,000 in one year, you need to spend $2,500 less than you earn every month. The most effective way is to make frugal decisions on your most significant fixed expenses.
Saving as much as $30K a year is not for everyone. If you’re trying to figure out how much you should save per year, read this article instead:
How Much Should YOU Save Per Year? (Examples and Charts)
In the rest of the article, I’ll get into the details and nuances of how YOU can achieve this.
I’ll give you what I think is the number one way to improve your finances, and finally, save $30,000 per year.
How YOU Can Save $30,000 In One Year
To make things more practical, we’re gonna break it down month by month.
$30,000/year = $2,500/month
In other words, if you save $2,500 every month, you’ll have stacked up $30K in one year.
Therefore, we can rephrase the original goal of saving $30K/year to saving $2.5K/month.
That’s a lot of money… But it’s far from impossible. The general advice you’ll see floating around is to save 20% of your income. Unless you’re making $150,000 annually, $30K is way over that.
Suggested reading: Is Saving 20% Of Your Income Enough?
I’ll show you, in three simple steps, how to save $30K within a year.
Actually, once you’ve done the work each step requires, you’ll save $30K every year until you undo them!
Steps one and two are all about mapping out where you currently stand, financially speaking. Step three is about closing the gap between where you are and where you want to go (saving $30K/year).
#1: Calculate Your Average Monthly Expenses
Calculating your expenses is similar to making a budget, but not quite the same.
Budgeting is planning for the future, while calculating expenses focuses on the past and present.
This is the basic philosophy behind the steps:
First, we figure out where you are, and then, figure out what to do about it.
Suggested reading: Is Saving 40% Of Your Income Good?
Let’s try to figure out how much, and how, you are spending your money. Follow the steps given below:
- Get a list of your expenses for the last three months. We’ll take the average of three months because it varies a bit from month to month. Lists like this can often be found in your online bank account. If you don’t know how to figure this out, you should start tracking your spending by saving receipts and logging them weekly.
- Sort all the expenses into “categories”. By categories I mean stuff like “housing”, “transportation”, “food”, etc. When budgeting, we’ll call these categories “items”.
- Add all the expenses in each category together. For example, if you had ten different expenses of $14 each on the “transportation” category in the last three months, add it all together and write “$140” as your total expense for transportation. At the end of this, you should have the total amount spent in the last three months in all the different categories.
- Divide all the “total expenses” for each category by three. Remember, we got the expenses from the last three months. We need to divide it by three to get the average monthly expense.
- Add it all together. This will show you your average monthly expenses, based on the last three months.
After completing the five steps listed above, you will know the following:
- How much money you on average spend on each of the categories per month.
- How much money you on average spend in total every month.
I like to sort these facts and numbers into a table. Here’s an example of what I would make (with random numbers):

Category          Average monthly expense
Housing           $1,200
Transportation    $140
Food              $410
Other             $250
Total             $2,000
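The five numbered steps above can be sketched as a short Python script; the receipts, category names and amounts below are invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical receipts from the last three months: (category, amount)
receipts = [
    ("housing", 1200), ("housing", 1200), ("housing", 1200),
    ("transportation", 140), ("transportation", 130), ("transportation", 150),
    ("food", 400), ("food", 450), ("food", 380),
]

# Steps 2-3: sort expenses into categories and total each one
totals = defaultdict(float)
for category, amount in receipts:
    totals[category] += amount

# Step 4: divide by three months to get the monthly average per category
monthly_average = {cat: total / 3 for cat, total in totals.items()}
print(monthly_average)

# Step 5: add it all together for the total average monthly spend
print("Total monthly spend:", sum(monthly_average.values()))
```

Swapping in your own logged receipts gives you both facts the section promises: per-category averages and the overall monthly total.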
If you’re scared by how much money you spend every month, read one of these articles:
- Living On $1,800 A Month: Budget, Lifestyle And More
- Yes, You Can Live On $2,000 A Month! Here’s How & Where:
With your spending habits illuminated, let’s move on:
#2: Calculate Your Average Monthly Income
Income can be more than just your salary. For todays purpose, all money that comes into your account counts as “income”.
It’s also important to get the monthly average income. This means you should include “once a year stuff” like money gifts you might get on Christmas, or Birthdays, a potential tax refund, etc. Just remember to divide it by twelve to get the monthly average!
Just like in step #1, I like to organize this in a table.
Below you’ll see an example, once again with random numbers:

Income source        Average monthly income
Salary               $4,800
Side gigs            $300
Gifts/tax refund     $150
Total                $5,250
If you’re struggling to find all your income sources, you might want to ask your bank for bank statements. If you can, log into your bank’s website, search for all the times money is going into your account(s), and figure out where it comes from.
Once you’ve made something similar to the table above, you’re ready for the final step!
#3: Make A Budget
Now that you know your monthly expenses and income, you’re ready for the final step: making a budget.
Given that you follow this budget, you will ensure that $30,000 in savings accumulate within one year.
Follow these three steps:
- Take your monthly average income and subtract your monthly average expenses. This tells you how much money you have left (or overspend) every month on average. This is your “baseline” monthly savings/debt growth.
- Make a budget such that your total expenses are $2,500 lower than your total income. Do this by adjusting the different expenses in the list you made in step #1 when calculating your monthly expenses. Make some hard decisions, and cut down wherever you're able to do so. More tips on this in the next section!
- Stick to the budget! If you follow the budget for 12 months, you're guaranteed to save $30,000 in one year! (provided your income doesn't decrease and other important factors don't change!)
To summarize, this is what you need to do:
Make a budget that is $2,500 below your monthly income and follow that budget like an obsessed person for 12 months.
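The arithmetic behind that target is simple enough to check in a few lines. The income figure here is a placeholder; substitute your own:

```python
GOAL = 30_000   # yearly savings target in dollars
MONTHS = 12

# How much you must save each month to hit the goal.
required_monthly_savings = GOAL / MONTHS  # $2,500

# Example: with a (made-up) average monthly income of $5,200,
# total budgeted expenses must stay at or below this ceiling.
monthly_income = 5_200
expense_ceiling = monthly_income - required_monthly_savings

# Sticking to that ceiling for 12 months yields the goal.
projected_savings = (monthly_income - expense_ceiling) * MONTHS

print(f"Save ${required_monthly_savings:,.0f} per month")
print(f"Keep expenses at or below ${expense_ceiling:,.0f} per month")
print(f"Savings after one year: ${projected_savings:,.0f}")
```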
As promised, here’s an example of a budget that’ll save $30K in a year:
If you’re too far away from the target, you can consider reading one of these articles instead:
- How To Save $8,000 In One Year
- How To Save $2,000 In One Year
- Yes, Saving $1,500 A Month ($18K/year) Is Good! (and how to do it)
If you’re not that far away, or just really want to reach $30K/year, the next section will help you get there:
Best Way To Reach $30,000 In Savings In One Year:
When most people try saving more money, they opt for “easy” targets and low-hanging fruit. This isn’t just lazy, but also ineffective.
You have to be willing to make sacrifices if you want to save as much as $30,000 in one year. You’re not getting there by slashing your Netflix subscription…
Here’s why:
Cutting out the small things makes a small impact on your finances.
The hands-down best way to go about it, in my experience, is like this:
- Figure out the three items/categories, in order, you spend the most on. Usually, it’s housing, transportation and food, in that order.
- Cut down on the biggest things, in a permanent manner. For example, moving into a cheaper apartment makes you save lots of money (because housing is a huge expense), and you’ll save the money EVERY MONTH without having to do any additional work in the following months.
- Do NOT spend the saved money on other things. Automate your savings such that your disposable income, your “fun money”, is the same as before you cut down on housing/transportation/food.
I’ll show you an example of a single person with “typical” expenses, cutting down on “the big stuff” to illustrate how much you can save:
This hypothetical person saved almost eleven thousand dollars every year by focusing on the big stuff. Here’s how he did it:
He found a 25% cheaper apartment/house to live in that was much closer to work. This saved him 25% on housing and 70% of his transportation expenses. He also learned to cook three to five dinners, and saved 20% on food-related expenses like eating out.
I personally did this, and it turned my finances upside down!
I saved 40% of my income by moving into a cheaper apartment closer to work.
The apartment was only 20% cheaper, but since housing was such a huge expense, 20% amounted to several thousands of dollars every year.
In addition, living closer to work made transportation much cheaper (almost free) because I didn’t have to drive to work and back every day! In fact, I sold my car and bought a bike. A huge money saver, and good for my health as well. On rainy/snowy days I took the bus.
An additional benefit of this move is that it’s hard to undo:
It takes a lot of work to move back to the old apartment, or something similar to it. For this reason, I didn't have to put in any more work, stay disciplined month after month, or even think about it. The 40% was automatically saved every month and year!
To summarize, here’s the basic philosophy you need to adopt to save $30,000 in one year:
Make frugal decisions regarding the “big things” that are hard to undo.
To get more details on this, read my article about saving money despite paying high bills.
Another Way To Reach $30,000 Per Year In Savings
If your budget is already tight, you can instead focus on increasing your income.
I've tried a bunch of different things, ranging from e-commerce, dropshipping, and direct sales (MLM type of thing) to selling stuff on the street (legal "stuff", of course, with permission from both the police and the property owners!), but nothing seemed to work as well as I was told, or thought it would.
The only three things I’ve personally had consistent success with, are the following:
- Moonlighting. (getting an additional part-time job working evenings or nights)
- Increase your salary. (asking for a raise or finding a new job)
- Start a blog-like website like this one. (In my opinion, it’s the number one way to earn money online, hands down.)
If the final suggestion sparks some interest, here's the basic idea behind the business:
Create a simple website and write a bunch of HELPFUL articles answering questions people ask on google. This brings traffic to your site. Put ads on your site, and that traffic earns you money!
That’s it. It’s not more complicated than that. The fact that you’re reading this is proof that it works!
The key is to really try to make the content valuable, and not just some generic BS. To make sure you put your heart into it, you should write about something you actually care about.
I know a bunch about finance and investing, and love to keep learning about it. Therefore, that’s what I write about.
However, you can write about almost anything.
Here’s what you need to do in order to “make it” in this business:
Brainstorm ideas of niches you know a little bit about, or find really interesting. Hobbies are usually what people go for. Create a simple WordPress website. Write 30-50 articles of roughly 1000-2000 words answering questions people type into the Google search bar. Put some ads on your site. That's it!
If you do this right, and spend roughly 10-20 hours per week on it, you’ll likely earn anywhere from $250-$1000 every month within a year.
That’s $3,000 – $12,000 of additional income every year, bringing you much closer to the goal of saving $30,000 in a year.
And the best part: It is passive income!
After the articles are written, you just need to maintain the website, which takes roughly 1-2 hours per week. The articles bring in money whether you keep working or not.
This is the right way to think about this kind of business: You don’t work to get paid. You work to increase your pay.
If you're seriously considering this, I would HIGHLY recommend checking out these YouTubers (in ranked order):
- Income School (I would never have succeeded without this channel)
- Passive Income Geek (Real “down-to-earth” Danish guy with practical and thorough advice)
- Passive Income Unlocked (A team of people giving tips and tricks to “make it” in this business)
- Jasper Pieterse (Just a regular dude documenting his journey in this kind of business. Good inspiration!)
I am NOT affiliated with any of them, and I make no money if you check them out. I just like their content and want to share it.
These are the ones that showed me the way. They all have courses and such, but I never paid for any guidance. However, I kind of regret that, as it would have made the journey much shorter and easier.
This is a great video by Income School, telling you basically everything you need to know:
Conclusion: How To Save $30,000 In One Year:
To save $30,000 in one year, you need to limit your expenses to $2,500 less than your monthly average income. The easiest way to achieve this is to make frugal decisions on your largest fixed expenses.
The number one way to consistently save more money is to move into a cheaper apartment closer to where you spend most of your time. For me, this was work and friends. This simple move enabled me to save 40% of my income, every month and every year.
The concrete steps to follow are:
- Calculate your monthly expenses.
- Calculate your average monthly income.
- Make a budget with total expenses $2,500 below your average monthly income.
- Stick to the budget.
I wish you the best! | https://solberginvest.com/blog/how-to-save-30000-in-one-year/ |
Accepting What Is and Releasing What Has Passed
Autumn is here and with the changing season comes an opportunity for change within our own lives. Fall teaches us how beautiful it can be to let go. To let the leaves of our past transform into vibrant red, orange, and yellow memories – Memories we shed without fear knowing new revitalization is on the way. The days are getting shorter and we prepare for winter by watching nature release to the earth what no longer serves. What a great example and leader for our own lives.
Let us be like nature and gracefully let go. Accepting the circle of all things that arise and dissipate in this life. All things have a beginning and an end.
Our breath is fluid within us from the first inhale to the final exhale. A constant reminder of the ebb and flow within our own internal ecosystem. Fall is a great time to get outside and breathe deep into our lives and let go of what no longer serves us so that we too can transform.
Accepting What Is
Acceptance is the answer to all things. It puts us in the eternal now and helps us be at peace with all that is. If you seek lasting joy and happiness, then the wisdom of acceptance is part of your path. When we attach to anything too much, it ends up owning and controlling us. This includes the ego, our internal source of all dissatisfaction.
Taking a walk beneath the changing colors of autumn reflects to us how joyful release can be. In spring we wonder at the new growth of the tree so delicate and fresh, and in fall we explore the crisp rich colors crunching beneath our feet. The air is fresh and we find ourselves another year older. Now is a great time to ask ourselves…
- What am I ready to let go of?
- What am I ready to accept?
- What areas of my life need change?
We tend to hold on to things, people, jobs, and memories as though they define who we are or our importance in this world. We forget that all things flow like a leaf breaking from a branch to float along a stream. Change is inevitable. You must choose to either accept what is or resist and be angry when change happens without your acceptance or permission because it will. To live in the beauty of this moment you must let go of fear and stand in harmony with the cycle of all things.
Willing to Let Go
Trees are so beautiful and strong. Confidently they shed their lush covering and allow themselves to be exposed to the harsh world of winter. The trees understand that in shedding what no longer serves them they can draw more energy within to help them survive all winter so that in spring they can begin the process again.
There are things in your life right now that are holding you back. Things that are stealing your energy and preventing you from making the necessary transformation and growth in your own life. Now is the time to ask…
- What am I afraid will happen if I let this go?
- Where is my energy better spent?
- What inside of me is ready to transform?
Letting go takes trust. Autumn is a reminder of the earth’s promise to us. We know that every season will be followed by the next, yet we have a challenge accepting this in our own lives. A new day is waiting, but we must shed the old growth to make room for the new.
Reflect on your life right now. Everything in your life is a result of new beginnings and things you’ve let go. The house, the job, the relationships, all of it a cycle in and of itself. We can see this in nature and our own lives, but we all get stuck on what is possible when we focus on comfort over growth.
Winter is dark and cold but promises renewal if we are willing to risk being exposed to the hard truths of life. Allow fall to help you let go, find strength, and prepare for what’s to come.
Breathing Into Fall
Autumn is a time to reflect on what has come to pass. A time of harvest, and a time to clear out stagnation in our internal landscape to prepare for a new season. Breathwork helps move trapped energy through our body. It’s a way of processing unresolved emotions and suppressed feelings without having to relive trauma. It is a way for our internal world to support us through the seasonal changes of our own lives.
We can use our breath to release from our body and mind what is no longer needed. We can let go and focus our energy on growth, joy, acceptance, and gratitude. Mindful, conscious breathing is a way to move out the trapped emotions of your past so you can let go and confidently move towards new growth.
Below is a guided breathing meditation. It is written to be done seated or lying down at home. You can also modify it to be used while walking among the autumn leaves – a way to help move stuck energy and allow yourself to gracefully let go and move into acceptance of what is.
Guided Breath Meditation for Letting Go
There is an audio recording of this guided breath meditation on my YouTube Channel. You can access it here.
- Take a comfortable seat or lay down. Allow your back to float long and find ease in the body.
- Soften your gaze and bring your attention to the tip of your nose. Becoming aware of each inhale and each exhale.
- Slowly begin to lengthen each inhale through the nose.
- Exhale out the mouth, slow, long, and mindfully.
- Mindfully bring each breath deep into your belly as you allow it to expand on the inhale and relax on the exhale.
- Let your breath be the most interesting thing in your world, allowing thoughts to float in and out of your mind like clouds passing on a summer day.
- Notice your body relaxing more with each exhale.
- Close your eyes and see yourself in nature.
- Notice the colors
- Notice the temperature
- Notice the smells
- Notice the sounds
- Notice all you can about your scenery
- Notice the emotions you are feeling
- After you have settled into a clear vision, focus your thoughts on one thing you would like to release attachment from.
- Keep focused on that one thing and deepen your breath around it. Use every inhale to bring in gratitude for all it has given to your life and use every exhale to gracefully release it to the world.
- Continue this for 10 or so rounds of breath.
- Now shift your attention to the transformation.
- Imagine your life once you’ve let go of this one thing.
- Imagine the positive impact this change will have.
- Dwell here in this new reality as long as it feels good. When you are ready to end this breathing meditation, take a few deep rounds of breath and move into a final stretch:
- Circle sweep your arms wide and up to the sky on an inhale, bringing your hands together in prayer.
- Exhale as you guide your hands down through your center line and allow them to rest in your heart center for 2-3 more rounds of breath.
- When you are ready, blink open your eyes, and allow the sensations you are feeling to guide you into acceptance.
Breathwork is a great way to let go and heal. If you are interested in learning more follow me on Instagram at Breath_Mindset and join my 4-week online course, Breath Mindset. | http://heart-lightstudios.com/what-autumn-can-teach-us-about-gracefully-letting-go/ |
I just made a pot of these for a dinner with my in-laws and they were both quite complimentary! Seems like lentils are one of those beans that people rarely get excited about, but if my husband requests them and the in-laws enjoy them then I think I have a winning recipe.
My family loves these beans any time of year, but especially when the weather gets cool. This soup, which is more like a stew, will definitely stick to your ribs! Make it an easy one-pot meal by serving it with your favorite chips or crackers, or go all out and serve it with corn bread. Either way you have a great meal.
This recipe has dill and cumin notes accented with the salty and tangy combo of liquid aminos and lemon. If you haven't tried Violife's feta, now would be the time; it is a perfect pairing to sprinkle on top!
If you haven't already checked out my quick tips for slow cookers vs. pressure cookers for newbies in the Black Beans post, you can find it here. But since lentils do not require soaking, you could absolutely make this recipe in a pot on your stove top! Just follow the package directions to ensure that your beans will be soft. Otherwise, my quick preparation tips are the same. If you can get frozen mirepoix from your grocery store, all you will have to do is dice your potatoes and dump everything into the pot!
Not only do these beans go well with any grain (serve them with rice or quinoa, for example), but if you tend to overcook like I do, you can easily use the leftovers to make burger patties or a loaf (recipes coming soon). So don't be afraid to cook up a BIG pot!
Lentil Soup
PREP TIME 10 | COOK TIME 30 | TOTAL TIME 40
Servings: 10-12
Category: Beans
Cuisine: Soup
Ingredients
Lentils 4 cups
Potatoes, diced 1 cup
Spinach, chopped 1/2 cup
Carrots, diced 1/2 cup
Celery, diced 1/2 cup
Onions 1/4 cup
Bragg's Aminos 2 Tbsp
Nutritional Yeast 1/4 cup
Lemon juice 2 Tbsp
Bay leaves 2
Diced tomatoes 1 (15 oz) can
Garlic, minced 2 Tbsp
Onion powder 1 Tbsp
Cumin 2 tsp
Salt 1 tsp
Garlic powder 1 tsp
Dill 2 tsp
Instructions
Slow Cooker Method:
Add all ingredients to the pot EXCEPT tomatoes.
Cook on HIGH for about 6 hours or until beans are tender.
Add tomatoes and season to taste.
Pressure Cooker Method:
Place all ingredients except for tomatoes into the pot.
Set on 25 minute bean setting. Slow or natural release is not necessary.
Once done, add tomatoes and season to taste.
Notes
*You can sub soy sauce if you don't have Bragg's on hand. Soy sauce is a stronger flavor so start with a tablespoon and add more if needed. | http://www.vegobistro.com/single-post/2019/08/14/Lentil-Soup |
SK Telecom President Ryu Young-sang announces the telecom company's plan to raise more than 1 trillion won ($890 million) this year to develop information and communication technologies together with SK Square and SK hynix.
Korea's banks failed to inform customers of their rights to demand interest rate cuts, raising concerns that customers might be paying far more interest than they should be.
The number of people spending their summer vacations in shopping malls is on the rise with the pandemic and a prolonged rainy season. An increased number of people visited Times Square Mall in Yeongdeungpo District
Until-recently empty streets in Myeong-dong, in central Seoul, are packed with visitors and tourists Thursday.
A man has his hair tied up in a topknot during a traditional coming-of-age ceremony held at the Gwangju Hyanggyo, a state-owned Confucianism school in Gwangju, Wednesday.
An employee of a cosmetics company is having a teleconference with a potential buyer abroad, Wednesday, at the “Brand K” event held in Coex, Samseong-dong in southern Seoul.
Officials from the Justice Party’s youth headquarters and youth student committee shout slogans at a press conference held in front of the Sejong Center for the Performing Arts in central Seoul on Wednesday.
Korean employees working for 143 local firms in Vietnam exit a hotel in Quang Ninh Province, northern Vietnam, on Wednesday after two weeks of mandatory quarantine. A total of 340 Koreans were allowed entry and airlifted to Vietnam on April 29.
Government officials and leaders from the finance industry attend an opening event of the Financial Data Exchange, a platform to buy and sell financial, telecommunication and corporate data, in Jongno District, central Seoul, Monday.
The Toys "R" Us store in Jamsil, western Seoul, is packed with families making purchases. | https://koreajoongangdaily.joins.com/section/tags/Industry/pic |
This pretty soup is the essence of spring, and very easy to make.
SERVINGS
4
Ingredients
1 pound fresh asparagus
2 cups milk or Half-and-Half
1 cup chicken or vegetable stock
1 teaspoon coriander, ground (optional)
Directions
PREP 10 mins | COOK 10 mins | READY IN 20 mins
Wash asparagus and trim off tough ends. Cut into 1 inch pieces. Reserve a few tops for garnish, if you wish.
Cook asparagus in boiling water for 3-4 minutes, until bright green, but still a bit crunchy. Drain.
Heat milk and stock together, with coriander, if using. Add asparagus and purée (use an immersion blender right in the soup pot, or divide into batches and purée in a regular blender).
Taste for seasoning and serve, garnished with asparagus tops, if using. | https://m.recipetips.com/recipe-cards/t--2444/cream-of-asparagus-soup.asp
A, B, C, D ... the letters of the alphabet are always the same, but their shape can change and "say" different things. By modifying their appearance, you can communicate EMOTIONS.
You are angry, you are appeased, you feel fragile, ...? Write a word (or just your name) to reflect your mood.
To "write" you can use:
• toothpicks or twigs and ink;
• strings, laces;
• torn paper, scotch tape;
• glasses, pots (for circles);
• paper clips, wire ...
Everything except markers and pencils ... ;-)
#DiyKanal is a series of creative exercises imagined by Brussels based illustrator Teresa Sdralevich to enlighten children’s quarantine. | https://kanal.brussels/en/news/diy-emotypo-1 |
Size: 7″ x 5.5″ — this one is little!
But before we talk about the paper inside, let’s jump in and take a closer look at this journal.
INSIDE FEATURES – After that great little quote page, you’ll find the special pages, but again, things are a bit odd here. You have a KEY page that gives you space for a bullet code and a color code…. but then after that one key page, you have an additional 3 key pages (yes, folks, that’s a total of 4 key pages). Then you’ve got the INDEX pages – 4 of those – then you start right in on the numbered dot grid pages. There’s no pocket in the back, but you’ve got 2 color coordinated bookmarks (one navy, one tan) and an elastic pen loop and the closure band all in tan.
PAPER – I won’t mention the strange paper thing again (OK maybe I will… does anyone else find it strange that you have 42 blank pages? I could see maybe 8 or 12, but 42? I don’t get it). The 8 perforated pages in the back are actually split into 4 sections so you can tear off a small notecard sized piece of paper. These would work for shopping lists or a way to tip-in an extra sheet of paper for super long task lists or quotes on a daily or weekly spread. Or when you have someone ask you for a ‘piece of paper’ so they can jot down a note, but you don’t want to tear a page out of your main journal – these little notecards in the back would work.
The paper quality says it’s 100gsm, ivory paper. This notebook reinforced for me that not all 100gsm paper is created equally. Compared to other journals with 100gsm paper, this one ghosts much more than others. In fact, I pulled out my Leuchtturm 1917 to compare and found that ghosting almost equal – maybe not as bad as LT1917, but definitely close.
CONSTRUCTION – I was a little worried about this notebook when I first got it. When I opened it, the spine creaked and the pages were rippley (is that even a word?). But I did take a few minutes to break in the spine – Ryder Carroll has a video of how to do this. Much better. In fact, the journal now lays flat on the desk without a problem. The binding is Smyth sewn and the construction of the journal seems very good. It would have been nice to have a back pocket on the book to cover up the bumpiness of the end pages going over the pen loop and elastic band on the inside back cover.
CONCLUSION – This journal grew on me. I wanted to hate it but the more I held it and flipped through the pages and wrote on the paper, the more I liked it. It’s a happy medium between the A5 size and the A6 size. It’s not quite a B6 (1/2″ too wide to be B6) and it doesn’t really fit into any other size category. It’s in a class by itself, I guess. But it’s interesting that the more I hold it, the more I like it. If you’re the type of person who doesn’t like to carry your entire bullet journal or planner with you but need a smaller notebook to carry in your purse or bag – this would be an excellent choice. Maybe fill up all those blank pages with sketches and doodles and the front page of the journal with planning and task lists. | https://stationerynerd.com/little-more-notebook/ |
5 Tips to Prevent Medical Errors - PathSOS
Medical errors happen when something that was planned doesn’t work out, or when the wrong plan was used in the first place. Medical errors can occur at any time during patient care: in the diagnostic stage, treatment planning, surgery, and medications or during procedures.
Most errors result from problems created by the fragmented healthcare system, especially in India. This lack of communication between patients, doctors and various medical facilities breaks the information flow, causing important information to be lost. Also, labs and treating doctors may not communicate adequately, which can result in a misdiagnosis or an incomplete diagnosis that compromises patient care.
In India, errors also happen when doctors and patients don’t communicate clearly. For example, due to the extra workload, doctors may not have enough time to help their patients make informed decisions. This is especially important when discussing options for how to treat cancer best.
Research has shown that involved and informed patients are more likely to do what they need to do to make the treatment work as well as possible. Here are five tips to prevent medical errors.
Informed patients have better treatment outcomes.
- Diagnostic errors are one of the leading causes of medical errors. Keep your records carefully and show them to your treating doctor on a regular basis.
- Take written prescriptions from your doctor every time you visit.
- Keep all your medical records and past history of medications with you and show them regularly to your treating doctor
- At the time of surgery, make sure that the facility where you are visiting is accredited
- Ask questions about your treatment planning to your doctor to your satisfaction
At PathSOS we look at a critical aspect of cancer care: the diagnosis. We make sure that the patient gets the most accurate diagnosis before treatment commences, or at any time during treatment. The cancer diagnosis is the mainstay of treatment planning and therapy selection, though this may not be apparent to cancer patients. Get PEACE OF MIND through our biopsy review process by international oncologists.
SEND YOUR TUMOUR BIOPSY SLIDES TO PATHSOS AND BE SURE OF YOUR CANCER DIAGNOSIS AND CANCER TREATMENT PLANNING. | https://www.pathsos.net/blog-detail/tips-to-prevent-medical-errors |
Paola Free Library Exhibit & Display Policy
The Paola Free Library is pleased to offer artists, collectors and organizations the opportunity to display their work to the community. Exhibit space is open to individuals and organizations. Organizations shall designate one person as a representative. Exhibit periods are arranged with the Director and/or Library Staff with final approval given by the Library Director.
Application for exhibits is made on a first-come, first-served basis. The Library shall have the final decision on the content and arrangement of all exhibits and displays. The Library expressly reserves the right to reject and display in whole or in part which it deems in its sole discretion to be inappropriate based upon local community standards.
The exhibitor may be required to show samples of the proposed exhibit.
Exhibits in the Library are seen by everyone during regular business hours, including children and adults. The materials of the exhibits and displays must therefore meet what is generally known as “the standard acceptable to the community.” Every item must meet the Library’s standard of value and quality, and the Library reserves the right to reject any part of an exhibit or display.
Because exhibits and displays are used to present fields of interest as varied as possible, the Library is not able to devote space to specific “weeks” and “days” year after year.
Partisan political and religious matters are strictly avoided in the exhibits and displays. Also avoided are petitions, solicitations of any kind, sales, surveys or other materials designed to obtain opinion or quantifiable responses.
Whenever possible, the Library will incorporate books or materials from the Library’s collection which have a relevance to the subject of the display.
The areas available to the public for exhibits and displays are:
(1) locked glass exhibit case located in the entrance lobby
(2) locked glass exhibit case located in the display area west of the circulation desk upstairs
(3) From one to four display panels on the north wall of the circulation room upstairs
(4) locked display case located downstairs between Junior and Easy rooms
The artist/collector is responsible for setting up and removing the display. All publicity related to exhibits and displays shall be submitted to the Library for approval before using.
Exhibits and displays will be scheduled for an agreed-upon period of time, which will be at the discretion of the library. If the exhibit/display is not set up by the assigned period, the exhibit/display may be canceled by the Library. If the exhibitor/collector must cancel a show, it is expected that he/she contact the Library as soon as possible to see if another date may be arranged.
Due to space limitations, the Library cannot provide storage for the property of groups or individuals displaying in the Library.
The Library shall not be held responsible and is expressly relieved from any and all liability by reason of injury, loss or damage to any person or property in or about the premises occurring during the exhibitor's use of the premises.
No admission may be charged. Exhibitors must sign the “Agreement to Exhibit” form. | https://www.paolalibrary.org/policies-and-procedures/exhibit-display-policy/ |
A Magistrate's Complaint can be filed by any person who wishes to seek redress for an offence that they believe has been committed against them. The person filing the complaint is known as the Complainant and the person against whom the application is made is known as the Respondent. If the Complainant is a minor, the complaint must be filed through the Complainant's parent or guardian.
Before preparing any documents, you may wish to consider taking the online Pre-Filing Assessment, which can guide you on the general eligibility requirements as well as the types of offences covered and types of remedies offered. The Pre-Filing Assessment can be accessed here.
Once you have familiarised yourself with the Magistrate’s Complaints process, follow the steps below to file a Complaint.
Step 1: Download, complete and print out the Magistrate's Complaint form.
Step 3: File the Magistrate's Complaint, together with the accompanying documents, at Counter 7 of the Community Justice & Tribunals Division located at Level 1 of the State Courts.
A fee of $20.00 is payable once the completed form is submitted.
When the Court directs that a Notice be issued to both the Complainant and the Respondent, the Community Justice & Tribunals Division will prepare the Notices informing parties of the date and time they are required to appear before a Magistrate or a Justice of the Peace (JP) in Chambers for a procedure known as "Criminal Mediation". In appropriate cases, parties may also be notified to attend mediation at the Community Mediation Centre (CMC). The CMC has a panel of trained mediators who are respected members of society from all walks of life.
If the matter is settled at CMC, parties will sign a Settlement Agreement. If settlement is not reached, fresh Notices will be sent to the Complainant and the Respondent to appear before the Magistrate or JP on another day.
When a Magistrate directs that a Notice be issued to both the Complainant and the Respondent, it means that the matter is fixed for criminal mediation before a Magistrate or JP in Chambers.
The Complainant must be present on the day of the criminal mediation. Otherwise, it will be deemed that the Complainant is no longer interested in pursuing the complaint and the Magistrate will strike out the complaint.
When both parties appear before the Magistrate or JP for criminal mediation, the Magistrate or JP may mediate the matter. If a settlement is reached, the complaint will be withdrawn, and no further action will be taken. If there is no settlement, the Complainant may wish to proceed to trial by way of a private summons. A Summons will be issued once the Complainant has prepared necessary charges against the Respondent. There is a fee of $20 payable for each Summons.
personal service: an authorised person (usually the Court Process Server if the Complainant does not have a lawyer) must hand the Summons to the Respondent personally. The Complainant will have to accompany the Court process server to serve the summons on the respondent. Transport charges for the service of the Summons are to be borne by the Complainant.
It should be noted that a Summons cannot be served on a Respondent who is residing outside Singapore. If the Complainant does not know the current address of the Respondent, the Court will ascertain the address through available official records (where possible) to effect service accordingly.
However, if it can be shown that the Respondent is no longer residing at the address that is reflected in the available official records, the Complainant has a duty to find out the Respondent's current local address where he resides. If the address cannot be determined, the Complaint cannot proceed.
The case will proceed to hearing once the Summons has been served on the Respondent. At the hearing, the Respondent will be asked to enter a plea of guilty or not guilty. If the Respondent elects to plead guilty, the Court will adjourn the matter (either before or after the plea is taken) for the Complainant to write to the Criminal Investigation Department (CID) for the Respondent's criminal records to be forwarded to the Court before the sentence is imposed. At the next adjourned date, the Respondent will be sentenced accordingly once the criminal records have been forwarded to the Court.
The matter will proceed to trial if the Respondent elects to claim trial to the charge(s). The parties will give evidence in open court to prove their respective cases. The usual procedures of calling witnesses to support their case as well as the cross examining of witnesses will take place in open court. Cross-examination refers to the questioning of witnesses on what is said in evidence by the other party or his lawyer. The respective parties or lawyers representing the case will then present a summary of the case with the supporting arguments to the trial judge. The duration of the trial may take one or several days depending on the complexity of the case. At the end of the hearing, the Court will decide whether the Respondent is guilty of the offence(s) as charged.
If the Respondent is absent on the day of the trial, the Court may issue a Warrant of Arrest against him. If a Warrant of Arrest is issued, the matter will be handed over to the Warrant Enforcement Unit* (a division of the Singapore Police Force) for execution. This may not take effect immediately. Once the Warrant of Arrest is executed, the police will arrest the Respondent and produce him in court. The Court will offer the option to post bail for the Respondent, and the case may be re-fixed for mediation for parties to resolve the matter. If the mediation fails, the Complainant has the option to proceed with the trial against the Respondent at a later date.
The contact number is Tel: 6557 5017/ Fax: 6220 5083. | https://www.statecourts.gov.sg/cws/FilingMagistrateComplaint/Pages/Filing-a-Magistrate-Complaint.aspx |
What are examples of themes in a story?
Examples. Some common themes in literature are "love," "war," "revenge," "betrayal," "patriotism," "grace," "isolation," "motherhood," "forgiveness," "wartime loss," "treachery," "rich versus poor," "appearance versus reality," and "help from other-worldly powers."
What are the 5 themes of a story?
This song covers the five main elements of a story: setting, plot, characters, conflict and theme. Whether you're studying a short story, a novel, an epic poem, a play or a film, if you don't find these five elements, you're not looking hard enough.
What are the 4 main themes?
The Sign of the Four - Themes overview
Related Question What are some examples of themes in a story?
What is a theme of a story?
The term theme can be defined as the underlying meaning of a story. It is the message the writer is trying to convey through the story. Often the theme of a story is a broad message about life.
How do you write a theme in an essay?
What is example of tone?
Some other examples of literary tone are: airy, comic, condescending, facetious, funny, heavy, intimate, ironic, light, modest, playful, sad, serious, sinister, solemn, somber, and threatening.
What are major themes?
A major theme is an idea that a writer repeats in his work, making it the most significant idea in a literary work. A minor theme, on the other hand, refers to an idea that appears in a work briefly and that may or may not give way to another minor theme.
What is theme in a drama?
A theme is a recurring idea that's present throughout the work. The important thing is that the original script stimulates ideas, wherever those ideas take you.
What is the theme of a mystery book?
In many ways, the mystery story is a direct descendent of the magic tale. Its central theme of sacrifice, the acts of discovery and punishment that release collective guilt, and its "impossible" puzzles of space and time echo the themes of the fantastic.
Can a story be without a theme?
A story MUST have a theme. It may have several, or it may be a theme so convoluted it's hard to spot it, but it will be there. Travel, self-discovery, self-improvement (or opposite), love, greed, morality versus survival, and so on. Without a theme, you'll have a word noodle, not a story.
What are the major themes in things fall apart?
Major Themes. Things Fall Apart is a book that contains a ton of ideas, but three of the big ones are manliness, tradition, and fate. Okonkwo grows up very concerned about being a man, probably because his father was such a loser. The Ibo measure a man by his yams, wives, titles, and accomplishments in war.
What is the theme of the story kabuliwala?
Theme. The main theme of this story is filial affection—the deep love that fathers have for their children. In the story we encounter three examples of filial affection—the narrator and his daughter Mini; the Kabuliwala "Rahmat" and his own daughter in Afghanistan; and the Rahmat "Kabuliwala" and Mini.
What are 5 examples of tone?
It can be joyful, serious, humorous, sad, threatening, formal, informal, pessimistic, or optimistic. Your tone in writing will be reflective of your mood as you are writing.
What are the 10 tones?
10 different types of tones
What are examples of imagery?
Common Examples of Imagery in Everyday Speech
What is theme writing?
Theme-writing refers to the conventional writing assignments (including five-paragraph essays) required in many composition classes since the late-19th century. Also called school writing.
What is the theme of Romeo and Juliet?
Love is naturally the play's dominant and most important theme. The play focuses on romantic love, specifically the intense passion that springs up at first sight between Romeo and Juliet. In Romeo and Juliet, love is a violent, ecstatic, overpowering force that supersedes all other values, loyalties, and emotions.
What are some themes for friendship?
Friendship
What is the theme of the story the lost child?
The underlying theme of the story “The Lost Child” is the universality of a child's desire for everything that he claps his eyes on. All that the child witnesses—from the toys lining the street, to the dragon flies in the mustard field, to the snake swaying to the tunes of a snake charmer's pungi—obsesses the child.
How many themes can a story have?
A story will often have more than one theme, especially if it's a novel as opposed to a short story. Short stories, due to their length, tend to only have one major theme, while a longer novel has time to elaborate on several themes at once. To return to our example, The Great Gatsby has several themes.
How do you find the theme of a passage?
Theme can be found anywhere in the comprehension paragraph given but usually it is in the opening line and having connectivity with the concluding i.e. the ending line. The theme is the underlying meaning of the comprehension paragraph and there is at least one line in the paragraph confirming that underlying message.
Is crime a book theme?
The theme of a book is an idea about life or human nature or elements of society that the author shares with her readers. For mystery writers a major theme is “crime does not pay.” A minor theme might be “overcoming adversity”—despite failed relationships a character finds a new romance. The theme is usually inferred.
What genre is Harry Potter?
Is punishment a theme?
In Crime and Punishment, some of the themes that are explored include alienation, utilitarianism, and repercussions for our actions. Another theme is about repercussions for our actions. Raskolnikov never really gets away with the murders because his fear of being caught and guilt are debilitating.
What does slime stand for in theme?
SLIME stands for -
- Subject, Learn, Imagine, Memory, Exit.
- Story, Lesson, Inference, Message, Expression.
- Subject, Lesson, Idea, Message, Evidence.
- Story, Life, Idea, Monkeys, Evidence.
How do you explain theme to a child?
How do you find the theme for kids?
What can a theme not be?
Definition: Theme is the message conveyed by a text that applies to multiple other texts. Sub-definition: It cannot be described in a single word and it implies a conflict or an argument about the core idea and usually both.
Which choice is the best definition for theme?
Theme is a universal lesson and main idea is what a story is about. Theme is what a story is about and main idea is the universal lesson it teaches.
Can a theme be a question?
If another theme could be summed up in one word, it would be the question "Why?" The very fact that one of the major themes is a question is itself significant. It is a statement about war and about life. In both, there are more questions than answers.
Does every short story have to have a theme?
Short stories often have just one theme, whereas novels usually have multiple themes. The theme of a story is woven all the way through the story, and the characters' actions, interactions, and motivations all reflect the story's theme.
Do all stories have theme?
Most stories have themes, even if we're not consciously aware of them. The real difference is how well they're developed in the story. So we can improve our stories by identifying our theme and making sure we're using it well. Themes are often intertwined with a story's premise.
Q:
Presentation of cyclic group
Let $p$ a prime. Prove that the group $G=\langle x,y: x^{p}=y^{p}=x^{-2}y^{-1}xy=1\rangle$ is cyclic of order $p$.
A:
There is a morphism $G\to \mathbb Z_p$ sending $x$ to $0$ and $y$ to $1$, as you can check from the relations. On the other hand, since $y$ has order $p$ in $G$, there is a morphism $\mathbb Z_p\to G$ mapping $1$ to $y$.
Can you check they are mutually inverse ?
A:
A slightly different approach (I think):
Since $G' = \langle [x,y]\rangle$ and $[x,y]=x$, $|G'|$ is $p$ or $1$.
Clearly, $G/G'\cong C_p$. (Substitute $[x,y]=1$ into the relations.)
Therefore $|G| = p$ or $p^2$. In particular $G$ is abelian.
Thus $G'=1$ and $G\cong C_p$.
The Best Vegan Veggie Burger Recipe
This vegan veggie burger recipe is the best of all time. It will satisfy your burger craving without the meat. This vegan recipe is perfect for a quick and healthy weeknight meal. As an added bonus, these burgers can be pulled together in under 30 minutes, which is always a win. Plus, they’re a great way to sneak some veggies into your family’s diet too.
About this recipe
This vegan veggie burger recipe is so flavorful you’ll hardly notice it’s completely meat free. I topped my burgers with butter lettuce, creamy avocado, dairy free provolone cheese, vegan mayo, and English cucumbers. The cucumbers add that extra crunch we all love. Also, please keep in mind, that not all potato rolls are vegan. However, Arnold’s brand potato rolls are vegan.
For a lower carb option, skip the potato bun and use butter lettuce cups instead. Pair these vegan burgers with some roasted vegetables or zucchini fries for one delicious meal. If you’re new to vegan recipes, this is the best veggie burger recipe. Plus, this recipe is a great way to add some plant based meals into your routine.
Storage
Store any leftover burgers in an airtight container in the refrigerator for three to four days. Leftovers can be enjoyed as a sandwich, over a salad or tucked inside of a wrap. Looking for more vegan recipes? Try my plant based alfredo pasta, or my vegan pulled pork recipe. Both make great weeknight meals.
Four Ingredient Vegan Burgers
Ingredients
- 1 large sweet potato
- 1 can chickpeas, 15 ounce can
- ½ cup sweet onion
- ½ cup panko breadcrumbs
- salt and pepper
Equipment
- skillet
- mixing bowl
- food processor
Instructions
- Microwave or boil the sweet potato until it's soft in the center. Once it cools, remove the skin and slice it into small cubes. Place the cubes in a large mixing bowl and set aside.
- Drain 1 can of chickpeas and be sure to remove the outer skin. You can do this by simply pinching the side of the chickpea. Add the prepared chickpeas to the bowl with the sweet potato cubes.
- Next, add the diced sweet onion to a non-stick skillet. Cook for 5 minutes or until the onions are translucent but not brown. Gently combine the sweet potato, sautéed onions and chickpeas using a spatula. Season with salt and pepper.
- Then, grab your food processor and add half of the sweet potato mixture into the processor. Blend until smooth. Combine the smooth mixture with the remaining chunky mixture and gently fold everything together.
- Grab a plate and add ½ cup of panko breadcrumbs to the plate.
- Form the sweet potato mixture into burger patties firmly press the patties into the panko breadcrumbs. Be sure to cover all sides of the patties.
- Gently spray a medium sized skillet with non-stick cooking spray. Cook the burgers in the pan over medium heat. Once they begin to brown on both sides remove them from the pan.
- Add your favorite toppings and enjoy! | https://healthyishfoods.com/the-best-vegan-veggie-burger-recipe/ |
About Vertebrate Fossils
A vertebrate is an animal with a backbone; vertebrates comprise only about 5% of all described animal species. Vertebrates are formally contained in Subphylum Vertebrata, the largest subphylum of Phylum Chordata, which includes fishes, dinosaurs, reptiles, and of course mammals, including humans. For the organization of taxa in Fossil Mall, vertebrate constitutes a moniker for anything not included in invertebrates or any other specific category such as fish fossils, dinosaur and reptile fossils, and so forth. An often-used simple taxonomy for Subphylum Vertebrata comprises seven paraphyletic classes: Agnatha (jawless fish), Chondrichthyes (cartilaginous fishes), Osteichthyes (bony fishes), Amphibia (amphibians), Reptilia (reptiles, which include dinosaurs), Aves (birds, which are descended from and considered a type of dinosaur), and Mammalia (mammals).
Vertebrate evolution can be dated to a time known as the Cambrian Explosion, some 525 million years ago. The fossil record shows that this Cambrian period was when most animal phyla first appeared. The earliest known vertebrates are thought to be the basal chordates Myllokunmingia and Haikouichthys, both coming from the lower Cambrian Chengjiang Maotianshan Shales. Jawed vertebrates appear in the Ordovician fossil record and become common in the Devonian, a period often called the age of fishes. The two groups of bony fishes, the actinopterygii and sarcopterygii, evolved and became common in the Devonian, while jawless fishes, except lampreys and hagfish, became extinct. Transitional animals between fish and amphibians first appear in the Devonian fossil record.
Examples of vertebrate fossils: Early Cambrian Haikouella lanceolata fossils from the Chengjiang Maotianshan Shales
Recording shares transfers in the Confirmation statement is unavoidable. Generally, you are required to disclose your shares transactions during the year to Companies House.
For this purpose, it is important to complete your Confirmation statement correctly when you have shares transfers during the year. Otherwise, Companies House will reject your confirmation statement if the shareholders’ section of the form is incomplete or with errors.
Shares transfers illustration
For example, you have 100 shares issued to yourself when your limited company was incorporated. Subsequently, you transferred one share to your friend and 49 shares to your wife on 01 January 2019.
The table below shows how to record shares transfers in the Confirmation statement.
Correspondingly, you enter the date of transfer of 01 January 2019 next to your name only. Do not put the date of transfer of 01 January 2019 against your wife's or your friend's name.
Accordingly, Companies House would recognize the transfer date of 01 January 2019 as the date on which your wife and your friend received the shares.
In particular, ensure the total number of shares in the "number of shares currently held" column equals the total number of shares issued by your limited company. Otherwise, Companies House will return your Confirmation statement to your company's registered office for amendment.
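As an illustrative sketch, using the hypothetical figures from the example above (100 issued shares, of which you keep 50 and transfer 49 to your wife and 1 to your friend), this is the kind of reconciliation check to do before filing:

```python
# Check that the "number of shares currently held" column adds up to the
# company's total issued shares before submitting the confirmation statement.
holdings = {
    "You": 50,          # 100 originally, minus 49 + 1 transferred
    "Your wife": 49,
    "Your friend": 1,
}
total_issued = 100

assert sum(holdings.values()) == total_issued, "shareholdings do not reconcile"
print("Shareholdings reconcile:", sum(holdings.values()), "of", total_issued)
```

If the totals do not reconcile, correct the shareholder entries before submitting, as Companies House will otherwise return the form.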
On the other hand, you may contact Companies House if you have questions about your shares transfers or filing your confirmation statement.
The Confirmation Statement is a snapshot of your company information registered with Companies House. Your company is required to submit this document at least once every 12 months. Failure to file your confirmation statement is a criminal offence. | https://conciseaccountancy.com/record-shares-transfers-in-confirmation-statement/ |
Can A Snake Eat Human Food?
- November 19, 2021
- 0
In addition to eggs, poultry, fish, pork, and beef, snakes can also consume human food when it is unprocessed. In other words, the food should be given as it is: raw and in its basic form. Do not give your snake fried food or food with sauces, since the added ingredients could make your snake ill.
What Foods Are Bad For Snakes?
However, you should avoid kale, spinach, broccoli, cabbage, and romaine lettuce, since they contain an ingredient that prevents reptiles from absorbing calcium properly.
Can Snakes Eat Fruits?
Plants, fruits, and vegetables are not eaten by snakes. Plants were not eaten by them, as they were by a long line of terrestrial lizards. Plants will never be eaten by pet snakes. It is not uncommon for them to consume plant matter unintentionally because of the herbivore prey they eat.
What Can I Feed My Snake Instead Of Mice?
Can Snakes Eat Bread?
Bread is not eaten by snakes. It is not an appetizing food because it is neither an herbivore nor an omnivore. The snake is a carnivores, and it does not consume vegetables, fruits, or any processed food.
Do Snakes Eat Berries?
It is not possible for snakes to eat berries because they are carnivorous. Snakes are not attracted to snake berry plants very often, which is good news for them. Snake berry plants are not poisonous and do not need to be avoided if you are interested in getting one.
What Human Food Can Snakes Eat?
Snakes can be fed uncooked human food. Eggs, red meat, and white meat are all examples of this. It is always best to offer whole red or white meat to snakes when feeding them.
Can Snakes Live Without Eating Mice?
It is a shame that not all snakes eat rodents, birds, and other large prey species. Snakes that eat eggs, snails, or other species that don’t like mice and other rodents are suitable for your collection.
What Can I Feed My Ball Python If I Don’t Have Mice?
Ball pythons typically eat mice and rats, but you may want to offer some variety in order to keep them happy. In addition to chicks, quails, gerbils, hamsters, guinea pigs, and multimammate mice, you may also want to offer feeder options.
What Can Snakes Not Eat?
Snakes can be injured or even killed by live rodents, which can scratch or bite you. Snakes that do not have these foods in their diet in the wild should not be fed eggs, fish, insects, or other foods. Ensure that your snake doesn’t get too big by sticking to a feeding schedule. | https://www.wovo.org/can-a-snake-eat-human-food/ |
So Organized strives to exceed our clients’ expectations. I will work with you to ensure that the project 100% meets your expectations and is effective and functional for your needs and desires.
I first start with a complimentary 45-minute in home consultation to establish your needs, budget and priorities.
I will follow up with a plan and estimate that addresses what we discussed, and we'll schedule our organizing session(s).
I will work with you, or on my own, to declutter, purge and organize your space.
We will evaluate the results and establish a plan for the future. | https://soorganized.io/services/ |
Holocene tufa deposits in the Test valley, Hampshire, UK, have been examined as part of a wider project into prehistoric activity and landscape change in the central southern chalklands. Molluscan and ostracod analysis on three tufa sequences shows that the tufa began developing around 9300 BP in an open marsh/open woodland environment. Although one of the sequences (Boss 10) demonstrates a wooded environment throughout, another sequence (Boss 360n/60e) indicates that the initial woodland is followed by reversion to more open conditions. Both profiles then show a short 'drier' woodland phase. Subsequent to this, one of the profiles then shows wetter woodland conditions developing (Boss 10), the other (Boss 360n/60e) more open wetland. Deposition rates are comparable to tufa deposits elsewhere and indicate that the short-lived 'drier' woodland phase perhaps lasted only a few hundred years. There are also indications of woodland clearance, associated with a charcoal peak, although no causal mechanism can be proposed. The formation of substantial portions of Holocene tufa deposits within open or only lightly wooded landscapes has not previously been documented for southern Britain, where generally tufa has been shown to have formed in densely wooded conditions. Through the tufa deposits the ostracod and molluscan evidence are in accord and provide detailed hydrological and environmental information respectively. Fine sampling (1 cm units) allowed a temporal resolution of perhaps around 20 years per sample. It is clear that the tufas studied here formed as groundwater-fed paludal deposits within the valley bottom.
Scottish traditional music icons Phil Cunningham and Aly Bain invite friends including multiple award-winning singer Emily Smith and 2017 winner of BBC Radio 2's Young Folk Award Josie Duncan to join them onstage for the Royal Scottish National Orchestra's (RSNO) St Andrew's Party, on Saturday 25 November at the Glasgow Royal Concert Hall.
This year's St Andrew's Party (sponsored by Capital Document Solutions) will feature songs including Mauchline Belles, Irish Beauty, Kid on the Mountain, Sophie's Lullaby, Flatwater Fran and Cathcart. Josie Duncan will perform We Will be Free from Phil Cunningham's Highlands and Islands Suite and The Midlothian Miners' Song. Emily Smith's set includes Robert Burns' songs Silver Tassie, Ae Fond Kiss and Adoon Windin Nith.
Phil and Aly have been performing together for over thirty years and have been marking the week of the feast of Scotland's national saint in concert with the RSNO since 2001. This year, conductor and arranger John Logan, also Head of Brass at the Royal Conservatoire of Scotland, marks his tenth year leading the RSNO for the celebrations, sharing the anniversary with guitarist and singer Jenn Butterworth, who has accompanied Phil, Aly and the RSNO since 2007.
Traditional musician Phil Cunningham MBE: "It's that time of year again and Aly and I are very excited to take to the stage with the RSNO for another St Andrew's party. As usual, we are delighted to welcome our guest singers for the night. This year we welcome back the wonderful vocal talents of Emily Smith and for the first time, BBC Radio 2 folk award winner, Josie Duncan. We'll also be joined again by a selection of students from the Royal Conservatoire of Scotland and of course the one and only Jenn Butterworth on guitar. We're really looking forward to a great night of music and fun."
2017 BBC Radio 2's Young Folk Award winner Josie Duncan: "I can't think of a better way to spend St Andrew's night than to celebrate Scottish music with such wonderful musicians. I'm really looking forward to revisiting Phil Cunningham's Highlands and Islands Suite after being lucky enough to perform it at last year's Celtic Connections, on the same lovely stage. I'm also humbled and excited to be performing music from my project which pairs coal mining songs with brass accompaniment."
Scots Singer of the Year award winner Emily Smith: "I'm delighted to be part of this year's RSNO St Andrews party with Aly & Phil. It's sure to be a memorable evening and an honour to sing with the RSNO. I've known Phil and Aly for many years now. It's always a treat to work with such fantastic musicians and entertainers."
For tickets to A St Andrew's Party on Saturday 25 November at the Glasgow Royal Concert Hall contact the Glasgow Royal Concert Hall Box Office on 0141 353 8000 or visit www.rsno.org.uk. | https://www.rsno.org.uk/award-winning-vocalists-join-phil-and-aly-for-st-andrews-party/ |
Recipe By:
Creative Gourmet
creativegourmet.com.au
Serves:
8
This rich, fudgy, moist dessert cake is simply delicious! It's perfect for entertaining, dinner parties and special occasions.
Chocolate Cake
Ingredients
200g quality dark chocolate, chopped
150g unsalted butter, chopped
1 tablespoon Tia Maria or similar liqueur
2/3 cup caster sugar plus 1/4 cup extra for compote
5 eggs, separated (at room temperature)
1/3 cup ground almonds
1/3 cup plain flour
500g Creative Gourmet frozen Mixed Berries
Cocoa or icing sugar, for dusting
Cream or ice-cream, to serve
Method
Grease and line a 23cm spring-form pan with baking paper. Place chocolate and butter in a large heatproof bowl. Microwave, uncovered, on medium, stirring every minute with a metal spoon, for 2-4 minutes until melted
Stir in Tia Maria and 1/3 cup sugar. Set aside to cool slightly
Preheat oven to 180°C/160°C fan-forced. Beat egg yolks one at a time into cooled chocolate mixture. Add ground almonds and sift over the flour. Gently fold until combined
Using electric hand beaters, beat egg whites in a large bowl until stiff peaks form. Gradually beat in remaining 1/3 cup sugar
Using a metal spoon, fold a large spoonful of egg whites into chocolate mixture
Fold through half the remaining egg whites. Sprinkle over 2 cups frozen berries. Add remaining egg whites and gently mix until just combined (breaking up any clumps of berries). Pour mixture into prepared pan
Bake for 45-50 minutes until cake has risen and is firm to touch. Remove from oven and cool cake completely in pan
To make the berry compote, place remaining frozen berries and 1/4 cup sugar into a small saucepan over medium heat. Bring to the boil, stirring occasionally, and cook for 2 minutes. Cool slightly
Place into an airtight container and chill until ready to serve
To serve, carefully transfer cake to a serving plate or board. Dust with cocoa or icing sugar. Serve with berry compote and cream or ice-cream
Tips & Hints
Expect the cake to sink on cooling because of the small amount of flour used (this gives the cake its lovely fudgy texture). Make the cake a day in advance: once totally cooled, cover with foil and refrigerate overnight. Remove from the fridge, transfer to a serving plate and bring to room temperature. Dust with cocoa or icing sugar and serve. Pick out the largest berries from the box and use them for the compote. This cake is so delicious as it is, it would be a shame to make health modifications; if you are trying to lose weight, simply have half the portion size and serve with a small scoop of light ice cream instead of whipped cream.
Description:
As a biblical motif, the third day indicates a colossal turn-around from hopelessness and despair to victory and jubilation. The third day rally, or revival motif, recurs throughout Scripture. For example, it manifests itself in David's sacrifice at the threshing floor of Aruna, when David finally realized the horrible depth of his sin. This action rallied Israel, leading to the construction of Solomon's Temple and a golden age for Israel. On the third day of creation, the sea and land were separated and seed life began to germinate. Another example is Jonah's revival from the belly of the great fish on the third day, which prefigured Christ's resurrection on the third day, at which time He was restored to His former glory. His post-resurrected body established His identity as the Messiah and Son of God. The disciples at that time internalized prophetic connections that were previously only academic in their thinking. Isaac's rescue from certain death was another third day event, providing a type of Christ's resurrection. Because of Abraham's sterling obedience on this third day, his physical and spiritual offspring were richly blessed. After three days, Pharaoh's butler was restored, as Joseph's interpretation of his dream forecasted. Esther's petition before the king, restoring the well-being of her people, occurred on the third day. The Great Tribulation, using a year for a day principle (two days of Satan's wrath and one day of God's wrath), will have its dramatic turn-around on the third day, when God's government will destroy and rep
The transcript for this audio message is not available yet.
Series: The Third Day
- The Third Day (Part One)
- The Third Day (Part Two)
The main objective of this task force is to identify applications in which big data technologies can create the biggest impact in Europe. This TF will facilitate value creation (economic and societal benefits) in Europe by supporting selected industrial sectors and other areas of interest with big data technologies. The key focus of this group should be to identify and act on the needs (technology, skills, etc.) of different industrial sectors as well as the areas of interest applicable to different industrial sectors such as language technologies, HPC, etc. The industrial sectors will include healthcare, telecom, content/media, energy, manufacturing, finance, supply chain, etc. Several sub-task forces have been already created to address sectors in which BDVA has a critical mass of members and in which big data applications will create the biggest impact. An additional goal of this TF is to identify and address synergies across application areas. | https://www.bdva.eu/task-force-7 |
SOP issued ahead of reopening of schools in Odisha
Bhubaneswar: A couple of hours after the Odisha government allowed the reopening of schools for Class X and XII students from July 26, the School and Mass Education Department issued an SOP for the schools. Classes will be held from 10 AM to 1.30 PM, excluding Sundays and public holidays.
The following guidelines were issued by the Additional Secretary of the S&ME department to the Director of Higher Secondary Education, the Director of Secondary Education, the Chairman of CHSE, and the President of the Board of Secondary Education:
a) Online/distance learning shall continue to be the preferred mode of teaching and shall be encouraged. Where schools are conducting online classes, and some students prefer to attend online classes rather than physically attend school, they may be permitted to do so.
b) Online and classroom learning will go in tandem with each other and should continue to share timelines and daily schedules to ensure synchronization.
c) Students may attend schools/institutions in consultation with their parents or guardians. Attendance must not be enforced
d) The District Collector will have the final authority to take decisions in this regard to the time and method of school reopening in exceptional circumstances for all the schools in the district
STANDARD OPERATING PROCEDURES (SOPs) FOR HEALTH, HYGIENE AND OTHER SAFETY PROTOCOLS BEFORE OPENING OF SCHOOLS:
A. General Guidelines on school opening and attendance of teachers/students/staff
1. Only schools outside the containment zones shall be allowed to open. Further, students, teachers and employees living in containment zones will not be allowed to attend the school. Students, teachers and employees shall also be advised not to visit areas falling within containment zones. Due to the dynamic nature of the situation, these decisions will be taken by the District Collector.
2. The District Collector will also direct the relevant schools to immediately shut down in case their zone is declared as a containment zone.
3. Prior to resumption of activities, all work areas including furniture, libraries, laboratories, storage places and areas of common use shall be sanitized with particular attention to frequently touched surfaces.
4. Schools may not reopen without 100% access to potable drinking water and adequate functional toilets for all students. Any school without access to the above must first make these arrangements before reopening.
5. Hostel facilities are not to be opened at this point of time. Detailed SOP for Hostel operations will be issued when Hostel reopening is deemed safe.
6. School-provided transportation should also be discouraged to reduce risk. Parents must ensure they take responsibility for the child’s commute to school. Where plying, transport facilities may run at a maximum of 50% capacity with adequate sanitization before picking up and after dropping off students.
7. High risk staff members with severe ailments or underlying conditions must take extra precaution.
8. No student should be coerced to come to school. Only those parents and students who feel comfortable attending school should do so
B. Provisions to be made inside schools:
1. For ensuring social distancing and queue management: Inside and at the entrance of the premises, specific markings on the floor/ground with a gap of 6 feet should be made
2. Inside classrooms, students should be made to sit at safe distances/alternate desks Fixed seating should be ensured. A particular seat/space should be earmarked for each student (for example: based on roll number) so that there is limited exposure to other students’ physical spaces
3. Similarly, physical distancing shall also be maintained in staff rooms (by earmarking seats for teachers at an adequate distance), and other common areas (mess, libraries, cafeterias, etc.) with relevant markings as required
4. If available, temporary space or outdoor spaces (in case of pleasant weather) may be utilized for conducting classes, keeping in view the safety and security of the children and physical distance protocols
5. There must be adequate soap (solid/liquid) and running water in all washrooms and toilets. Hand sanitizers etc. for the teachers, students, and staff must be available mandatorily in each classroom. Students should be encouraged to sanitize their hands when entering and leaving classrooms and toilets.
6. Any staff entrusted with cleaning/sweeping duties must be informed and trained about the cleaning/sanitization processes as well as general norms for waste management and disposal
7. The school should display state helpline numbers and also numbers of local health authorities etc. to teachers/students/employees to contact in case of any emergency. Posters related to the preventive measures about COVID-19 must also be displayed.
8. A separate isolation room has to be marked in the school and kept ready. This room may be used in case any student or staff develops Covid symptoms.
Every school must determine how to run the school based on the number of students and the number and size of classrooms available. A maximum of 20-25 students should be allowed to sit in a classroom to ensure safe distancing among students. For schools with an adequate number of classrooms, all students can be asked to come on a daily basis.
School timings should be as usual. Recess/Break should be staggered for different classes to ensure there is no overcrowding at common spaces and toilets.
Students should be encouraged to bring healthy and nutritious food from home and should be advised not to share with fellow students.
Assemblies, sports and events that can lead to crowding are prohibited.
No outside vendor should be allowed to sell any eatables inside the school premises or within 100 meters from the entry gate/point.
Sensitization of teachers, parents, staff, and members of School Management Committee
1. Before the opening of the school, a meeting of all SMC/SMDC members and any other parents who desire to attend shall be called by the Head of School
2. In this meeting, a detailed discussion on the safety protocols must be held, inputs of all members incorporated, and consent taken from the SMCs/SMDCs. Proceedings of the meeting must be recorded in the relevant register maintained in the school
3. The SMC/SMDC must also be encouraged to walk around the school premises and ensure that all hygiene and safety precautions are there to their satisfaction.
4. All this information should also be shared with the parent community through WhatsApp messages or SMS. The message should also include the dos/don’ts that parents/students must follow.
Written consent should be sought from parents for their child to attend school. Students opting to study from home with the consent of the parents may be allowed to do so.
I. STANDARD OPERATING PROCEDURES (SOPs) FOR HEALTH, HYGIENE AND OTHER SAFETY PROTOCOLS AFTER OPENING OF SCHOOLS
A. Monitoring Team to be made along with SMC/SMDC members
1. Every school must have a Covid Monitoring Team comprising 1 teacher & 1 parent member from the SMC/SMDC
The responsibility of the Monitoring Team will be to ensure health and hygiene within the school. This team will come to school 30 mins early and leave 30 mins after school hours to ensure full cleanliness/sanitization in school.
Emergency response- Have a clear plan in place for contingencies and take action anytime there is an emergency or risk in school.
In addition, the team will also support the HM with the preparation and implementation of all calendars, schedules and activities in the school. The HM will decide the school schedule and calendar, and take decisions on academic activities with a key consideration towards ensuring the right balance between learning and the safety of students.
JIS College of Engineering vaccinated all its teaching and non-teaching staff members and their families on 7th June. More than 200 people were jabbed in the first drive under the institution's social responsibility initiative.
They have always prioritized the growth, health, and well-being of each member of the JIS College of Engineering family. Hence they are fulfilling the Institutional Social Responsibilities with high spirits to ensure the safety of JISCE staff members.
Taking forward the initiative to contain the spread of the deadly virus among the employees and their families, JISCE has been persistent with regular medical check-ups and daily sanitization of every nook and corner of the campus.
Jolley, 32, played a total of 269 games and kicked 82 goals in his illustrious VFL career, including a club record 217 games at Williamstown.
Captain of the Calder Cannons 2004 premiership team and the 2004 U18 Vic Metro team, Jolley was drafted by Essendon in 2005 and played four AFL games in 2006. He then went on to play 52 VFL matches with the Bendigo Bombers before crossing to Williamstown in 2008, captaining the Club from 2012-2017.
In his eleven seasons at Williamstown, Jolley has won the Gerry Callahan Medal, the Club’s best and fairest award, a record-equalling four times: 2011, 2012, 2014 and 2015, and captained the Seagulls’ stand-alone 2015 Premiership team.
The Ron James Memorial Trophy, voted on by the playing group each year for the player they feel shows respect, professionalism and sacrifices a player makes for the team, was presented to Ben for the ninth time, and eighth consecutive year, in 2017.
Jolley has been named in the VFL Team of the Year on seven consecutive occasions, twice represented the Victorian state team, and is a life member of both Williamstown and the VFL.
CEO Jason Reddick stated “In the 155 year history of the Williamstown Football Club, Ben Jolley’s record of achievements as a player and captain are second to none. Ben’s sustained performances at the top level are testament to the enormous dedication and professionalism that he has displayed in preparing himself, over his whole career, to be the best player he can be. As humble as he is, Ben is recognised as one of the most respected players in the VFL competition, will forever be remembered as one of the greats of the modern era, and will undoubtedly be inducted as an official Legend of our Club.
More importantly than his on-field accomplishments however, the real legacy that Ben Jolley leaves behind at Williamstown is the amazing influence he has had on a generation of teammates, coaches, staff, volunteers, members and supporters. Ben has been an inspiration to everyone at the Williamstown Football Club, setting the standard of expectation on and off the field for all to follow, being a trusted friend to all, and being the most respected leader and representative that our Club could ever hope for. His nine Ron James awards are testament to that. Everyone who has passed through the doors of our Club over the last 11 years, is blessed to have met and learned from Ben Jolley”.
“It’s no surprise that Ben has an amazingly supportive family and we thank his proud parents Phil and Carmel, wife Jane, daughters Ada and Lucy, brother Alex, sister Libby, and the extended family, for sharing in the wonderful journey that Ben has had with Williamstown.“ added Reddick.
“I thank the Club for the opportunity that I have had during my time here. Williamstown has played a big part in my life over the past 11 seasons” said Jolley. “My time at the Club will hold a special place in my heart. To have been able to Captain the Club to the 2015 premiership, be awarded Club & VFL Life Membership, and have so many teammates become lifelong friends, is something that I hold very close to me” Jolley added.
We congratulate Ben on a fantastic career and wish him and his family the very best for the future.
One of the GREAT VFL players !!
Congratulations! Ben Jolley has been one of the most respected, long-serving players in the competition and he has had a fantastic career at Williamstown.
Has been rumoured that Coburg are VERY keen to acquire his services.
Reading between the lines I get the feeling that it may not have been his decision entirely to retire (from Willi).
That's my mail Wal, wanted to go on but wasn't encouraged to do so, will land at Port or Coburg, inconceivable really but that's how Willy FC roll.
All roads lead to Port !! | http://vflfooty.com/node/10325 |
Theory of quantum materials
Putting a large number of identical quantum particles together induces a wide range of new and often unexpected physical phenomena. Due to their quantum nature, the interactions of these particles lead to the emergence of new states, new phases, new excitations, new physical laws and principles. This is particularly true for correlated electrons in solids, which are at the heart of our theoretical research.
We focus on many-body effects, ordering phenomena, and collective excitations in strongly correlated electron compounds with 3d or 4f/5f elements. Of special interest are magnetic ordering and spin dynamics in compounds with low-dimensional effective interactions, such as layered compounds of 3d elements (vanadates, cuprates) and 4f pnictide compounds. Competing Coulomb and exchange interactions often produce frustration of magnetic degrees of freedom, which in turn leads to a large number of low-lying quasi-degenerate states. The high quasi-degeneracy of these states has characteristic traces in various observables, for example the magnetic excitation spectrum or the field-dependent ordered moment. Characteristic features can also be seen in the temperature and field dependence of thermodynamic quantities like heat capacity, magnetic susceptibility, magnetization, and the magnetocaloric effect.
To investigate these effects, we develop and use various theoretical methods, including classical analytical methods of many-body physics such as spin-wave theory, numerical exact diagonalisation of model Hamiltonians on finite tiles and the finite temperature Lanczos algorithm. Our goal is to derive a detailed understanding of the materials we are exploring. To this end we work closely together with our in-house experimental colleagues as well as with theoretical and experimental groups worldwide. A strong overlap and intense cooperation also exists with the theorists from our in-house departments as well as with the neighbouring Max Planck Institute for the Physics of Complex Systems. | https://www.cpfs.mpg.de/1815358/Theory-of-quantum-materials---B_-Schmidt |
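As a toy illustration of the exact-diagonalisation approach mentioned above, the sketch below builds the dense Hamiltonian of a small spin-1/2 Heisenberg ring and diagonalises it with NumPy. It is a minimal example under assumed conventions (J = 1, periodic boundaries), not the group's production code; realistic studies use sparse matrices and Lanczos iteration rather than full diagonalisation.

```python
import numpy as np

def heisenberg_chain(n):
    """Dense Hamiltonian of a spin-1/2 Heisenberg ring of n sites,
    H = sum_i S_i . S_{i+1} with J = 1 (antiferromagnetic)."""
    dim = 2 ** n
    H = np.zeros((dim, dim))
    for state in range(dim):
        for i in range(n):
            j = (i + 1) % n                   # periodic boundary condition
            si = (state >> i) & 1
            sj = (state >> j) & 1
            if si == sj:
                H[state, state] += 0.25       # S^z_i S^z_j, parallel spins
            else:
                H[state, state] -= 0.25       # S^z_i S^z_j, antiparallel spins
                flipped = state ^ ((1 << i) | (1 << j))
                H[flipped, state] += 0.5      # spin-flip (S+S- + S-S+)/2 term

    return H

H = heisenberg_chain(8)
energies = np.linalg.eigvalsh(H)
print(energies[0] / 8)  # ground-state energy per site, roughly -0.456
```

For the 4-site ring this reproduces the exact ground-state energy of -2, a standard textbook check.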
Here in the States, the National Institute of Standards and Technology (NIST) catalogs over 1,300 standard reference materials (SRM). Another agency, the NASA Technical Standards System (NTSS) sets boundaries on metrics, research, and applications to assure solid scientific outcomes. The NTSS is inextricably an extension of the NIST.
Follow this link to a scholarly paper to learn more on the subject of the Scientific Method and the Metric System.
Astronomical Standards Are Out of This World
Astronomers and astrophysicists apply special standards toward the goal of making the astronomical relatable to us earthlings.
The Astronomical Unit (AU) is the nominal distance between the center of the Earth and the sun, 93 million miles. This standard works well when describing distances here in our solar system. But, outside the solar system, scientists apply other standards.
Most people are familiar with the speed of light at 186,000 miles per second and distance indicated in light years. The light year gets us into a metric beyond the AU.
The parsec is a unit of measure beyond the light year and the astronomical unit.
One parsec is approximately 3.26 light years (3.086 × 10^13 kilometers). Why is a parsec 3.26 light-years? Learn more from Astronomy Magazine
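The conversions among these three units are simple arithmetic, so they can be checked in a few lines. The constants below are standard rounded values, not figures taken from this article:

```python
# Rough astronomical-unit bookkeeping (illustrative rounded constants):
C_KM_S = 299_792.458                    # speed of light, km/s
AU_KM = 149_597_870.7                   # one astronomical unit, km
LY_KM = C_KM_S * 365.25 * 24 * 3600     # one (Julian) light year, km
PARSEC_KM = 3.0857e13                   # one parsec, km

print(LY_KM)              # ~9.46e12 km
print(PARSEC_KM / LY_KM)  # ~3.26 light years per parsec
print(PARSEC_KM / AU_KM)  # ~206,265 AU per parsec
```

The last ratio is where the parsec comes from: it is the distance at which one AU subtends an angle of one arcsecond, and there are about 206,265 arcseconds in a radian.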
Following the sound science of Einstein’s General Relativity and the measurements by the Wilkinson Microwave Anisotropy Probe (WMAP) satellite, astrophysicists calculate the age of the universe to 13.77 billion years, with an uncertainty of only 0.4%.
The Methuselah Star is the ‘astronomical elephant’ in the Room
Applying the same scientific standards used to calculate the age of the universe, the Methuselah Star located nearby in our own Milky Way Galaxy is 16 billion years old. That’s a problem, since most researchers agree that the Big Bang that created the universe occurred about 13.8 billion years ago.
Instead of consistently relying on sound science and accepted standards, astrophysicists twist the standards to reconcile the difference. In accounting terms, this amounts to “cooking the books”, a crime in civil society.
Now a team of astronomers has derived a new, less nonsensical age for the Methuselah star, incorporating information about its distance, brightness, composition and structure.
“Put all of those ingredients together, and you get an age of 14.5 billion years, with a residual uncertainty that makes the star’s age compatible with the age of the universe,” study lead author Howard Bond, of Pennsylvania State University and the Space Telescope Science Institute in Baltimore, said in a statement.Space.com 7 Mar 2013 | Strange ‘Methuselah’ Star Looks Older Than the Universe
Rear Admiral Grace Hopper must be chuckling to herself up in Heaven.
There is one more standard of measurement to consider, the God Day.
“…do not let this one fact escape your notice, beloved, that with the Lord one day is like a thousand years, and a thousand years is like one day” Bible Gateway: 2 Peter 3:8, AMP version
How do I explain the Methuselah Star? I can't, but God can. As the prophet quoted God, the Creator of the Universe, in Isaiah 55:8-9:
“For My thoughts are not your thoughts, Nor are your ways My ways,” declares the Lord. “For as the heavens are higher than the earth, So are My ways higher than your ways And My thoughts higher than your thoughts.
God has a sense of humor. God sets the ultimate, irrevocable standards of the universe, above the NIST and the CEN, above the ruminations of the brightest minds on earth.
According to God, He created everything from nothing and He did it in six God days. Are God days equal to our human 1,000 years? I don’t know. That’s not the only thing I don’t know and I challenge anyone to prove he or she has thoughts higher than the thoughts of God. | https://rockwallconservative.me/2020/09/22/science-the-scientific-method-and-standards/ |
Best time to visit Kyrgyzstan
The best time to visit Kyrgyzstan is from June until September, when you will have pleasant temperatures and little rainfall. The highest average temperature in Kyrgyzstan is 26°C in July and the lowest is -1°C in January.
The weather and climate of Kyrgyzstan is suitable for a sun vacation and winter sports.
The average climate score for Kyrgyzstan is a 7.3. This is based on various factors, such as average temperatures, the chance of precipitation and the weather experiences of others.
Kyrgyzstan has the continental climate prevailing. In the summer it is warm and dry, and in the winter it is cold and wet. If you want to know what the average temperature is in Kyrgyzstan or when most precipitation (rain or snow) falls, you can find an overview below. This way, you are well prepared. Our average monthly climate data is based on data from the past 30 years.
Climate Kyrgyzstan
Climate Kyrgyzstan per Month
Kyrgyzstan has the continental climate prevailing. The average annual temperature for Kyrgyzstan is 13° degrees and there is about 524 mm of rain in a year. It is dry for 114 days a year with an average humidity of 57% and an UV-index of 3.
| Month | Day | Night | Precip | Rain days | Snow days | Dry days | Sun hours/day | UV-index |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Jan | -1°C | -8°C | 21 mm | 11 | 10 | 10 | 7 | 1 |
| Feb | -1°C | -8°C | 28 mm | 10 | 11 | 7 | 8 | 2 |
| Mar | 8°C | 0°C | 45 mm | 15 | 5 | 11 | 8 | 3 |
| Apr | 13°C | 4°C | 53 mm | 17 | 1 | 12 | 8 | 4 |
| May | 19°C | 9°C | 78 mm | 19 | 0 | 12 | 12 | 4 |
| Jun | 23°C | 14°C | 84 mm | 21 | 0 | 9 | 12 | 5 |
| Jul | 26°C | 16°C | 51 mm | 17 | 0 | 14 | 12 | 6 |
| Aug | 25°C | 15°C | 42 mm | 13 | 0 | 18 | 10 | 5 |
| Sep | 20°C | 11°C | 30 mm | 12 | 0 | 18 | 10 | 4 |
| Oct | 14°C | 5°C | 32 mm | 12 | 2 | 17 | 9 | 3 |
| Nov | 6°C | -2°C | 32 mm | 11 | 7 | 12 | 6 | 2 |
| Dec | 0°C | -7°C | 28 mm | 9 | 9 | 13 | 6 | 2 |

Wind force: around 2 Bft in every month.
Climate Kyrgyzstan January
On average, January in Kyrgyzstan has a maximum temperature of -1° and a minimum of around -8°. There are 11 days of rainfall with a total of 21 mm, it is dry on 10 days, and it snows on an average of 10 days.
Climate Kyrgyzstan February
On average, February in Kyrgyzstan has a maximum temperature of -1° and a minimum of around -8°. There are 10 days of rainfall with a total of 28 mm, it is dry on 7 days, and it snows on an average of 11 days.
Climate Kyrgyzstan March
On average, March in Kyrgyzstan has a maximum temperature of 8° and a minimum of around 0°. There are 15 days of rainfall with a total of 45 mm, it is dry on 11 days, and it snows on an average of 5 days.
Climate Kyrgyzstan April
On average, April in Kyrgyzstan has a maximum temperature of 13° and a minimum of around 4°. There are 17 days of rainfall with a total of 53 mm, it is dry on 12 days, and it snows on an average of 1 day.
Climate Kyrgyzstan May
On average, May in Kyrgyzstan has a maximum temperature of 19° and a minimum of around 9°. There are 19 days of rainfall with a total of 78 mm, and it is dry on 12 days.
Climate Kyrgyzstan June
On average, June in Kyrgyzstan has a maximum temperature of 23° and a minimum of around 14°. There are 21 days of rainfall with a total of 84 mm, and it is dry on 9 days.
Climate Kyrgyzstan July
On average, July in Kyrgyzstan has a maximum temperature of 26° and a minimum of around 16°. There are 17 days of rainfall with a total of 51 mm, and it is dry on 14 days.
Climate Kyrgyzstan August
On average, August in Kyrgyzstan has a maximum temperature of 25° and a minimum of around 15°. There are 13 days of rainfall with a total of 42 mm, and it is dry on 18 days.
Climate Kyrgyzstan September
On average, September in Kyrgyzstan has a maximum temperature of 20° and a minimum of around 11°. There are 12 days of rainfall with a total of 30 mm, and it is dry on 18 days.
Climate Kyrgyzstan October
On average, October in Kyrgyzstan has a maximum temperature of 14° and a minimum of around 5°. There are 12 days of rainfall with a total of 32 mm, it is dry on 17 days, and it snows on an average of 2 days.
Climate Kyrgyzstan November
On average, November in Kyrgyzstan has a maximum temperature of 6° and a minimum of around -2°. There are 11 days of rainfall with a total of 32 mm, it is dry on 12 days, and it snows on an average of 7 days.
Climate Kyrgyzstan December
On average, December in Kyrgyzstan has a maximum temperature of 0° and a minimum of around -7°. There are 9 days of rainfall with a total of 28 mm, it is dry on 13 days, and it snows on an average of 9 days.
Flight tickets to Kyrgyzstan
Through our partners below you will find the cheapest flight tickets for Kyrgyzstan. Click on a logo to visit the website or take a look on Skyscanner for the cheapeast flight tickets from all the airline companies.
About Kyrgyzstan
Kyrgyzstan is in the continental part of Asia and the capital is Bishkek. The total surface is 198,500 km² and the population consists of 6,140,200 inhabitants. In terms of area, the country is somewhat smaller than the Philippines. If you want to call Kyrgyzstan, you have to use +996 or 00996 before the phone number, without the first 0 of the original phone number (if it occurs). If the normal number is 05012457809, for example, you can remove the first 0 and call +9965012457809 or 009965012457809.
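The dialling rule above is purely mechanical, so it can be sketched in a few lines (the function name is hypothetical):

```python
def to_international(local_number, country_code="+996"):
    """Apply the dialling rule described above: drop a leading 0
    from the local number, then prepend the country code."""
    digits = local_number.strip()
    if digits.startswith("0"):
        digits = digits[1:]   # remove the trunk prefix
    return country_code + digits

print(to_international("05012457809"))           # +9965012457809
print(to_international("05012457809", "00996"))  # 009965012457809
```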
Kyrgyzstan is, in a straight line, 5,331 km from Manila. From Manila International Airport the flight time is about 7 hours and 25 minutes.
Exchange rate Kyrgyzstani som (KGS)
7 June 2020
Install exchange rate app
It can be difficult to calculate the exchange rate yourself. Install an app on your phone that updates the exchange rate daily. By entering the local price you immediately see what the price is in Euro.
Bring your credit card
Take a credit card along with your own bank card. If your bank card is not accepted then you have the credit card that you can use.
The cratered face of the Moon is mute testimony to the frequency of asteroid strikes in the Solar System. And Earth itself is hardly immune. According to the Lunar and Planetary Laboratory at the University of Arizona our homeworld has suffered upwards of three million impact craters larger than 1 km in diameter – the largest stretching more than 1000 km in diameter.
The 1908 Tunguska impact in Siberia, the largest impact in recorded history, is thought to have been triggered by an incoming object of 30-40 m in diameter. The 2013 Chelyabinsk airburst, whose shockwave struck six cities across Russia, may have been caused by an asteroid just 20 m across.
ESA has been considering the use of space missions for asteroid risk assessment for almost two decades. Although the chance of a major asteroid impact is low, the potential consequences for our society could be very severe. Small bodies are continually colliding with Earth; however, the vast majority of these objects are very small and pose no threat to human activity.
Larger impacts are rarer, but, when they do occur, they can lead to a major natural catastrophe. For comparison, the energy released from the Tohoku earthquake in Japan (11 March 2011) was estimated to be approximately 45 megatons; this natural disaster caused an estimated economic loss of over $200 billion according to the World Bank.
The effects of an asteroid impact on Earth depend on many factors, such as the location of impact and the trajectory and physical properties of the asteroid, but a small 150 m object could release several times the amount of energy released in Tohoku.
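A back-of-envelope kinetic-energy estimate makes that comparison concrete. All the impactor properties below (density, speed) are assumed illustrative values, not figures from ESA:

```python
import math

# Assumed impactor properties (illustrative, not from the article):
diameter_m = 150.0
density = 3000.0      # kg/m^3, typical for a rocky asteroid
speed = 20_000.0      # m/s, a typical Earth-impact speed

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius ** 3   # sphere of rock
energy_j = 0.5 * mass * speed ** 2                 # kinetic energy, joules
energy_mt = energy_j / 4.184e15                    # 1 megaton TNT = 4.184e15 J

print(energy_mt)        # a few hundred megatons
print(energy_mt / 45)   # multiples of the ~45 Mt Tohoku estimate
```

With these assumptions the result comes out to a few hundred megatons, i.e. several times the Tohoku figure quoted above, consistent with the article's claim.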
The difference with earthquakes though is that for an asteroid impact we do have the technology available to mitigate such a threat, but it has never been tested in realistic conditions. Moreover, the design of an efficient mitigation strategy relies on our understanding of the physical properties of threatening objects and their response to a mitigation tool, which is still extremely poor.
Most of the techniques that have been proposed to avoid an Earth impact event are linked to altering the trajectory of an asteroid on a collision course with Earth. Among these proposals, the one that is currently being considered as more mature, because it is based on existing and affordable spacecraft technology, is the kinetic impactor, which changes the orbit of an asteroid by a direct hit of a spacecraft at a very high relative speed (several km/s).
Europe has conducted thorough studies of this approach – such as the Don Quijote mission concept, which gave rise to AIDA – which would be suitable to address the statistically most common threats, namely of bodies of up to a few hundred meters in diameter. In the framework of such mitigation studies, a better understanding of the fragmentation process resulting from an impact is required to answer essential questions:
- How does impactor momentum transfer depend on the bulk density, porosity, surface and internal properties of the target near-Earth object and the relative velocity vector of the impactor?
- How much impactor kinetic energy may be going into fragmentation and restructuring or into the kinetic energy of the ejected material?
- Can momentum enhancing ejecta production be characterised in terms of parameters that are, for many objects, only available from ground-based observations, such as the taxonomic type?
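The "momentum enhancement" in the first question is commonly summarised by a single factor, often written beta: the asteroid's speed change is roughly delta_v = beta * m * U / M, where m and U are the impactor's mass and relative speed and M is the asteroid's mass, with beta > 1 when recoil from ejecta adds to the push. The numbers below are illustrative assumptions, not mission data:

```python
# Toy kinetic-impactor estimate (all numbers are assumptions, not mission data):
m_sc = 600.0     # spacecraft mass at impact, kg
u = 6_000.0      # relative impact speed, m/s
m_ast = 5.0e9    # target asteroid mass, kg
beta = 1.5       # momentum-enhancement factor (>1 when ejecta adds recoil)

delta_v = beta * m_sc * u / m_ast   # change in the asteroid's speed, m/s
print(delta_v * 1000)               # about 1 mm/s
```

A speed change of the order of millimetres per second sounds tiny, but applied years before a predicted impact it shifts the asteroid's arrival time enough to turn a hit into a miss, which is why the kinetic impactor is considered viable.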
Hera will be Europe and humankind’s first investigation of a planetary defence technique. Together with its US counterpart, DART, Hera is part of the joint AIDA mission, called into life to study the effect of a kinetic impactor hitting an asteroid. The target of the mission is the double asteroid system Didymos that will have a close approach to Earth in 2022. | http://www.esa.int/Our_Activities/Operations/Space_Safety_Security/Hera/Planetary_defence2 |
Welcome to the first Beacon of Hope weekly newsletter.
As ships navigate dangerous and complicated oceans at night, they face great challenges. The darkness of night is an obstacle for ships navigating the ocean. Ships can get turned around and confused, and can easily get lost at sea. There are also dangers for ships that don’t understand their surroundings.
They cannot see everything around them…this could lead to their demise. What guides ships through the night are lighthouses.
Lighthouses help ships navigate the sea and avoid potential dangers. The lighthouse serves as a Beacon of Hope for the ships and guides them on their journey. As many of you know, our students are often like ships traveling the dangerous seas. Our students know they want to move forward and have a safe journey, but are often unaware of the direction they must go and the dangers they must avoid.
Modesto Junior College is that lighthouse for our students…we are their Beacon of Hope! In my second full semester, I continue to be inspired by the compassion, empathy, and overall commitment that so many of you show to our students. Our students cannot just “pull themselves up by the bootstraps” like students in previous decades. The obstacles our students face just to make it to the front door of our campus would be crippling to many.
Our students deserve the very best our college has to offer and they are depending on us not just to help them graduate but also to change their lives. Our students persevere and we need to be that Beacon of Hope to continue to ensure they reach their destinations. Today, MJC is almost 100 years old and at the same time MJC must prepare to be 100 years new. How do we do this you might ask?
Over the coming weeks I will share with you all MJC’s Strategic Priorities. These five priorities were vetted through the Academic Senate, Associated Students, and College Council. Our five priorities will focus on the areas of Access, Affordability, Building Community, Transformative and Innovative Practices and Stewardship of Resources.
Thank you for all that you do, | https://www.mjc.edu/president/beaconarchive.php |
String. It supports the usual add, contains, delete, size, and is-empty methods. It also provides an iterator method for iterating over all the elements in the set.
This unordered set class implements PATRICIA (Practical Algorithm to Retrieve Information Coded In Alphanumeric). In spite of the acronym, string keys are not limited to alphanumeric content. A key may possess any string value, with one exception: a zero-length string is not permitted.
Unlike other generic set implementations that can accept a parameterized key
type, this set class can only accommodate keys of class
String. This unfortunate restriction stems from a
limitation in Java. Although Java provides excellent support for generic
programming, the current infrastructure somewhat limits generic collection
implementations to those that employ comparison-based or hash-based methods.
PATRICIA does not employ comparisons or hashing; instead, it relies on
bit-test operations. Because Java does not furnish any generic abstractions
(or implementations) for bit-testing the contents of an object, providing
support for generic keys using PATRICIA does not seem practical.
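To make the bit-testing idea concrete, here is a small illustrative sketch (not taken from the PatriciaSET source; the helper name and the MSB-first convention are assumptions) of how one might read bit i of a String key in Java:

```java
public class KeyBits {
    // Illustrative sketch: read bit i of a String key, most-significant bit
    // first within each 16-bit char. Positions past the end of the key read
    // as 0. A PATRICIA trie decides branching with single-bit tests like
    // this instead of full-key comparisons.
    static int bitAt(String key, int i) {
        int charIndex = i / 16;              // which char holds bit i
        if (charIndex >= key.length()) return 0;
        int bitInChar = i % 16;              // offset within that char, MSB first
        return (key.charAt(charIndex) >>> (15 - bitInChar)) & 1;
    }

    public static void main(String[] args) {
        // 'A' is 0x0041 = 0000 0000 0100 0001 in binary
        System.out.println(KeyBits.bitAt("A", 9));   // 1
        System.out.println(KeyBits.bitAt("A", 15));  // 1
        System.out.println(KeyBits.bitAt("A", 0));   // 0
    }
}
```

Because Java offers no generic abstraction for extracting bits from an arbitrary object, a helper like this has to be written per key type, which is why the class is restricted to String.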
PATRICIA is a variation of a trie, and it is often classified as a space-optimized trie. In a classical trie, each level represents a subsequent digit in a key. In PATRICIA, nodes only exist to identify the digits (bits) that distinguish the individual keys within the trie. Because PATRICIA uses a radix of two, each node has only two children, like a binary tree. Also like a binary tree, the number of nodes, within the trie, equals the number of keys. Consequently, some classify PATRICIA as a tree.
The analysis of PATRICIA is complicated. The theoretical worst-case performance for an add, contains, or delete operation is O(N), when N is less than W (where W is the length in bits of the longest key), and O(W), when N is greater than W. However, the worst case is unlikely to occur with typical use. The average (and usual) performance of PATRICIA is approximately ~lg N for each add, contains, or delete operation. Although this appears to put PATRICIA on the same footing as binary trees, this time complexity represents the number of single-bit test operations (under PATRICIA), and not full-key comparisons (as required by binary trees). After the single-bit tests conclude, PATRICIA requires just one full-key comparison to confirm the existence (or absence) of the key (per add, contains, or delete operation).
In practice, decent implementations of PATRICIA can often outperform balanced binary trees, and even hash tables. Although this particular implementation performs well, the source code was written with an emphasis on clarity, and not performance. PATRICIA performs admirably when its bit-testing loops are well tuned. Consider using the source code as a guide, should you need to produce an optimized implementation, for another key type, or in another programming language.
Other resources for PATRICIA:
Sedgewick, R. (1990) Algorithms in C, Addison-Wesley
Knuth, D. (1973) The Art of Computer Programming, Addison-Wesley
Constructor summary:
PatriciaSET() - Initializes an empty PATRICIA-based set.
Method summary:
void add(String key) - Adds the key to the set if it is not already present.
boolean contains(String key) - Does the set contain the given key?
void delete(String key) - Removes the key from the set if the key is present.
Iterator<String> iterator() - Returns all of the keys in the set, as an iterator.
static void main(String[] args) - Unit tests the PatriciaSET data type.
String toString() - Returns a string representation of this set.
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface java.lang.Iterable: forEach, spliterator
public void add(String key)
Parameters: key - the key to add
Throws: IllegalArgumentException if key is null or if key is the empty string.
public boolean contains(String key)
Parameters: key - the key
Returns: true if the set contains key and false otherwise
Throws: IllegalArgumentException if key is null or if key is the empty string.
public void delete(String key)
Parameters: key - the key
Throws: IllegalArgumentException if key is null or if key is the empty string.
public Iterator<String> iterator()
Returns all of the keys in the set, as an iterator. To iterate over all of the keys in a set named set, use the foreach notation: for (Key key : set).
public String toString()
public static void main(String[] args)
Unit tests the PatriciaSET data type. This test fixture runs a series of tests on a randomly generated dataset. You may specify up to two integer parameters on the command line. The first parameter indicates the size of the dataset. The second parameter controls the number of passes (a new random dataset is generated at the start of each pass).
Verb structures can be divided into two main categories: the active and the passive voice. The active voice expresses what the subject of the sentence is doing, whereas the passive voice expresses what is being done to the subject of the sentence, often leaving out the person actually performing the action.
Active voice
Han ska renovera villan.
He will renovate the villa.
Passive voice
Villan ska renoveras.
The villa will be renovated.
A common way to form a passive verb structure in Swedish is the s-passive. The s-passive is formed by taking the object of the active sentence and moving it into the place of the subject in that sentence. After this, the letter s is placed after the main verb to mark the verb as having a passive form.
Active voice
En kvinna öppnade dörren för mig.
A woman opened the door for me.
Passive voice
Dörren öppnades för mig.
The door was opened for me.
When forming the s-passive in the present tense, extra care must be taken. The s-marker of the s-passive is attached to the verb according to the following rules:
Verb group 1: infinitive + s (älska -> älskas)
Verb groups 2, 3 and 4: verb stem + s (skriva -> skriv/er -> skrivs)
Verb stems ending with s: verb stem + es (läsa -> läs/er -> läses)
EXAMPLES:
Vi talar engelska i skolan varje dag. -> Engelska talas i skolan varje dag.
Ungdomar dricker för mycket läsk. -> Läsk dricks för mycket.
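As a small illustration (not part of the lesson itself), the present-tense rules above can be sketched as a Java helper. The method name and the integer verb-group parameter are invented for this sketch; the input is the infinitive for group 1 verbs and the verb stem for groups 2-4:

```java
public class SPassive {
    // Sketch of the present-tense s-passive rules:
    //   group 1:          infinitive + s  (älska -> älskas)
    //   groups 2, 3, 4:   stem + s        (skriv -> skrivs)
    //   stem ending in s: stem + es       (läs -> läses)
    static String presentPassive(String form, int group) {
        // For group 1, 'form' is the infinitive; otherwise it is the stem.
        if (group != 1 && form.endsWith("s")) return form + "es";
        return form + "s";
    }

    public static void main(String[] args) {
        System.out.println(SPassive.presentPassive("älska", 1)); // älskas
        System.out.println(SPassive.presentPassive("skriv", 4)); // skrivs
        System.out.println(SPassive.presentPassive("läs", 2));   // läses
    }
}
```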
In other tenses the s-marker is simply added to the end of the main verb.
EXAMPLES:
Vi valde henne till president. -> Hon valdes till president.
Jag gav henne boken. -> Hon gavs boken.
If need be, the original subject of the active sentence doesn’t have to be left out, but can be added into the passive sentence as an agent with the help of the preposition av:
Active voice
En kvinna öppnade dörren för mig.
Passive voice + agent
Dörren öppnades för mig av en kvinna. | https://www.worddive.com/grammar/en/swedish-grammar/8-passive-voice/ |
Applications of Karnatic Rhythm to Western Music
This is a unique two-year programme that can only be studied at the Conservatorium van Amsterdam, and is intended for:
* Performers from both classical and improvisation backgrounds
* Creators from both classical and improvisation backgrounds
* Musicians with a non-western background who can demonstrate a high command of their instrument
* Musicians with a pedagogical background or orientation
The material imparted in this programme is suitable for musicians interested in one or more of the following aspects:
A) Exploring rhythm as a tool to
* Improve accuracy and understanding in the performance of contemporary music, whether composed or improvised.
* Create new music, regardless of the musician’s aesthetics or background
* Find new paths in improvisation
* Learn new approaches to teaching music, whether as a solfege or instrument teacher
B) Exploring ‘transversality’ between different aspects of music (composition, performance, improvisation)
C) Openness to non-western musical cultures
Opening doors to a wide range of professional positions for musicians
This programme would enable musicians to work professionally in fields or positions such as:
* Member of a contemporary music ensemble (e.g. Ictus Ensemble, Ensemble Modern, Ensemble Intercontemporain, Musikfabrik etc.)
* Member of already established jazz ensembles that explore complex rhythmical concepts (e.g. Miles Okazaki, Dan Weiss, Aka Moon, Vijay Iyer etc).
* Develop solo careers, whether as a contemporary specialist, or as a creator, be it in a through-composed or improvised setting.
* The creation of or functioning in groups or ensembles that explore transversality of genres (e.g. jazz-flamenco, improvisation with classical contemporary aspects).
* The creation of or functioning in groups that explore non-western influences.
* Enhancing flexibility within popular or traditional styles (e.g. rap, hip-hop, pop, rock, electronic dance music etc.)
* Teaching a new rhythmical solfege from a different angle to musicians/students/pupils of any background or age.
* Teaching instrumental lessons emphasising rhythm and rhythmical development.
* Ethnomusicology with a strong practical approach to teaching.
The expansion of rhythmical possibilities has been one of the cornerstones of musical developments in the last hundred years, whether through western development or through the borrowing from non-western traditions. Most classical performers, whether in orchestral or ensemble situations, will have to face a piece by Stravinsky, Béla Bartók, Ligeti, Messiaen, Varèse or Xenakis, to mention just a few well-known composers, while improvisers face music influenced by Dave Holland, Steve Coleman, Aka Moon, Vijay Iyer, Miles Okazaki or elements from the Balkans, India, Africa or Cuba. Furthermore, many creators, whether they belong to the classical or jazz worlds, are currently organising their music not only in terms of pitch content but with rhythmical structures and are eager to obtain information that would structure and classify rhythmical possibilities in a coherent and practicable way.
20th- and 21st-century music demands a new approach to rhythmical training, a training that will provide musicians with the necessary tools to face with accuracy more varied and complex rhythmical concepts, while keeping the emotional content. The programme Applications of Karnatic Rhythm to Western music addresses ways in which the Karnatic rhythmical system can enhance, improve or even radically change the creation (be it written or improvised) and interpretation of (complex) contemporary classical and improvised music.
The incredible wealth of rhythmical techniques, devices and concepts, the different types of tala construction, the use of rhythm as a structural and developmental element and, last but not least, the use of mathematics to sometimes very sophisticated levels in South India, enable the western musician to improve and enhance their accuracy and/or their creative process and make the study of Karnatic rhythm a fascinating adventure of far-reaching consequences. The large variety of rhythmical devices used in Karnatic music is, in the West, one of the elements most unknown and least documented, yet potentially most universal.
Working with Karnatic rhythmical techniques and their almost infinite developmental possibilities enables the musician to discover new ways to learn, improvise, analyse and read new music, to create, and to teach. This programme can provide students with many important tools and methods that they can adapt and use in their own way in their own work.
General entrance requirements
The special ‘study by contract’ of the programme ‘Applications of Karnatic rhythm to western music’ is designed for performers and composers with a keen interest in rhythmic traditions from around the world, with the aim of combining these with our Western musical background. Candidates must be active in the musical field and can be admitted provided that they show proficiency in music theory and a sufficient level of performance skills.
In addition to the completed and signed application form and the other documents, students are requested to send an audio and/or video recording (CD or DVD in data format, or links on the internet) of a performance with a maximum length of 15 minutes, along with a motivation letter stating the student's reasons for wishing to enter the programme and a short description of what the candidate hopes to achieve after finishing the course(s).
Selected candidates will be invited to an interview of around 20 minutes, in which they can present themselves and perform.
General first year completion requirements (see below for further detail):
1) Theory exam
2) Concert presentation of around 30 minutes
General graduation requirements (see below for further details)
1) Theory exam
2) Recital of 60 minutes (40 minutes for composers)
The participant will receive a certificate upon completion of the studies.
Programme Content
Based on the aspects the student wishes to stress, the guidelines below can be followed. There are three main approaches (Classically-trained musicians, Improvisers, Composers), each one of them with four different options to choose from.
Each approach has two elements in common. These are:
* A weekly individual meeting of 30 minutes in order to coach the student’s project
* Attending weekly sessions of 90 minutes of the so-called ‘deepening sessions’, where the ‘roots’ of the material, as well as what other creators have done or are doing with Karnatic rhythmical concepts, will be listened to and analysed within a musical context. | https://www.conservatoriumvanamsterdam.nl/en/study/continuing-education/applications-of-karnatic-rhythm-to-western-music/ |