text
stringlengths 0
7.84M
| meta
dict |
---|---|
With everyone from academics to Microsoft looking at the prospect of storing data using DNA, it was probably inevitable that someone would start looking at the security implications. Apparently, they're worse than most people might have expected. It turns out it's possible to encode computer malware in DNA and use it to attack vulnerabilities on the computer that analyzes the sequence of that DNA.
The researchers didn't find an actual vulnerability in DNA analysis software—instead, they specifically made a version of some software with an exploitable vulnerability to show that the risk is more than hypothetical. Still, an audit of some open source DNA analysis software shows that the academics who have been writing it haven't been paying much attention to security best practices.
More like a virus than most
DNA sequencing involves determining the precise order of the bases that make up a DNA strand. While the process that generates the sequence is generally some combination of biology and/or chemistry, once it's read, the sequence is typically stored as an ASCII string of As, Ts, Cs, and Gs. If handled improperly, that chunk of data could exploit vulnerable software to get it to execute arbitrary code. And DNA sequences tend to see a lot of software, which find overlapping sequences, align it to known genomes, look for key differences, and more.
To see whether this threat was more than hypothetical, the researchers started with a really simple exploit: store more data than a chunk of memory was intended to hold, and redirect program execution to the excess. In this case, said excess contained an exploit that would use a feature of the bash shell to connect into a remote server that the researchers controlled. If it worked, the server would then have full shell access to the machine running the DNA analysis software.
Actually implementing that in DNA, however, turned out to be challenging. DNA with Gs and Cs forms a stronger double-helix. Too many of them, and the strand won't open up easily for sequencing. Too few, and it'll pop open when you don't want it to. Repetitive DNA can form complex structures that get in the way of all the enzymes we normally use to manipulate DNA. The computer code they wanted to use, however, had lots of long runs of the same character, which made for a repetitive sequence that was very low in Gs and Cs. The company they were ordering DNA from couldn't even synthesize it.
In the end, they had to completely redesign their malware so that its translation into nucleic acids produced a DNA strand that could be synthesized and sequenced. The latter created another hurdle. The most common method of sequencing is currently limited to reading a few hundred bases at a time. Since each base has two bits of information, that means the malware has to be incredibly compact. That limits what can be done, and it explains why all this particular payload did was open up a remote connection.
Then, there was the matter of getting the malware executed. Since this was a proof of concept, the researchers made it easy on themselves: the modified an existing tool to create an exploitable vulnerability. They also made some changes to the system's configuration to make the execution of random memory locations easier (made the stack executable and turned off memory address randomization). While that makes the test environment less realistic, the goal was simply to demonstrate that DNA-delivered malware was possible.
With everything in place, they ordered some DNA online then sent it off to a facility for sequencing. When their sequences came back, they sent them through a software pipeline that included their vulnerable utility. Almost immediately, the computer running the software connected into their host, providing them with access to the machine. The malware worked.
Semi-realism
Given how easy the authors made things—a known vulnerability and a number of safeguards turned off—does this really pose a threat? There's good news and bad news here.
On the good side, there's the complications of translating computer instructions into DNA that can be synthesized and sequenced. Plus there's the issue that most sequencing machines are limited in how long a sequence they can read. The machine used in this work maxes out at 300 bases, which is the equivalent of 600 bits, and most facilities keep things shorter than that. Longer read machines are available, but they're also error prone, and any errors will typically disable the malware.
But it's also common for the software used to analyze DNA to look for places where two short sequences overlap and use that to build up longer sequences. This has the potential to expand the size of the malware considerably, although less of the analysis software pipeline will be exposed to these longer, assembled sequences.
Similar issues exist with how the malware is encoded. While the authors used each base to encode two bits, DNA analysis software handles DNA in various ways internally. For example, if sequencing doesn't provide a clear indication of what a base is, other characters may be used (for example, N for any base, or R for G or A). Any software that handles these ambiguous bases has to have a more complex encoding scheme; many simply use ASCII characters.
As a result, different pieces of software will be vulnerable to different malware encodings. While that means some software will be immune, the size of the DNA analysis pipelines typically means that a dozen or more pieces of software will be run in succession. Chances are good that at least one of them will use the same encoding as the malware.
Bad habits
The research community's habits are also a major point of vulnerability. The analysis software was generally not written with security in mind. Using the Clang compiler's analysis tools and HP's Fortify compiler, the authors searched a collection of open source DNA analysis software for potential vulnerabilities. They found widespread use of functions that are prone to buffer overflows (strcat, strcpy, sprintf, vsprintf, gets, and scanf)—about two instances for every 1,000 lines of code. "Our research suggests that DNA sequencing and analysis have not to date received significant—if any—adversarial pressure," they conclude.
The second issue is how easy it is to infiltrate malicious code onto other machines via DNA. The sequencing machines have such a high capacity, work from several different labs is run on a single machine at the same time. As a result, some of the sequences returned from the machine will end up mixed into an unrelated sample. When the researchers checked with another group that had their sequencing performed at the same time, they found that the other group's results contained 27 instances of the malware.
Separately, lots of services simply allow you to send in any DNA for sequencing, putting their software at risk. And many public repositories allow people to upload their sequence for analysis by others. So, you wouldn't even have to synthesize any DNA to have your exploit analyzed—you can simply upload the text of the sequence you've designed to someone else's data repository.
None of this means that a DNA-based exploit is around the corner. But it's a healthy warning that the research community and commercial DNA companies should look to improve their practices before this does become a problem.
|
{
"pile_set_name": "OpenWebText2"
}
|
Dichiarazioni di voto
Dichiarazioni di voto orali
Zuzana Roithová
- (CS) Un aspetto positivo del bilancio approvato di recente per il 2009 è che esso stabilisce priorità chiare, per esempio di sostegno delle piccole e medie imprese, protezione del clima e dei paesi più poveri colpiti dall'emergenza alimentare. Purtroppo non abbiamo molte possibilità di utilizzare il bilancio allo scopo di risolvere la crisi finanziaria, sia in ragione della sua modesta entità rispetto ai bilanci degli Stati membri - di cui rappresenta appena l'uno per cento - sia per la mancanza di flessibilità nelle norme stabilite per il bilancio pluriennale del periodo 2007-2013. Apprezzo gli sforzi compiuti dai deputati che hanno discusso con la Commissione affinché fossero apportate alcune correzioni, almeno per quanto concerne i problemi internazionali. Purtroppo il Consiglio non ha voluto concedere maggiore flessibilità. Con la ratifica del trattato di Lisbona, al Parlamento europeo saranno conferiti maggiori poteri.
Frank Vanhecke
(NL) Signora Presidente, poiché per mia natura sono molto critico sul modus operandi e le istituzioni dell'Unione europea, va da sé che io abbia votato contro questa relazione sul bilancio 2009. Innanzi tutto non sono affatto convinto che le istituzioni europee stiano spendendo in maniera accorta l'abbondante flusso di denaro che perviene loro tramite il gettito fiscale.
In secondo luogo ci immischiamo, a mio avviso, in troppi ambiti della politica e le sovvenzioni che eroghiamo agli Stati membri sono sempre considerate da questi come fondi europei di sorta che non hanno nulla a che vedere con loro e che proprio per questo motivo sono in maniera inefficiente dagli Stati membri medesimi.
Peraltro ho osservato che ricevo poche risposte, se ne ricevo, alle interrogazioni parlamentari che presento in merito alle spese operative delle numerose agenzie e organizzazioni affiliate all'UE. Tutto questo mi rende assai sospettoso e non fa che rafforzare il mio voto sfavorevole a questo bilancio.
Ignasi Guardans Cambó
(ES) Signora Presidente, vorrei puntualizzare brevemente che mi sono astenuto dal voto sulla relazione Thyssen relativa alla sicurezza dei giocattoli.
Ovviamente concordo con la relatrice e con la maggioranza di quest'Assemblea sulla necessità di garantire la sicurezza dei bambini e più in generale dei consumatori. Ritengo anche che si debbano rispettare le consuetudini culturali dei diversi Stati membri e soprattutto che la discussione sulla sicurezza non debba sfociare in estremismi legislativi che sfiorano il ridicolo, come in questo caso.
Alcuni dei requisiti di sicurezza introdotti con questa direttiva sfiorano senz'altro il ridicolo o per lo meno così è stato specialmente durante la discussione. La direttiva è stata salvata nel suo complesso. Andando avanti di questo passo, un giorno approveremo una direttiva per obbligare i bambini a indossare un casco ogni volta che escono di casa o a mettersi i guanti quando è freddo. A mio giudizio tutto ciò non ha senso, anche se è questa la direzione verso cui ci stiamo muovendo.
Pertanto, benché mi renda conto che la direttiva contenga anche degli aspetti molto positivi, credo anche che talvolta si spinga troppo oltre ed è per questo motivo che mi sono astenuto.
Zuzana Roithová
- (CS) Mi rallegro che siamo riusciti ad approvare la direttiva sulla sicurezza dei giocattoli in prima lettura e che abbiamo respinto l' assurda proposta dei Verdi e di alcuni socialisti per il collaudo obbligatorio di tutti i giocattoli da parte di organismi indipendenti. Essi hanno presentato questo emendamento come una sorta di blocco sebbene l'esperienza degli Stati Uniti e della Cina abbia dimostrato che i giocattoli sul mercato europeo sono difettosi nonostante i collaudi. Il nostro obiettivo è di rendere i produttori e gli importatori totalmente responsabili per la sicurezza dei giocattoli. E' compito dei produttori attestare la conformità dei loro prodotti agli standard. L'articolo 18 della direttiva impone i collaudi soltanto nei casi in cui tali standard non esistono. Il costo dei collaudi esterni si aggira attorno ai 3 000 euro nella Repubblica Ceca. Questo disposto escluderebbe dal mercato le piccole imprese dell'Unione europea, mentre il collaudo dei giocattoli in Cina non costituirebbe comunque una garanzia della loro sicurezza. La responsabilità deve ricadere su importatori e produttori, che non devono assolutamente avvalersi di centri di collaudo non regolamentati in giro per il mondo. Mi rallegro per questo regalo che facciamo ai genitori.
Hiltrud Breyer
(DE) Signora Presidente, non ho votato a favore della soluzione di compromesso sui giocattoli. Essa presenta tuttora un grave deficit di sicurezza, in particolare per quanto attiene alle sostanze chimiche. Le sostanze tossiche non devono finire in mano ai bambini, neppure in quantità minime. La decisione di oggi è deludente e peraltro poco ambiziosa. Esistono troppe scappatoie e manca una messa al bando chiara di tutti i metalli pesanti e delle fragranze allergizzanti. Non sono contenute prescrizioni in merito al rumore. E' deplorevole scoprirci così pusillanimi quando è in gioco la sicurezza dei più piccoli.
E' invero assurda la frenesia con cui si è cercato un consenso rinunciando a una prima lettura, solo per dare l'impressione che sotto l'albero di Natale la prossima settimana ci sarà un giocattolo sicuro, una vera assurdità e un delirio senza senso. Certo, la proposta contiene alcune migliorie ma non poteva essere altrimenti nella revisione di una direttiva ormai in vigore da 20 anni. La morale è che c'è tanto fumo e poco arrosto. Non possiamo delegare semplicemente la responsabilità all'industria. La responsabilità di una normativa chiara è nostra!
Zita Pleštinská
- (SK) Ho votato a favore della relazione Thyssen.
RAPEX sono solo cinque lettere, ma significano molto: un sistema di allarme rapido europeo che allerta i consumatori in merito a prodotti pericolosi.
Nel 2006, grazie a uno scambio rapido di informazioni tra gli Stati membri, sono pervenute 924 segnalazioni in totale, di cui 221 riguardavano giocattoli potenzialmente pericolosi. Le segnalazioni sui giocattoli riguardavano principalmente il rischio di lesioni ai bambini oppure l'allergenicità e altri problemi di salute che riguardano in particolare i bambini allergici.
Sono lieta che il Parlamento abbia approvato oggi la direttiva, poiché i fatti alla mano dimostrano senza dubbio che c'è un vuoto da colmare. Con il voto odierno su questa direttiva, che ne aggiorna una precedente di 20 anni fa, il Parlamento europeo ha compiuto un passo importante a favore della sicurezza e dell'igiene dei giocattoli, nonché a tutela della sicurezza dei bambini.
Mi compiaccio viepiù che questo iter in seno al Parlamento europeo sia seguito da un gruppo di visitatori slovacchi cui estendo un caloroso benvenuto e auguro una visita interessante in questa sede della democrazia europea.
Kathy Sinnott
(EN) Signora Presidente, sono molto lieta che il voto sulla direttiva per la sicurezza dei giocattoli abbia avuto luogo oggi per il semplice motivo che, se vogliamo dare un messaggio forte al mondo in relazione ai giocattoli e alla loro sicurezza, dovevamo per forza farlo prima di Natale. Procrastinare il voto avrebbe significato annacquarne il messaggio. Questo è il momento dell'anno in cui le persone pensano ai giocattoli.
Anche quest'anno, come pure l'anno scorso, milioni di giocattoli cinesi sono stati ritirati dal mercato. I motivi di tale decisione sono estremamente gravi e riguardano la presenza nei giocattoli di piombo, arsenico, mercurio e policlorobifenile. Non importa quale sia l'uso teorico di un giocattolo - che si tratti di un libro o altro - so per esperienza di madre che prima o poi potrebbe finire in bocca a un bambino. Non si può mai essere abbastanza attenti con i giocattoli e sono lieta che abbiamo trasmesso questo messaggio oggi.
Milan Gaľa
- (SK) Desidero ringraziare il collega Mann per questa relazione. Sappiamo quanto siano importanti le leggi che disciplinano la mobilità di studenti e lavoratori e rimuovere le barriere che impediscono loro di trasferirsi per rispondere alla legge della domanda e dell'offerta del mercato del lavoro europeo.
Il sistema europeo di crediti per la formazione professionale faciliterà il trasferimento, il riconoscimento e la cumulabilità dei titoli formativi. I crediti saranno riconosciuti anche per le qualifiche conseguite attraverso vari percorsi formativi a tutti i livelli del quadro di formazione europeo attinente all'apprendimento permanente.
Con il nostro voto positivo abbiamo aperto la via a un sostegno più ampio ad apprendimento permanente e occupazione, all'apertura, alla mobilità e all'integrazione sociale di lavoratori e persone che frequentano corsi di formazione. Promuoveremo così lo sviluppo di approcci flessibili e individuali, oltre che il riconoscimento di tipologie di apprendimento conseguite tramite una formazione sia scolastica che informale.
Miroslav Mikolášik
- (SK) Vorrei innanzi tutto ringraziare la collega Thyssen per essere riuscita a condurci verso un compromesso meritevole, grazie al quale i nostri bambini non maneggeranno giocattoli costruiti con materiali inadatti e nel contempo le nostre industrie non saranno penalizzate.
Come forse sapete, condivido appieno le limitazioni all'impiego di allergeni nei giocattoli; io stesso ho quattro bambini e non sempre ho fatto caso alla sicurezza di ogni giocattolo che passava nelle loro mani. I genitori in Europa spesso partono dal presupposto che se un giocattolo è in vendita in negozio, allora non potrà essere dannoso. Sono assai soddisfatto del lavoro che abbiamo compiuto insieme per inasprire la normativa affinché soltanto i giocattoli adatti ai bambini arrivino sugli scaffali dei negozi, poiché i piccoli sono i consumatori più vulnerabili.
Fino all'80 per cento dei giocattoli sul mercato UE sono importati e bisogna ricordare che nel 2007 sono stati ritirati dal mercato milioni di giocattoli prodotti in Cina perché non conformi alle norme europee. L'odierna circolazione delle merci implica che dobbiamo rivedere le norme per l'immissione dei prodotti sul mercato e i provvedimenti di controllo della loro conformità agli standard.
Zuzana Roithová
- (CS) (la parte iniziale dell'intervento è inudibile) assicurazione di deposito, che il Parlamento europeo ha proposto in maniera molto flessibile e in favore della quale ho votato, è chiaro. Vogliamo creare un livello minimo uniforme di protezione per i piccoli risparmiatori prevedendo una copertura assicurativa per i depositi fino a 50 000 euro. Intendiamo inoltre fissare un termine di preavviso breve per l'estinzione dei depositi, affinché i risparmiatori possano ottenere informazioni chiare, tempestive e precise sull'andamento dei loro depositi bancari anche in un momento di crisi. Questo provvedimento è indispensabile, perché i risparmiatori stanno trasferendo i propri depositi in maniera disordinata da banche solide a istituti che si sono salvati grazie alle garanzie offerte dallo Stato. Questa proposta è l'unica soluzione per ripristinare la fiducia dei piccoli risparmiatori e stabilizzare il mercato dei servizi bancari. Vorrei che la garanzia fosse estesa anche alle piccole e medie imprese, in quanto esse svolgono un ruolo insostituibile nelle società europee ma in tempo di crisi sono sempre quelle più a rischio.
Frank Vanhecke
(NL) Signora Presidente, ho votato a favore della risoluzione relativa all'OLAF perché concorde pienamente con la richiesta del Parlamento di maggiore autonomia dell'Ufficio per la lotta antifrode. Urge davvero un intervento. Dopotutto, al momento l'OLAF si distingue di poco da una qualsiasi direzione generale della Commissione ed è posto sotto la responsabilità politica del vicepresidente della Commissione. Questo sistema non è sano; pur godendo di un'autonomia operativa, l'OLAF è una sorta di ibrido e questa situazione deve cambiare. Bene.
Più in generale, ritengo che le istituzioni europee gestiscono sempre con grande noncuranza il notevole gettito fiscale a loro disposizione. L'OLAF dovrebbe disporre almeno delle risorse, degli uomini e del potere necessario a porre un freno agli aspetti più apertamente criminali di questo comportamento. Per quanto riguarda questa prodigalità nello spendere legittimamente le risorse, temo che noi stessi dovremo metterci un freno.
Frank Vanhecke
(NL) Signora Presidente, ho votato a favore di questa relazione di qualità sorprendente su Frontex, perché posso solo plaudere alla richiesta di un rafforzamento di questa istituzione. Per quanto mi concerne, la lotta contro l'immigrazione clandestina dovrebbe figurare tra le massime priorità dell'Unione e, in questo contesto, gli accordi che Frontex ha stipulato con le autorità di paesi terzi sono particolarmente importanti. E' apprezzabile che la relazione vada dritta al punto e critichi l'atteggiamento inaccettabile della Turchia, un paese candidato.
A mio modo di vedere, bisognerebbe sottolineare con forza che il rifiuto attivo delle autorità di un paese terzo - in questo caso la Turchia, niente meno di un paese candidato - di cooperare con Frontex dovrebbe avere ripercussioni immediate sui rapporti politici ed economici tra l'Unione e tale Stato, ovvero comportare la sospensione dei negoziati per l'adesione della non-europea Turchia.
Philip Claeys
(NL) Signora Presidente, ho votato a favore della relazione Sánchez, pur con talune riserve. In tutta franchezza, devo ammettere che nutrivo ridotte aspettative nei confronti della relazione, in ragione dello spirito di correttezza politica che generalmente permea i lavori della commissione per le libertà civili, la giustizia e gli affari interni. Eppure bisogna dire che la relazione è equilibrata e affronta una serie di punti dolenti, tra cui la mancanza di cooperazione, o sarebbe più consono parlare di sabotaggio, da parte di paesi terzi come la Libia e la Turchia.
Certo nel caso della Turchia è riprovevole che un paese candidato sfugga ai propri obblighi in maniera così spudorata. Frontex - e questo è il punto in cui la relazione lascia un poco a desiderare - dovrebbe diventare uno strumento efficace di lotta contro l'immigrazione clandestina, ma anche contro il crimine internazionale e il traffico di stupefacenti e di armi.
Philip Claeys
(NL) Signora Presidente, ho votato a favore della relazione Susta perché la contraffazione è senz'altro un grave problema e il testo in discussione è permeato dal buon senso.
In effetti concordo appieno con il paragrafo 30 della relazione che ci rammenta - e qui cito - che "la Turchia sarà un candidato credibile all'adesione soltanto se sarà in grado di recepire l'acquis comunitario e di garantire il rispetto pieno del DPI”. Da ciò si evince che la Turchia non è ancora un paese pronto per l'adesione all'UE e di questo prendo nota.
Syed Kamall
(EN) Signora Presidente, credo che tutti in quest'Aula - gremita, direi - concordiamo sull'importanza della proprietà intellettuale, sia per le economie della conoscenza, sia per i gravi danni che la contraffazione può arrecare ai consumatori di tutta Europa, per esempio nel caso di medicinali, alimenti o pezzi di ricambio per l'automobile contraffatti.
Avevo alcune riserve in merito alla risoluzione originale che enfatizzava eccessivamente il ruolo dei consumatori. Avremmo rischiato di trovarci in situazioni paradossali, per esempio con perquisizioni dei viaggiatori ai confini e la confisca di computer, lettori MP3 e iPod per verificare la presenza di eventuali materiali contraffatti. Grazie al cielo i Verdi hanno avanzato un'alternativa più ragionevole e sono stati più che disponibili, nello spirito di un compromesso natalizio, a ritirare l'emendamento immotivato sulle critiche delle imprese. Nel complesso, è stato un piacere votare a favore della risoluzione.
Adesso che mi sono tolto la soddisfazione di parlare a un'Aula deserta, vorrei concludere augurando a tutti coloro che sono ancora qui buon Natale e un felice Anno Nuovo.
Kathy Sinnott
(EN) Signora Presidente, vorrei augurarle anch'io un felice Natale e rassicurarla del fatto che l'Aula non è completamente deserta.
Ho votato a favore della relazione López sulla protezione degli adulti, in particolare per quanto attiene agli aspetti transfrontalieri, perché so per esperienza quanto essa sia necessaria e anche perché spero che ciò ci avvicinerà di qualche passo al giorno in cui la mobilità in Europa sarà un dato di fatto. Nel caso in esame ci occupiamo di adulti che in un modo o nell'altro sono seguiti da un tribunale. Si tratta spesso di persone molto vulnerabili, talvolta di soggetti posti sotto tutela giudiziale o di persone disabili. Ma se consideriamo la questione in una prospettiva più ampia, l'intenzione della proposta è di consentire un giorno ai fruitori dell'assistenza sociale di spostarsi mantenendo tale assistenza, ovvero di offrire loro la medesima libertà di circolazione in Europa che viene riconosciuta ai lavoratori.
Kathy Sinnott
(EN) Signora Presidente, ho votato contro la relazione Deva sulle prospettive di sviluppo per la costruzione della pace e la ricostruzione della nazione nelle situazioni postconflittuali a causa del n paragrafo in cui si dichiara che dovremmo essere in grado di prendere iniziative preventive, oltre che di risposta, che possano contemplare anche l'uso della forza militare come extrema ratio.
Questa è la dottrina Bush, forse gli altri deputati dell'Aula non vi hanno riconosciuto la dottrina che ci ha portato in Iraq, ma si tratta della medesima logica. Sarah Palin è stata criticata per non sapere quale fosse la dottrina Bush, ma mi chiedo se i deputati di questo Parlamento hanno capito di avere appena votato a favore di tale dottrina.
Luisa Morgantini
a nome del gruppo GUE/NGL. - Signor Presidente, onorevoli colleghi, non è mia consuetudine usare lo strumento democratico delle dichiarazioni di voto, lo faccio oggi per la prima volta a nome del mio gruppo.
Per motivare devo dire con dispiacere il nostro voto contro ad una relazione che io stessa, sia come redattrice del parere della commissione donne che come membro della commissione sviluppo, ho contribuito a definire. Questa è davvero una buona relazione e ringrazio molto Deva e la commissione sviluppo per il lavoro svolto.
Infatti siamo in sintonia con gran parte del testo: integrare analisi dei conflitti nella cooperazione, sostegno alla società civile e locale, lotta alla diffusione delle armi leggere, necessità di codici di condotta per soldati o polizia, riferimenti alla salute riproduttiva, trasparenza nell'uso delle risorse naturali e sostegno ai profughi. In particolare poi, sul rapporto e sulle politiche di genere c'è un grande mainstreaming. Allora perché votare contro? È semplice, perché in alcune sue parti cerca di inserire la componente militare negli aiuti allo sviluppo.
Questo Parlamento, la commissione sviluppo, la relazione Mitchell, hanno in realtà detto molto chiaramente nell'introdurre il regolamento e il DCI che i fondi allo sviluppo non devono essere usati per finanziare spese militari. Quindi non solo questo, ma il nostro Parlamento ha anche vigilato affinché nei country strategy paper non togliessero risorse allo sviluppo per le operazioni di sicurezza.
Quindi perché queste contraddizioni tra le diverse nostre risoluzioni? I fondi allo sviluppo vanno utilizzati per lo sviluppo, per l'educazione, la sanità, l'agricoltura, le comunità locali e le organizzazioni di donne. Le risorse della cooperazione sono troppo poche per sconfiggere la povertà, l'ingiustizia e anche per fare la pace, quindi nessuna commistione con il militare.
Dichiarazioni di voto scritte
Pedro Guerreiro
L'UE ha deciso di creare un "nuovo strumento di risposta rapida all'impennata dei prezzi alimentari nei paesi in via di sviluppo” e ha approvato lo stanziamento di 1 miliardo di euro su tre anni.
Inizialmente era stato proposto di finanziare questo "strumento alimentare” dal margine disponibile della linea di bilancio 2 (Agricoltura) del quadro finanziario pluriennale (QFP) e tramite la revisione del massimale per la linea di bilancio 4 (Azioni esterne) del medesimo QFP. Alla fine si è tuttavia deciso di trovare la copertura finanziaria tramite lo strumento di flessibilità, la riserva per gli aiuti d'urgenza e una ridistribuzione delle risorse della linea di bilancio 4 dello strumento di stabilità.
Per finanziare questa iniziativa occorre modificare l'Accordo interistituzionale al fine di incrementare le risorse disponibili per il 2008 nella riserva per gli aiuti d'urgenza a 479.218.000 euro (in base ai prezzi attuali).
Pur considerando positivi gli obiettivi dichiarati dell'iniziativa, ribadiamo che essa non dovrebbe essere ridotta a una semplice modifica del bilancio volta a consentire all'UE di imporre un accordo in seno all'Organizzazione mondiale del commercio o nell'ambito degli accordi di partenariato economico con i paesi ACP. Tale strumento non deve essere un espediente per occultare la riduzione degli aiuti allo sviluppo da parte dell'UE o gli ingenti importi stanziati per rilanciare la corsa agli armamenti e militarizzare i rapporti internazionali di cui l'UE è protagonista.
Pedro Guerreiro
Questa nuova proposta di rettifica del bilancio riguarda la mobilitazione del Fondo di solidarietà dell'UE - che ammonta a circa 7,6 milioni di euro - a favore di Cipro, colpita da una siccità protratta che ha causato danni per 176 milioni di euro.
La Commissione ha puntualizzato che "tenuto conto degli stanziamenti in eccesso nella linea 13 04 02 Fondo di coesione, non vi sarà bisogno di nuovi stanziamenti di pagamento per finanziare i pagamenti del Fondo di solidarietà dell'Unione europea per Cipro”. In altre parole, i finanziamenti necessari per fare fronte a questa calamità naturale proverranno dalla politica di coesione.
Gli "stanziamenti in eccesso” del Fondo di coesione sono motivati, tra l'altro, dai ritardi registrati nell'attuazione dei programmi di coesione presso i paesi interessati. Pertanto, anziché applicare un concetto di "solidarietà” che rischia di penalizzare i paesi economicamente meno sviluppati, avremmo dovuto prendere delle decisioni volte a prevenire le carenze croniche nell'esecuzione delle politiche strutturali e di coesione.
Vorremmo altresì richiamare la vostra attenzione, come abbiamo già fatto in passato, sulla necessità di accelerare l'iter per l'attivazione del Fondo di solidarietà, assicurare l'ammissibilità delle calamità regionali e riconoscere tangibilmente la specificità delle calamità naturali che interessano la regione mediterranea, come gli incendi e la siccità.
Luís Queiró
per iscritto. - (PT) L'aumento dei prezzi dei generi alimentari nei paesi in via di sviluppo è un problema estremamente grave che richiede un intervento rapido dell'UE al fine di contrastare gli effetti negativi sulle popolazioni più bisognose. In questa relazione, il Parlamento propone di finanziare con 420 milioni di euro una risposta rapida volta a mitigare gli effetti di questa situazione. Più precisamente, s'intende ricorrere allo strumento di flessibilità previsto dall'Accordo interistituzionale del 2006. In tale accordo, l'UE ha contemplato la possibilità di mobilitare uno strumento di flessibilità per finanziare spese identificate che non possono essere finanziate entro i massimali disponibili di una o più linee di bilancio del quadro finanziario pluriennale.
La proposta in esame risponde appieno ai requisiti istituzionali ed è senz'ombra di dubbio giustificata dalla politica di solidarietà dell'Unione europea. Vista la gravità della situazione, non sono state presentate obiezioni dagli organi decisionali.
Il tempo stringe e il nostro intervento rapido potrebbe fare la differenza tra una circostanza avversa e una vera e propria tragedia umana con conseguenze incommensurabili sullo sviluppo futuro di queste popolazioni.
Derek Roland Clark
per iscritto. - (EN) L'Independence Party britannico ha votato a favore della relazione perché così facendo 4,9 miliardi di euro di stanziamenti non spesi saranno restituiti ai governi nazionali.
Pedro Guerreiro
In questo esercizio finanziario (2007 e 2008), il Fondo di solidarietà è stato mobilitato nove volte (Germania: 166,9 milioni di euro; Regno Unito: 162,3 milioni di euro; Grecia: 99 milioni di euro; Francia: 17,1 milioni di euro; Ungheria: 15 milioni di euro; Slovenia: 8,2 milioni di euro e Cipro: 7,6 milioni di euro) per un totale di circa 477,3 milioni di euro rispetto al massimale previsto di 1 miliardo di euro l'anno.
Senza mettere in questione l'evidente necessità di questo strumento e senza entrare nel merito dell'iter necessario per attivare e rendere disponibile (in ritardo) questo aiuto, rimane la questione dell'origine delle dotazioni mobilitate, in particolare con riferimento all'attuale progetto di bilancio rettificativo.
In altre parole, pur riconoscendo la necessità urgente di fornire assistenza nel caso di calamità naturali, si mette in questione l'origine di questi stanziamenti, tanto più se essi sono "sottratti” alla politica di coesione anziché, per esempio, provenire dagli stanziamenti per la progressiva militarizzazione dell'UE. Riteniamo che la politica di coesione vada salvaguardata.
Da ultimo vorremmo sottolineare, come abbiamo già fatto in altre occasioni, la necessità di apportare modifiche alle procedure del fondo di solidarietà al fine di renderne più rapida la mobilitazione, pur garantendo che sia mantenuta l'ammissibilità delle calamità regionali e riconoscendo concretamente la specificità delle calamità della regione mediterranea, quali gli incendi e la siccità.
Hélène Goudin e Nils Lundgren
per iscritto. - (SV) Junilistan ritiene possibile dimezzare i contributi degli Stati membri all'UE. La maggior parte del denaro dell'UE è speso per attività superflue o socioeconomicamente dannose, quali la politica agricola, il fondo di coesione, la politica per la pesca e le sovvenzioni a diversi tipi di campagne informative. A questo si aggiungono i costi per il pendolarismo tra Strasburgo e Bruxelles del Parlamento europeo e di altre istituzioni come il Comitato economico e sociale europeo e il Comitato delle regioni che dovrebbero essere smantellate immediatamente.
La politica agricola è particolarmente odiosa, poiché trasferisce il denaro dai consumatori a beneficiari spesso molto facoltosi. Gli agricoltori dei paesi poveri del mondo sono penalizzati dalla concorrenza degli agricoltori sovvenzionati dell'UE.
Le varie istituzioni UE si prodigano in costanti esortazioni agli Stati membri a ridurre la loro spesa pubblica. Nel contempo, quest'Aula richiede continui aumenti delle spese comunitarie. L'intera situazione è assurda. Gli Stati membri spendono il denaro pubblico per le scuole, la sanità, la ricerca, le infrastrutture e l'assistenza alle fasce più deboli della società, mentre l'UE convoglia il denaro in una politica agricola folle, in fondi strutturali gestiti male e nel finanziamento di istituzioni comunitarie che avrebbero dovuto essere chiuse già da tempo.
Il nostro "no” al progetto di bilancio deve essere interpretato come una richiesta di taglio drastico alla spesa nel bilancio UE e di dimezzamento dei contributi dovuti dagli Stati membri all'Unione europea.
Kader Arif
per iscritto. - (FR) Nel bilancio comunitario per l'esercizio 2009, noi socialisti abbiamo proposto e ottenuto l'approvazione di un'azione preparatoria per lo sviluppo del turismo sociale in Europa.
Tale progetto scaturisce dalla constatazione che numerosi cittadini rinunciano a viaggiare per motivi di ordine economico e che occorre ovviare a questa disuguaglianza assicurando opportunità di villeggiatura per tutti. Tale proposta ha peraltro una sua utilità anche per la gestione del territorio e lo sviluppo locale.
Con questa combinazione di integrazione sociale e sviluppo locale, favorendo l'accesso al turismo a fasce della popolazione che altrimenti difficilmente ne usufruirebbero, il turismo sociale incrementa la redditività del comparto turistico. Esso consente infatti di sviluppare un turismo destagionalizzato proprio nelle regioni in cui ha una forte connotazione stagionale e incoraggia in questo settore economico la creazione di posti di lavoro più duraturi. Inoltre il turismo sociale e associativo dimostra l'esistenza di un settore intermedio tra il mercato del tempo libero e l'economia non solvibile e che la pertinenza economica non è necessariamente incompatibile con una maggiore facilità di accesso per le masse. Grazie ai contatti che avvengono tra i cittadini europei tramite il turismo, esso contribuisce parimenti a rafforzare la cittadinanza europea.
Ho voluto ribadire l'importanza di questo settore, sia in termini di ricadute economiche che di risorse pubbliche.
Pedro Guerreiro
A dispetto delle analisi economiche che prevedono una recessione in diversi Stati membri - alcuni dei quali sono già tecnicamente in fase recessiva - il Consiglio e il Parlamento stanno approvando per il 2009 un bilancio comunitario che è inferiore, in termini di pagamenti, rispetto a quello del 2008.
Inoltre, se raffrontiamo l'attuale progetto di bilancio per il 2009 con il massimale per l'anno indicato nel quadro finanziario pluriennale 2007-2013, che al tempo avevamo già dichiarato insufficiente a garantire la coesione economica e sociale in un'UE allargata a 27, la situazione è anche peggiore, poiché questo bilancio rivela un buco di circa 8 miliardi di euro!
Il bilancio UE per il 2009 è il più basso in termini percentuali (0,89 per cento) rispetto al PIL comunitario da quando il Portogallo ha aderito alla Comunità Economica Europea.
Pur esprimendo "preoccupazione”, in particolare per "i possibili effetti di una recessione sui cittadini europei” e i livelli "estremamente bassi” dei pagamenti e di utilizzo degli stanziamenti entro la politica di coesione, il Parlamento approva il bilancio. Alla base di tutto questo c'è il tentativo di migliorare la propria immagine tra i lavoratori e i cittadini nei diversi paesi, senza interrogarsi sui principi di base, nella speranza che vada tutto secondo i piani alle prossime elezioni del Parlamento europeo in giugno.
Il nostro voto negativo è motivato da queste ragioni.
Czesław Adam Siekierski
per iscritto. - (PL) Il bilancio 2009 non soddisfa appieno le nostre aspettative e risponde solo in parte alle nuove criticità e alle preoccupazioni odierne. Esso rispecchia presupposti e obiettivi adottati in precedenza e, da questo punto di vista, è rispondente ai criteri stabiliti. Ho votato a favore della sua adozione, seppure ritenga opportuno richiamare la vostra attenzione sui seguenti aspetti:
1. è positivo incrementare gli stanziamenti a sostegno dello sviluppo agricolo nei paesi in via di sviluppo colpiti da carestie. Tuttavia dovremmo ricordare che, nell'Unione europea, quasi 80 milioni di persone vivono sulla soglia della povertà e 43 milioni di cittadini rischiano la denutrizione.
2. Nonostante la PAC, i redditi delle famiglie di agricoltori sono notevolmente inferiori a quelli delle famiglie che si sostentano tramite altre attività.
3. In Europa stiamo assistendo al crollo sistematico e alla bancarotta delle aziende agricole. Le scorte di prodotti agricoli sono in calo e ciò mette a rischio la sicurezza alimentare. Nel contempo c'è chi auspica tagli alla politica agricola comune.
4. La politica di coesione e quella strutturale menzionano entrambe la coesione territoriale, economica e sociale, oltre all'importanza di uniformare lo sviluppo e di creare pari opportunità di crescita, specialmente nelle regioni meno abbienti. In realtà, le aree meno favorevoli alla coltivazione e dove lo stato delle infrastrutture lascia molto a desiderare sono affette da un progressivo spopolamento.
Andrzej Jan Szejna
Mi sono espresso a favore dell'approvazione della relazione predisposta da Jutta Haug e Janusz Lewandowski in relazione al progetto di bilancio UE per il 2009. E' importante che, alla fine, i deputati siano riusciti a raggiungere un compromesso con il Consiglio per il finanziamento degli obiettivi prioritari del Parlamento, quali per esempio le misure atte a ridurre gli effetti della recessione economica, le iniziative di promozione della crescita, della coesione e dell'occupazione.
Il Parlamento aumenterà le risorse finanziarie destinate alla politica sociale e del lavoro, ovvero alle attività di incremento della competitività e della coesione. Tale spesa rientrerà nell'ambito del Fondo sociale, che riceverà ulteriori 135 milioni di euro, nonché del Fondo di sviluppo regionale e del Fondo di coesione. Nell'attuale, difficile situazione finanziaria dell'intera Unione europea, le iniziative di promozione della crescita e dell'occupazione rivestono un'importanza cruciale che deve riflettersi nel bilancio 2009. E' encomiabile che il bilancio intenda incrementare i fondi destinati agli aiuti alle PMI.
I paesi in via di sviluppo potranno contare su aiuti finanziari volti a mitigare gli effetti di impennate improvvise nei prezzi dei generi alimentari e sarà stanziato anche un ulteriore miliardo di euro per prevenire la fame in questi paesi. Plaudo anche all'intenzione del Parlamento di limitare le proprie spese amministrative e di contenerle entro il 20 per cento della sua spesa complessiva.
Alessandro Battilocchio
per iscritto. - Signor Presidente, onorevoli colleghi, sostengo con un voto favorevole la relazione della collega Monica Maria Iacob-Ridzi sul piano d'azione europeo per la mobilità del lavoro (2007-2010).
La volontà di creare un vero e proprio mercato europeo del lavoro necessita di un adattamento delle legislazioni nazionali e di smaltire gli iter burocratici che a volte scoraggiano la mobilità dei lavoratori. L'Unione ha un ruolo fondamentale nell'armonizzare i sistemi nazionali di sicurezza sociale e la trasferibilità dei diritti di pensione complementare. È importante inoltre che si facciano sforzi per aumentare il grado d'informazione dei cittadini, non solo attraverso il miglioramento del portale EURES, ma anche tramite campagne informative europee.
Ilda Figueiredo
per iscritto. - (PT) Sebbene la relazione contenga diverse raccomandazioni che sosteniamo, sono state tutte formulate entro un contesto liberale. Mi riferisco per esempio alla volontà di includere il concetto della mobilità dei lavoratori in particolare nelle politiche volte al completamento del mercato interno, tralasciando il fatto che tali politiche non proteggono adeguatamente i lavoratori.
Oltre a queste raccomandazioni di per sé accettabili, la relazione enfatizza la dimensione economica e sociale della strategia di Lisbona, dimenticando che tale strategia propugna le politiche più neoliberali dell'Unione europea che hanno già dato origine a proposte come la rinomata direttiva Bolkenstein, la cosiddetta flessicurezza e la proposta del Consiglio per la direttiva sull'orario di lavoro.
La relazione è l'ennesimo documento di propaganda che si sforza di occultare la politiche antisociali dell'Unione europea e di ignorare le conseguenze del neoliberismo, anche se questo è ormai un segreto di Pulcinella. E' sufficiente un'occhiata alle contraddizioni dei paragrafi 15 e 16 per comprendere perché ci siamo astenuti.
Bruno Gollnisch
per iscritto. - (FR) Il problema, per il relatore, non sembra essere tanto la rimozione degli ostacoli giuridici o amministrativi alla mobilità professionale dei lavoratori europei sul territorio dell'Unione europea, quanto piuttosto il fatto che tale mobilità non è generalizzata e soprattutto obbligatoria. La relazione auspica una grande mescolanza delle popolazioni che acceleri l'estinzione delle nazioni europee. L'idea soggiacente è di favorire la concorrenza tra i salari, il dumping sociale e un'uniformazione verso il basso degli stipendi. Con la creazione di una tessera di previdenza sociale europea dai contorni alquanto sfuocati lavoriamo per mettere a repentaglio e smantellare i sistemi di protezione sociale nazionali.
Domandate cosa ne pensano della vostra mobilità gli operai francesi a cui è stato proposto, qualche anno fa, di mantenere il proprio posto di lavoro a condizione di lasciare tutto in patria per andare a lavorare in Romania a qualche centinaio di euro al mese!
Certo, la risoluzione dei problemi fiscali o di riconoscimento dei diritti sociali ai lavoratori transfrontalieri che hanno lavorato presso più Stati membri rientra tra le competenze dell'Unione europea. Ma non al prezzo di una precarizzazione sociale.
Zita Pleštinská
per iscritto. - (SK) La mobilità dei lavoratori è uno degli elementi fondamentali per conseguire gli obiettivi della strategia di Lisbona, eppure ancora oggi è ostacolata da barriere amministrative, giuridiche, fiscali o previdenziali. Le barriere amministrative sono costituite essenzialmente da discrepanze nelle normative nazionali in materia di occupazione di cui sono generalmente responsabili gli Stati membri.
Vorrei innanzitutto manifestare la mia delusione nel rilevare che alcuni Stati dell'UE-15 ancora applicano restrizioni all'impiego di lavoratori provenienti dai nuovi Stati membri, sebbene i timori palesati dai cittadini e dai governi di tali paesi non siano stati corroborati da studi economici o da dati statistici.
Alcune persone si rivolgono a me per i numerosi problemi che hanno incontrato quando hanno tentato di esercitare il loro diritto alla mobilità al di fuori del proprio paese di origine. Queste persone si sono viste rifiutare il riconoscimento dell'esperienza di mobilità maturata nell'ambito del loro percorso professionale e si confrontano con problematiche relative alla sicurezza sociale e alla pensione, specialmente nel caso di piccole e medie imprese. Anche le barriere linguistiche rappresentano un ostacolo importante alla mobilità dei lavoratori e delle loro famiglie; gli Stati membri devono incentivare l'insegnamento delle lingue straniere, in particolar modo per gli adulti.
Sono fermamente persuasa che campagne mediatiche efficaci possano diffondere tra i cittadini le informazioni pertinenti sulla rete EURES che offre uno sportello unico per la mobilità dei lavoratori in Europa, sulla rete TRESS o sullo strumento SOLVIT che aiuta a risolvere i problemi del mercato interno o inerenti alla mobilità dei lavoratori.
Nicolae Vlad Popa
Ho votato a favore della relazione perché la mobilità dei lavoratori è un diritto fondamentale garantito ai cittadini UE in forza del trattato. La mobilità rappresenta un pilastro fondamentale del modello sociale europeo che consente il raggiungimento degli obiettivi della strategia di Lisbona.
Mi complimento per questa relazione che, oltre a richiamare l'attenzione sugli ostacoli che impediscono la libertà di movimento nel mercato del lavoro ai lavoratori provenienti dai nuovi Stati membri, contiene anche spunti importanti a integrazione del Piano d'azione europeo per la mobilità del lavoro presentato dalla Commissione europea, come per esempio il sostegno a programmi che collegano l'istruzione con il mercato del lavoro, il mutuo riconoscimento delle qualifiche e l'ampliamento della rete EURES.
Luca Romagnoli
per iscritto. - Signor Presidente, onorevoli colleghi,
Esprimo il mio voto favorevole alla relazione della collega Iacob-Ridzi sul piano europeo per la mobilità del lavoro per il periodo 2007-2010. Concordo nel sostenere che la mobilità professionale tra gli Stati membri dell'Unione abbia contribuito in maniera positiva all'integrazione europea: esempi di ciò sono la facilità, rispetto al passato, con cui é possibile effettuare un soggiorno professionale in un altro paese e le possibilità, oggi molto più numerose, di avere accesso alle offerte di lavoro in paesi diversi da quello di provenienza. A questo punto, il tentativo di migliorare la situazione sul piano legislativo, amministrativo, fiscale e sociale, attraverso la rimozione degli ostacoli burocratici in questi campi, é da intraprendere. Tuttavia, é sempre bene tener presente che l'azione dell'Unione Europea non può prescindere dalla considerazione delle differenze socio-economiche tra i Paesi membri.
Andrzej Jan Szejna
In occasione della sessione di dicembre, il Parlamento aveva espresso il proprio voto sul piano d'azione europeo per le competenze e la mobilità, presentato dalla commissione per l'occupazione e gli affari sociali.
La mobilità dei lavoratori dipende dal principio fondamentale della libertà di circolazione delle persone all'interno del mercato unico, in conformità al Trattato che istituisce la Comunità europea. Insieme alla sicurezza, questa è una delle quattro libertà fondamentali cui hanno diritto i cittadini dell'Unione europea.
La legislazione comunitaria dovrebbe fare in modo che i lavoratori emigrati non perdano l'assistenza sociale cui hanno diritto. In questo ambito sono stati compiuti progressi importanti, ma dobbiamo continuare a lavorare per abbattere gli ostacoli amministrativi e giuridici alla mobilità che derivano da determinate normative in vigore presso alcuni Stati membri.
In effetti, la mobilità dei lavoratori può essere funzionale a potenziare la portata economica e sociale della strategia di Lisbona. La mobilità può essere un passo decisivo per dare nuovo slancio al programma sociale europeo e affrontare tutta una serie di sfide, quali il cambiamento demografico, la globalizzazione o il progresso tecnologico.
Sono favorevole al piano d'azione europeo per le competenze e la mobilità, oltre che alla proposta di creare un portale informativo e consultivo incentrato su tutti gli aspetti della mobilità dei lavoratori, come ad esempio le offerte di lavoro, la copertura sanitaria e l'assicurazione sociale, nonché il riconoscimento reciproco di qualifiche e formazioni.
John Attard-Montalto
per iscritto. - (EN) Nonostante le numerose strategie che abbiamo approntato a favore dell'apprendimento permanente, la loro attuazione lascia molto a desiderare. Il grado d'impegno e di spesa varia da paese a paese. Purtroppo le tendenze positive nella spesa pubblica per l'istruzione cominciano in genere a dare segni di cedimento. Occorre destinare all'apprendimento degli adulti una quota adeguata del bilancio. Tale necessità è motivata dal fatto che la partecipazione degli adulti all'apprendimento permanente non sembra avere preso piede. Saranno necessari maggiori sforzi per incrementare le competenze della popolazione adulta e conseguire flessibilità e sicurezza in tutto il mercato del lavoro.
I datori di lavoro vanno incoraggiati a organizzare una proposta formativa per i loro dipendenti. Si consigliano incentivi volti a consentire ai lavoratori meno qualificati di partecipare a programmi di apprendimento. Occorre destinare particolare attenzione ai disoccupati di lungo periodo, in particolare quelli provenienti da contesti sociali svantaggiati, persone con esigenze particolari, giovani provenienti da istituti, ex-detenuti e tossicodipendenti che hanno effettuato un percorso di recupero.
Charlotte Cederschiöld, Christofer Fjellner, Gunnar Hökmark e Anna Ibrisagic
per iscritto. - (SV) Dichiarazione di voto in relazione alla relazione sull'attuazione del programma di lavoro "Istruzione e formazione 2010”.
Oggi abbiamo votato a favore della relazione d'iniziativa, presentata dall'onorevole Novak [Partito popolare europeo (Democratici-cristiani) e dei Democratici europei] sull'attuazione del programma di lavoro "Istruzione e formazione 2010”. La relazione formula numerose raccomandazioni costruttive, in particolare per quanto concerne le misure volte a facilitare la mobilità tra gli Stati membri per studenti e lavoratori.
D'altra parte, non crediamo che le raccomandazioni destinate a influire sui programmi di studio degli Stati membri siano compatibili con il principio di sussidiarietà. Il numero di ore da dedicare settimanalmente all'educazione fisica nelle scuole e l'eventuale introduzione di un'alfabetizzazione ai linguaggi dei mezzi di comunicazione nei programmi nazionali sono questioni su cui possono decidere al meglio gli Stati membri medesimi.
Avril Doyle
per iscritto. - (EN) La comunicazione della Commissione pubblicata nel 2007 con il titolo "L'apprendimento permanente per la conoscenza, la creatività e l'innovazione” fa parte di una serie di relazioni interlocutorie biennali sull'attuazione del programma di lavoro "Istruzione e formazione 2010”. Pertanto la relazione illustra in sintesi i progressi compiuti ed esamina la situazione del coordinamento in materia di istruzione e formazione entro la prospettiva degli obiettivi della strategia di Lisbona che mirano a rendere quella europea l'economia più competitiva al mondo e a raggiungere la piena occupazione entro il 2010.
La relazione ci fornisce informazioni preziose sull'andamento di svariate iniziative educative, riuscite o meno, oltre a documentare gli strumenti e le misure idonei a conseguire ulteriori miglioramenti. La relazione fornisce obiettivi chiari, indicatori statistici e benchmark significativi.
Sostengo appieno gli sforzi compiuti per portarci verso la destinazione convenuta nella strategia di Lisbona ed esprimo un voto favorevole perché questa relazione lo merita.
Ilda Figueiredo
per iscritto. - (PT) Questa relazione contiene talune raccomandazioni importanti e condivisibili che chiedono maggiore sostegno economico e sociale, provvedimenti complementari e l'integrazione di immigrati e minoranze, sottolineano l'importanza dello sport nell'istruzione e nella formazione e la necessità di un maggiore sostegno all'istruzione pre-scolare e a insegnanti e studenti, in particolare nella scuola primaria e secondaria. La relazione sostiene anche le proposte della Commissione europea, compresa la strategia di Lisbona, e insiste sull'applicazione del processo di Bologna senza preoccuparsi minimamente delle sue implicazioni pratiche.
Basata sulla comunicazione della Commissione intitolata "L'apprendimento permanente per la conoscenza, la creatività e l'innovazione”, la relazione fa propria la sintesi dei progressi compiuti e di quelli ancora insufficienti, oltre a proporre misure per cambiare la situazione in sintonia con obiettivi che non sempre sono perfettamente condivisibili, poiché abbracciano il neoliberismo e insistono affinché sia applicato anche all'istruzione. Si tratta pertanto di una dichiarazione politica che può essere considerata un programma per gli anni a venire, con il quale ci troviamo sostanzialmente in disaccordo.
Per esempio, non possiamo accettare che l'ammodernamento dell'istruzione superiore passi attraverso un'integrazione delle riforme del processo di Bologna e una maggiore sponsorizzazione dal settore privato, in particolare nel caso di paesi come il Portogallo, dove l'istruzione pubblica superiore viene lasciata a languire.
Hélène Goudin e Nils Lundgren
per iscritto. - (SV) Per l'ennesima volta la commissione per la cultura e l'istruzione del Parlamento europeo cerca di intromettersi in questioni relative all'istruzione. I rappresentanti di Junilistan desiderano puntualizzare a quest'Aula, ancora una volta, che la politica in materia d'istruzione è di esclusiva competenza degli Stati membri.
Come accade ogni volta nelle sue relazioni, la commissione per la cultura e l'istruzione si lancia in voli pindarici. Questa relazione solleva di nuovo la questione dell'educazione fisica nelle scuole. Il paragrafo 4 della proposta di relazione avanza l'ipotesi riservare all'educazione fisica almeno tre unità didattiche alla settimana.
Questo ennesimo caso esemplifica la volontà di politici e funzionari UE di interferire in qualsiasi ambito e a qualsiasi livello allo scopo di conseguire un accentramento del potere politico. La sussidiarietà è proclamata con belle parole ma non viene mai rispettata nelle politiche attuate.
Riteniamo che questo ambito non riguardi affatto il Parlamento europeo e abbiamo pertanto votato contro la relazione.
Zita Pleštinská
per iscritto. - (SK) L'istruzione e la formazione professionale sono la forza trainante della strategia di Lisbona. Le strategie e gli strumenti generali per l'apprendimento permanente - in particolare il quadro di formazione europeo, l'Europass, il quadro delle competenze chiave e le raccomandazioni per la mobilità e la qualità dell'istruzione superiore - dovrebbero essere applicati con maggiore coerenza presso gli Stati membri. I governi degli Stati membri dovrebbero assumere un ruolo molto dinamico nelle politiche relative all'istruzione. Anche se il sistema di riferimento europeo per le qualifiche non sarà armonizzato fino al 2010, l'attuazione accelerata del quadro europeo delle qualifiche presso tutti gli Stati membri ridurrebbe le difficoltà che ancora incontrano i cittadini UE.
La mobilità di studenti e insegnanti è un aspetto fondamentale della mobilità professionale. Occorre dedicare maggiore attenzione a iniziative quali il processo di Bologna e i programmi Comenius, Erasmus e Leonardo da Vinci che consentono di studiare all'estero e sottolineano l'importanza della mobilità professionale per l'avvenire.
Il successo dell'istruzione dipende innanzi tutto dalla qualità dei programmi di studio e dell'insegnamento. Dobbiamo introdurre rapidamente nei programmi di studio l'insegnamento della cittadinanza europea, l'insegnamento delle lingue straniere o argomenti quali la protezione dei consumatori, la tutela dell'ambiente e la lotta contro il cambiamento climatico. E' importante che gli Stati membri destinino risorse adeguate per la previdenza sociale degli insegnanti e per l'assunzione e la formazione in particolare degli insegnanti di lingue straniere.
Sono persuasa che se non riusciremo a rendere l'insegnamento una professione più ambita, ci troveremo a fare i conti con la carenza di specialisti qualificati nel comparto dell'istruzione.
Luca Romagnoli
per iscritto. - Signor Presidente, Onorevoli colleghi, dichiaro il mio voto favorevole alla relazione della collega Novak riguardante l'apprendimento permanente per la conoscenza, la creatività e l'innovazione e, in particolare, l'attuazione del programma di lavoro "Istruzione e formazione 2010".
Mi associo alla collega nel sottolineare che azioni nel campo dell'istruzione e della formazione meritino il sostegno sistematico da parte dell'Unione Europea attraverso politiche mirate, soprattutto nei settori critici che, secondo la comunicazione presentata nel 2007 dalla Commissione Europea, richiedono un miglioramento, come l'apprendimento permanente, anche in età adulta, la spesa pubblica e gli investimenti privati a favore dell'istruzione, l'abbandono scolastico, ancora troppo elevato a livello di scuola secondaria, e la pertinenza dell'istruzione rispetto al mercato del lavoro. Desidero inoltre rimarcare il fatto che il settore della formazione e dell'istruzione, la ricerca, l'innovazione e il trasferimento di conoscenze sono fondamentali per l'Europa di oggi e di domani e devono quindi essere oggetto dello sforzo congiunto a livello nazionale e comunitario.
Tomáš Zatloukal
per iscritto. - (CS) Signora Presidente, ho votato a favore della relazione della collega Novak riguardante il programma "Istruzione e formazione 2010”. Concordo con la necessità di sostenere l'efficacia e l'efficienza dei diversi sistemi d'istruzione. Un modo efficace per garantire a tutti i bambini, compresi quelli appartenenti a fasce svantaggiate, un'opportunità di apprendimento permanente consiste nel migliorare la qualità dell'istruzione prescolastica. La successiva istruzione elementare e secondaria deve aiutare gli studenti a sviluppare un pensiero creativo, nonché i talenti e le inclinazioni individuali che li aiuteranno a trovare un lavoro.
Nell'ambito della formazione specializzata dobbiamo migliorare la qualità e l'attrattività delle materie offerte, ma soprattutto collegare la formazione all'economia, ovvero il processo formativo deve rispondere alle esigenze del mercato del lavoro sia comunitario che, soprattutto, di una data regione. Per quanto concerne l'istruzione universitaria, concordo sulla necessità di aggiornare i programmi di studio affinché siano rispondenti alle esigenze socio-economiche attuali e future. I programmi d'istruzione per gli adulti dovrebbero essere articolati in particolare a sostegno delle persone più svantaggiate sul mercato del lavoro e dei datori di lavoro che offrono un apprendimento permanente ai loro dipendenti.
Ole Christensen, Dan Jørgensen, Poul Nyrup Rasmussen, Christel Schaldemose e Britta Thomsen
per iscritto. - (DA) In linea di principio, la delegazione danese del gruppo del Partito del socialismo europeo in questo Parlamento è favorevole alla certificazione di taluni tipi di giocattoli da parte di enti terzi al fine di garantire la conformità di tali prodotti ai requisiti posti dall'UE. Tuttavia, questo emendamento non è formulato correttamente per conseguire questo obiettivo e, peraltro, la sua approvazione farebbe naufragare l'intero compromesso. Vogliamo migliorare i requisiti di sicurezza per i giocattoli e crediamo che questo obiettivo potrà essere meglio raggiunto, in generale, avallando il compromesso raggiunto dal Parlamento europeo e dal Consiglio.
Carlos Coelho
La direttiva sulla sicurezza dei giocattoli rappresenta un passo importantissimo per garantire la sicurezza dei nostri bambini. Era assolutamente indispensabile estenderne la portata e chiarire la legislazione relativa a un tema tanto importante. L'accresciuta responsabilità di produttori e importatori, nonché l'ampliamento avveduto del numero di sostanze proibite dimostrano il rigore con cui questo argomento è stato affrontato.
Mi devo complimentare con la relatrice per essere riuscita a stabilire regole volte a garantire la sicurezza dei bambini, tenendo conto anche della necessità di garantire la sopravvivenza e la stabilità delle piccole e medie imprese del comparto.
Comunque dobbiamo occuparci anche dell'accresciuta responsabilità che questo atto normativo impone agli Stati membri. Per conseguire l'obiettivo della direttiva, ovvero la sicurezza dei bambini, gli Stati membri devono ottemperare a maggiori obblighi in termini di vigilanza sul mercato.
Alla luce della situazione in Portogallo, dove l'ente pubblico di vigilanza competente per questi controlli si è dimostrato più volte inadempiente, invito gli Stati membri ad assumersi appieno le loro responsabilità. Il progresso che la direttiva ha ottenuto in termini di sicurezza deve essere affiancato da un'azione di sorveglianza efficace e responsabile a cura degli Stati membri.
Gérard Deprez
per iscritto. - (FR) I giocattoli devono essere articoli ancora più sicuri degli altri prodotti, perché i bambini sono consumatori estremamente vulnerabili. Eppure nell'Unione europea circolano ancora giocattoli pericolosi. Possiamo dunque compiacerci del compromesso raggiunto tra Parlamento e Consiglio in merito a una normativa che impone all'industria il rispetto di una serie di criteri di sicurezza quale requisito per l'immissione di un giocattolo sul mercato europeo.
Come molti altri compromessi, il testo contiene sia progressi che disposizioni deludenti.
Per quanto attiene ai progressi, citerei in particolare il dovere, da parte dei fabbricanti, di garantire che i loro giocattoli non abbiano effetti nocivi sulla salute o sulla sicurezza, l'inasprimento dei valori limite per i metalli tossici, una migliore prevenzione dei rischi di soffocamento e strangolamento a causa delle piccole parti smontabili, o ancora istruzioni più chiare sulle confezioni o sui giocattoli stessi.
Questi progressi spiegano il mio voto positivo sul testo definitivo.
Con riferimento alle delusioni, devo menzionare sia la moltiplicazione delle deroghe al divieto di utilizzo di sostanze cancerogene, mutagene e tossiche, sia la rinuncia alla certificazione da parte di enti terzi indipendenti. Avevo votato a favore di questa clausola che non è stata però mantenuta, con mio sommo dispiacere.
Avril Doyle
per iscritto. - (EN) La proposta della collega Thyssen di una direttiva del Parlamento europeo e del Consiglio sulla sicurezza dei giocattoli intende migliorare le misure di sicurezza e limitare l'impiego di metalli pesanti pericolosi nella preparazione e costruzione dei giocattoli per bambini. La proposta intende rivedere radicalmente l'attuale direttiva in vigore (88/378/CEE) al fine di adeguarla alle specifiche precisate nella decisione relativa a un quadro comune per la commercializzazione dei prodotti.
Si va così ad ampliare il campo d'applicazione della direttiva fino a coprire i prodotti "a doppio utilizzo” che sono anche giocattoli; aumenta pertanto il numero di prodotti interessati da questa direttiva. In termini concreti, il testo affronta il problema dei rischi di soffocamento e dell'utilizzo di prodotti chimici in fase di produzione allo scopo di rimuovere o ridurre i pericoli per i bambini. A questa proposta natalizia posso garantire il mio consenso incondizionato.
Edite Estrela
per iscritto. - (PT) Ho votato a favore della relazione Thyssen sulla sicurezza dei giocattoli, poiché credo che il testo di compromesso approvato porterà all'applicazione di requisiti di sicurezza più rigorosi per i giocattoli, laddove incrementa la responsabilità di produttori e importatori in relazione alla commercializzazione dei loro prodotti, oltre a rafforzare gli obblighi di vigilanza sul mercato da parte degli Stati membri.
Mi rammarico tuttavia che non sia stato accolto l'emendamento 142, con cui si richiedeva la valutazione della conformità dei giocattoli da parte di laboratori esterni prima della loro commercializzazione.
Ilda Figueiredo
per iscritto. - (PT) Il presente progetto di direttiva intende introdurre requisiti più severi per la sicurezza dei giocattoli, in particolare per quanto concerne l'utilizzo di sostanze chimiche e le caratteristiche elettriche. Questo nuovo atto legislativo precisa anche le caratteristiche fisiche e meccaniche al fine di ridurre i rischi di soffocamento. Esso stabilisce peraltro alcuni provvedimenti di rafforzamento del controllo sul mercato a cura degli Stati membri e impone nuovi obblighi ai produttori.
Lo scopo è dunque di migliorare la direttiva in vigore, tenendo conto di nuovi rischi alla sicurezza che potrebbero insorgere a seguito dell'ideazione e della commercializzazione di nuove tipologie di giocattolo, eventualmente costruite con materiali nuovi.
La discussione sulla direttiva e la successiva votazione hanno lasciato emergere diverse questioni. Le garanzie della Commissione europea non erano disponibili in fase di votazione e ciò ha causato un piccolo malinteso.
Inoltre alcuni esperti sono preoccupati perché i requisiti proposti non eliminano del tutto l'utilizzo di sostanze cancerogene, mutagene o tossiche per la riproduzione (note come sostanze CMR), pur imponendo maggiori restrizioni.
Esistono peraltro punti di vista discordanti sui valori massimi per i metalli, in particolare per arsenico, cadmio, cromo, piombo, mercurio e stagno, che sono altamente tossici e non andrebbero usati nelle parti dei giocattoli che possono entrare in contatto con i bambini.
Il nostro gruppo ha pertanto deciso di votare contro questa proposta.
Robert Goebbels
per iscritto. - (FR) Mi sono astenuto dal voto sulla direttiva relativa alla sicurezza dei giocattoli in segno di protesta contro la prassi antidemocratica di presentare al Parlamento europeo delle relazioni negoziate nell'ambito di dialoghi a tre informali che impediscono alla nostra istituzione di svolgere il proprio lavoro secondo la procedura normale.
La direttiva proposta è inoltre una dimostrazione dell'assurdità del principio precauzionale. Il legislatore moltiplica le norme e i divieti per mettere a posto la propria coscienza, mentre i bambini se ne infischiano di tutte queste regole nei loro giochi.
Małgorzata Handzlik
per iscritto. - (PL) Il Parlamento ha approvato la direttiva sulla sicurezza dei giocattoli. Si tratta di una direttiva eccellente, in grado di migliorare la sicurezza dei giocattoli che finiscono nelle mani dei nostri figli. Questo strumento acquisisce un significato importante anche a seguito delle notizie sempre più frequenti di incidenti causati dai giocattoli, per esempio di bambini che ingoiano parti di giocattoli assemblati male. E' opportuno ricordare che la stragrande maggioranza di tutti i giocattoli presenti sul mercato europeo - all'incirca l'80 per cento - sono importati dalla Cina.
La direttiva è riuscita a conciliare gli interessi delle associazioni dei consumatori con quelli dei rappresentanti dell'industria dei giocattoli. Posso soltanto felicitarmi di un accordo raggiunto su quello che per me, come genitore, è un atto normativo fondamentale. Entrambe le parti trarranno vantaggio da questa direttiva. I consumatori potranno essere certi che i giocattoli disponibili sul mercato europeo, e che finiscono nelle mani dei loro bambini, ottemperano a elevati standard di sicurezza, non contengono sostanze tossiche e riportano avvertenze chiare che possono essere lette al momento dell'acquisto del giocattolo.
L'industria del giocattolo ha ribadito spesso che non è possibile scendere a compromessi quando è in gioco la sicurezza dei bambini e i produttori sono pertanto favorevoli ai cambiamenti proposti. Tuttavia, queste modifiche non dovrebbero mettere a rischio la posizione dei produttori di giocattoli nel mercato europeo. L'accordo negoziato concederà all'industria un periodo transitorio di due anni per il suo adeguamento alla nuova legislazione in materia di sostanze chimiche.
Eija-Riitta Korhola
per iscritto. - (FI) Signora Presidente, ho votato a favore della direttiva sulla sicurezza dei giocattoli perché rappresenta un miglioramento sostanziale. Da una lato, garantisce una maggiore sicurezza dei giocattoli e salvaguarda la salute dei bambini, laddove vieta il ricorso a sostanze allergizzanti o CMR, metalli pesanti e componenti che possono provocare soffocamento.
Dall'altro lato, la direttiva si è dimostrata un compromesso riuscito ed equilibrato, in cui si tiene conto del fatto che la maggioranza dei 2000 produttori europei di giocattoli sono prudenti e riconoscono la propria responsabilità. Non è giusto che siano penalizzati a causa del comportamento irresponsabile di una manciata d'importatori.
In questo particolare momento dell'anno, la direttiva sulla sicurezza dei giocattoli è una dimostrazione della volontà e della capacità dell'Unione di proteggere più efficacemente i consumatori e i loro bambini, che sono i soggetti più vulnerabili. Tuttavia è forse opportuno ricordare che nessuna legge può assolvere i genitori dalle loro responsabilità. La direttiva sulla sicurezza dei giocattoli da sola non può garantire che nel pacchetto regalo ci sia qualcosa di adatto al bambino.
Mairead McGuinness
per iscritto. - (EN) Ho votato a favore della relazione Thyssen con piacere, anche se questioni di natura procedurale hanno rischiato di compromettere il voto finale.
I giocattoli devono essere sicuri e l'UE dovrebbe essere sempre in prima linea nelle questioni attinenti alla sicurezza.
E' indispensabile prevedere un'interdizione totale dell'uso di sostanze chimiche che sono cancerogene, mutagene o tossiche per la riproduzione. Sebbene siano state previste deroghe per singoli casi, saranno concesse soltanto previo parere favorevole del comitato scientifico europeo.
E' altresì opportuno vietare l'utilizzo di fragranze allergizzanti e 55 di tali sostanze d'ora in avanti non potranno essere utilizzate nei giocattoli.
Parimenti sono state imposte norme molto stringenti sull'uso dei metalli pesanti e sono stati fissati valori limite.
I genitori che comprano i regali di Natale quest'anno partono dal presupposto che i giocattoli sono sicuri. Questa direttiva riveduta sulla sicurezza dei giocattoli migliorerà sostanzialmente la situazione, che sarebbe già migliorata se il testo riveduto fosse stato già in vigore per queste festività.
Rareş-Lucian Niculescu
I dati sono più eloquenti di qualsiasi parola. La stampa rumena ha pubblicato proprio oggi i risultati di un controllo svolto dall'ufficio rumeno per la protezione dei consumatori. Gli ispettori hanno rilevato che in una recente ispezione il 90 per cento dei giocattoli controllati sono risultati non conformi alla normativa.
Alcuni giocattoli non recavano le istruzioni e non precisavano l'età consigliata per l'uso. Gli ispettori hanno trovato anche pistole e spade giocattolo ritenute pericolose. Altri giocattoli contenevano piccole parti facilmente staccabili.
Secondo i risultati di tale controllo, la Cina rimane il principale paese d'origine dei giocattoli pericolosi e ciononostante è il maggiore esportatore di giocattoli nell'Unione europea. Occorrono provvedimenti drastici per la sicurezza al fine di salvaguardare il benessere dei nostri bambini.
Bart Staes
per iscritto. - (NL) La nuova normativa sulla sicurezza dei giocattoli è un passo nella direzione giusta, anche se spreca alcune opportunità. Per questo motivo ho scelto di non approvare la relazione.
Per esempio, l'utilizzo di talune fragranze allergizzanti e, tra le altre, di sostanze chimiche cancerogene, mutagene o tossiche per la riproduzione è stato ridotto, ma tali sostanze non saranno interdette integralmente, bensì tolte gradualmente. Inoltre non sono stati fissati requisiti vincolanti per i giocattoli che producono suoni.
Un aspetto positivo è che, di fronte alla legge, gli importatori di giocattoli saranno equiparati ai produttori. Un aspetto meno positivo della direttiva sono le disposizioni poco incisive in materia di vigilanza sul rispetto degli standard di sicurezza per i giocattoli, poiché sono i produttori stessi a essere responsabili di questo aspetto della sicurezza.
La direttiva stabilisce che gli Stati membri devono svolgere analisi a campione, ma temo che questo disposto manchi di forza cogente.
I controlli sulla sicurezza sono casuali e, ad oggi, non esiste un vero e proprio marchio di qualità europeo che consenta ai genitori di fare scelte informate e di non comprare giocattoli potenzialmente pericolosi per la salute dei loro figli. Una certificazione obbligatoria da parte di enti indipendenti potrebbe ovviare a questa carenza. Gli Stati Uniti e la Cina attribuiscono entrambi notevole importanza alla sicurezza dei prodotti e hanno votato di recente una normativa che rende questi controlli obbligatori. Perché l'Europa è rimasta indietro?
Catherine Stihler
per iscritto. - (EN) Da tempo la normativa esistente in materia di sicurezza dei giocattoli necessitava di un aggiornamento. Accolgo con favore l'occasione di voto odierna. La sicurezza dei bambini deve annoverarsi tra le nostre massime priorità e spero che anche l'industria dei giocattoli prenderà sul serio questo aspetto.
Bernadette Vergnaud
per iscritto. - (FR) Ritengo che il compromesso sulla relazione Thyssen sia troppo lassista in relazione alle regole per la sicurezza o alla presenza di sostanze chimiche nei giocattoli. Inoltre non è stato approvato l'emendamento in cui si prevedeva un controllo di conformità dei giocattoli tramite organismi indipendenti, sebbene sembra evidente che la sicurezza dei bambini debba essere posta innanzi agli interessi dei grandi gruppi industriali. Sono sempre stata favorevole a una vigilanza più severa sui prodotti in generale e tanto più sui prodotti destinati all'infanzia. Il contenuto finale deludente di questo atto - di gran lunga inferiore alle nostre ambizioni iniziali, pur consentendo alcuni progressi - mi ha convinta a scegliere l'astensione dal voto.
Avril Doyle
per iscritto. - (EN) Il sistema europeo di crediti per l'istruzione e la formazione professionale (ECVET) è stato concepito per agevolare e incoraggiare la mobilità transnazionale degli studenti e l'accesso alla formazione permanente. A livello operativo, l'ECVET migliorerà il trasferimento, il riconoscimento e la cumulabilità dei risultati di studio. Il quadro europeo delle qualifiche già fornisce uno strumento per "convertire” e valutare la vasta gamma di qualifiche esistenti in Europa. Il sistema di crediti fornisce un ulteriore strumento di conversione e trasposizione, nella misura in cui si avvale di un quadro metodologico condiviso per agevolare il trasferimento dei risultati di apprendimento da un sistema all'altro. L'importanza di un investimento nel futuro della nostra economia europea basata sulla conoscenza non potrà mai essere ribadita a sufficienza e questo metodo transnazionale di riconoscimento dei risultati educativi ci offre uno strumento utile. Sono pienamente favorevole alla proposta per l'istituzione di questo sistema di crediti.
Nicolae Vlad Popa
L'istruzione e la formazione professionale hanno acquisito maggiore rilievo negli ultimi anni.
L'introduzione di un sistema europeo di crediti per l'istruzione e la formazione professionale contribuirà a sviluppare ed espandere la cooperazione europea nel settore dell'istruzione.
Questo sistema migliorerà inoltre la mobilità e la trasferibilità delle qualifiche a livello nazionale tra diversi settori dell'economia e all'interno del mercato del lavoro.
L'istruzione e la formazione professionale sono un elemento chiave del lavoro compiuto a livello europeo per fare fronte alle criticità sociali legate all'invecchiamento della popolazione, riaffermare la propria posizione nell'economia globale e risolvere la crisi economica.
Ritengo pertanto importante che gli Stati membri riconoscano l'istruzione formale e informale, specialmente alla luce del fatto che il numero di diplomati con curricula professionali subirà una flessione drammatica tra il 2009 e il 2015. In quel medesimo periodo assisteremo a un incremento della domanda di personale con qualifiche tecnico-professionali in grado di soddisfare le richieste del mercato del lavoro. Considero particolarmente importante che in questo settore gli organismi europei sostengano attivamente la collaborazione tra gli Stati membri e le aziende europee al fine di realizzare un sistema finanziato in compartecipazione.
Andrzej Jan Szejna
Per realizzare gli obiettivi della strategia di Lisbona - relativi a crescita economica, competitività, occupazione e coesione sociale - è fondamentale potenziare la formazione professionale.
Il sistema europeo di crediti per l'istruzione e la formazione professionale (ECVET) s'iscrive tra le varie proposte avanzate a livello europeo nel settore della formazione. I risultati dell'apprendimento sono estremamente eterogenei in ragione delle diversità tra i sistemi d'istruzione e formazione nazionali. ECVET offre un quadro metodologico che include le conoscenze, competenze e abilità acquisite, affronta la questione del trasferimento e dell'accumulo dei crediti e la colloca entro il contesto delle qualifiche. Questo sistema agevola la mobilità transfrontaliera dei lavoratori e consente di incrementare la trasparenza sulle qualifiche professionali acquisite all'estero.
ECVET potrebbe rivelarsi uno strumento prezioso per adeguare l'istruzione e la formazione professionale alle esigenze del mercato del lavoro, a condizione che tenga conto di talune specificità nazionali e regionali. Tale sistema deve essere anche al servizio degli utenti e qui mi riferisco ai lavoratori e alle imprese, comprese le PMI e i piccoli ambienti di lavoro europei. ECVET contribuisce alla mobilità transfrontaliera e agevola l'accesso all'apprendimento permanente legato all'offerta educativa professionale. Le persone in formazione dovrebbero essere così in grado di scegliere il proprio percorso di crescita professionale.
A mio giudizio, l'introduzione del sistema ECVET rappresenterà un contributo importante alla creazione di un mercato del lavoro europeo, a condizione che vengano ridotti gli oneri amministrativi ad esso associati.
Peter Skinner
per iscritto. - (EN) Condivido l'impostazione del relatore che rispecchia le odierne preoccupazioni di numerosi cittadini europei.
L'esistenza di un intervento europeo coordinato su questo tema dimostra che l'Europa può cambiare per il meglio la vita delle persone anche nel mezzo di crisi come quella attualmente in corso.
Il relatore ha previsto provvedimenti concreti e ciò ha contribuito a rendere pragmatica l'intera proposta.
David Martin
per iscritto. - (EN) Ho votato a favore di questa relazione che semplifica gli obblighi contabili delle piccole e medie imprese, riducendone di conseguenza gli oneri amministrativi.
Nicolae Vlad Popa
La proposta della Commissione volta a promuovere la semplificazione e l'armonizzazione del diritto societario europeo, con l'obiettivo concreto di ridurre gli oneri amministrativi del 25 per cento entro il 2012, rappresenta uno strumento indispensabile per incrementare l'efficienza delle aziende europee e il potere d'attrazione dell'economia comunitaria tramite un risparmio stimato di 150 miliardi di euro.
L'iniziativa relativa alla revisione delle disposizioni della quarta e settima direttiva sul diritto societario esonera sia le piccole che le medie imprese o le imprese madri con imprese figlie che presentano un interesse irrilevante dall'obbligo di comunicare informazioni contabili e di redigere conto consolidati. Tale proposta ha integrato il contributo della relatrice che la sostiene e garantirà, per il futuro, la stabilità e la certezza di un quadro legislativo idoneo destinato a un segmento dell'economia estremamente importante per la creazione di posti di lavoro nell'UE.
Condivido altresì l'enfasi posta dalla relatrice sulla necessità di trasparenza e di disponibilità di informazioni accurate per tutti i portatori d'interesse, in particolare tramite la realizzazione su larga scala di sistemi di rendicontazione economica e finanziaria basati sulle tecnologie dell'informazione e della comunicazione.
Andrzej Jan Szejna
La relazione van den Burg che modifica taluni obblighi di comunicazione a carico delle medie imprese e l'obbligo di redigere conti consolidati è un ottimo documento giuridico.
La relazione preparata dalla commissione giuridica mira a semplificare in tempi brevi le condizioni operative per le piccole imprese europee. Come scopo primario si prefigge di sgravare tali imprese dall'obbligo di comunicare le informazioni relative alle spese considerate come attivi (ovvero i costi di avviamento dell'impresa), oltre che dall'obbligo di redigere rendiconti finanziari consolidati nei casi in cui una società madre ha imprese figlie che presentano un interesse irrilevante.
Nell'ottica di un'armonizzazione del diritto societario, credo che queste deroghe concesse sia alle piccole che alle medie imprese non mettano in alcun modo a repentaglio la trasparenza. Credo piuttosto che questa iniziativa potrebbe ridurre sostanzialmente il loro carico amministrativo e finanziario.
Jan Andersson, Göran Färm, Inger Segelström e Åsa Westlund
per iscritto. - (SV) I sottoscritti quattro rappresentanti socialdemocratici svedesi presso questo Parlamento hanno deciso in definitiva di votare a favore della relazione dell'onorevole Moreno Sánchez. Condividiamo alcune delle preoccupazioni espresse in merito alle caratteristiche che Frontex sta assumendo. A nostro giudizio, lo strumento Frontex non dovrebbe essere militarizzato e pertanto abbiamo votato a favore dell'emendamento n. 2. Frontex non deve diventare l'occasione per erigere bastioni più alti tra l'UE e il mondo esterno. Al contrario, ci teniamo che l'UE persegua una politica generosa nei confronti dei rifugiati e degli immigrati. Comunque siamo lieti che la discussione su Frontex abbia consentito di trattare l'argomento in seno al Parlamento europeo. E' positivo che il Parlamento europeo abbia chiesto di annoverare tra i compiti di Frontex anche la lotta contro il traffico di esseri umani e che si verifichi la conformità del diritto comunitario al diritto internazionale altrimenti applicabile in questo ambito, affinché l'UE possa prendere i provvedimenti più efficaci per aiutare le persone in stato di bisogno.
Bruno Gollnisch
per iscritto. - (FR) L'agenzia Frontex, responsabile della gestione comune delle frontiere esterne dell'Unione europea e della lotta contro l'immigrazione clandestina, deve la propria esistenza allo smantellamento dei controlli alle frontiere interne e alla volontà dell'Europa di Bruxelles e dei governi nazionali di perseguire una politica d'immigrazione attiva. Rimane da vedere se tale agenzia comunitaria offra un reale valore aggiunto rispetto alla cooperazione intergovernativa classica, alla pari di quanto si può riscontrare, in un contesto diverso, a livello di diversa efficacia e utilità dell'Europol e dell'Interpol.
I compiti dell'agenzia sembrano diventare sempre più numerosi, complessi e a dire il vero insormontabili se non si affronterà il problema alla radice. Infatti l'Europa resta, da un lato, un Eldorado sociale ed economico per gli aspiranti immigrati clandestini, nonostante le traversie del viaggio e le difficoltà che incontrano all'arrivo, mentre dall'altro lato la politica di cooperazione, di per sé inadeguata, è messa in pericolo dall'immigrazione di personale diplomato e qualificato pianificata dall'UE medesima. E' indispensabile bloccare questa tendenza migratoria e le politiche in corso di attuazione.
Vorrei peraltro ricordare l'esistenza di associazioni locali che lottano contro l'emigrazione clandestina, come ad esempio l'ALCEC creata da Emile Bomba in Camerun, che meriterebbero un aiuto e un sostegno.
Pedro Guerreiro
Il Parlamento europeo non avrebbe potuto celebrare la giornata internazionale degli emigranti in maniera più inopportuna, ossia approvando una relazione che caldeggia un rafforzamento di Frontex e che "plaude all'approvazione del patto europeo sull'immigrazione e l'asilo da parte del Consiglio europeo”.
Alla pari di Frontex, la spietata "direttiva sul rimpatrio” rappresenta una colonna portante della politica d'immigrazione dell'UE - criminalizzante, imperniata sulla sicurezza, sfruttatrice ed elitaria.
A seguito dell'approvazione da parte del Parlamento, il Consiglio "Trasporti, telecomunicazioni ed energia” ha adottato questa direttiva furtivamente e in sordina lo scorso 9 dicembre, grazie al voto favorevole del governo portoghese.
I deputati del partito socialista portoghese eletti in questo Parlamento si sforzino pure di occultare il comportamento del proprio partito e governo. La verità è che il governo portoghese ha votato a favore di questa vergognosa direttiva in seno al Consiglio UE.
Adesso è fondamentale osteggiare questa direttiva nella fase della sua trasposizione in Portogallo. Ciò significa denunciarne i tratti disumani e contrari ai diritti umani, mobilitando tutti coloro che stanno lottando per difendere la dignità umana degli emigranti.
Il partito comunista portoghese rimarrà in prima linea in questa battaglia, lotterà per respingere i contenuti ignobili di questa direttiva e ottenere la ratifica della Convenzione internazionale delle Nazioni Unite sulla protezione dei diritti di tutti i lavoratori migranti e dei membri delle loro famiglie.
Carl Lang e Fernand Le Rachinel
per iscritto. - (FR) Il considerando B della relazione recita: "l'immigrazione illegale costituisce una sfida comune per l'Europa". Quest'affermazione appare tanto più veritiera se pensiamo che ogni mese migliaia di immigrati clandestini alla ricerca di un "Eldorado" europeo si riversano sulle spiagge italiane, greche o spagnole.
A fronte di questa sfida che, ricordiamolo, risale essenzialmente agli accordi di Schengen con cui sono stati aboliti i controlli alle frontiere interne degli Stati membri, la risposta dell'Unione è stata la creazione di un'Agenzia europea per la gestione delle frontiere esterne, Frontex.
Quella che fino a ieri era un'istituzione vuota, priva di risorse, personale e poteri, sembra essere stata dotata oggi di un mandato che le consente di partecipare alle operazioni congiunte di rimpatrio e di contribuire, almeno in minima parte, alla lotta quotidiana contro l'immigrazione clandestina.
Ma rendiamoci conto che non servirà a nulla chiudere qualche breccia attraverso cui passano i clandestini se gli Stati membri dell'Unione non riescono a reagire di concerto per denunciare gli accordi di Schengen e ripristinare dei controlli effettivi a tutte le frontiere terrestri e marittime.
Adam Bielan
per iscritto. - (PL) I nostri mercati sono invasi sempre più spesso da prodotti contraffatti. Ciò rappresenta un problema grave per le imprese europee che operano legalmente e ottemperano ai requisiti di sicurezza e che non possono competere contro prodotti più economici e contraffatti. Ma l'aspetto peggiore è che i prodotti contraffatti, siano questi alimenti, parti di ricambio, cosmetici, giocattoli e specialmente medicinali, mettono in serio pericolo la salute e la vita dei consumatori.
La legislazione in vigore presenta lacune che consentono ai prodotti contraffatti di accedere agevolmente ai nostri mercati. Per esempio, la legislazione polacca non contiene alcuna definizione precisa delle caratteristiche dei prodotti medici contraffatti. Assumere medicinali contraffatti non ha certamente le medesime conseguenze di utilizzare un profumo contraffatto. Se le persone non conoscono il problema e utilizzano medicinali contraffatti, gli effetti potrebbero essere tragici.
Glyn Ford
per iscritto. - (EN) Ho votato a favore della relazione Susta. La contraffazione può distruggere posti di lavoro, arrecare danni alla salute e diventare un mezzo di finanziamento per i gruppi criminali internazionali e il terrorismo. A causa della sua gravità, il Parlamento, il Consiglio e la Commissione devono adottare tutte le misure del caso.
D'altra parte, le multinazionali perennemente tese a massimizzare i profitti creano un clima che incoraggia la produzione di articoli contraffatti e la loro accettazione da parte del pubblico. Mi limiterò a citare un esempio. A seguito della suddivisione dei DVD per aree geografiche, esistono differenze di prezzo sostanziali tra le diverse regioni, cui i consumatori possono ovviare soltanto adattando illegalmente i loro lettori DVD o acquistando illegalmente DVD piratati, poiché una commercializzazione globalmente uniforme di questi prodotti è stata resa impossibile tramite un trucco tecnologico. Potete immaginare quante altre aziende perseguono il massimo profitto in maniera analoga in tutti i settori.
Bruno Gollnisch
per iscritto. - (FR) La contraffazione non è solo un problema di rispetto dei diritti di proprietà intellettuale. Come sottolineato dal relatore, questo fenomeno soffoca l'impulso all'innovazione, causa la scomparsa di migliaia di posti di lavoro qualificati e non in Europa, getta le basi per un'economia clandestina controllata dal crimine organizzato. Queste pratiche illegali possono mettere anche a repentaglio la sicurezza e la salute dei consumatori o arrecare gravi danni all'ambiente.
Il problema della contraffazione s'iscrive nel problema più generale della qualità e pericolosità dei prodotti importati, la cui contraffazione amplifica ulteriormente i rischi nella misura in cui induce in errore i consumatori. I paesi d'origine di tali prodotti sono stati ampiamente individuati e al primo posto si colloca la Cina. L'Unione talvolta accetta perfino di aprire i propri mercati a prodotti che non rispettano le norme imposte ai produttori europei, come nel caso del pollo al cloro, meno costoso da produrre rispetto ai polli sottoposti a controlli veterinari.
Nello strumentario di misure proposte dal relatore (accordi bilaterali o multilaterali, cooperazione con i paesi d'origine, cooperazione tra i servizi europei competenti) non figurano le sanzioni commerciali contro gli Stati che avallano queste pratiche e l'imposizione di un sistema generale di preferenze nazionale ed europeo.
Hélène Goudin e Nils Lundgren
per iscritto. - (SV) Junilistan è favorevole al libero mercato interno e abbraccia qualsiasi proposta costruttiva volta a contrastare i fenomeni che distorcono il mercato, tra cui anche la contraffazione dei marchi.
Cionondimeno, sia la relazione della commissione che la proposta alternativa di risoluzione raccomandano un intervento normativo a livello comunitario che è molto più esteso di quanto effettivamente necessario a fronteggiare i problemi causati dalla contraffazione.
Nello specifico, Junilistan è contraria alle proposte di coordinamento delle attività delle autorità giudiziarie e di polizia e all'armonizzazione del diritto penale tra i vari Stati membri.
Per i motivi enunciati siamo obbligati a votare contro l'intera relazione.
Vasco Graça Moura
L'importanza crescente dei diritti di proprietà intellettuale conferma un paradigma inconfutabile: l'economia moderna attribuisce valore alla conoscenza su cui si basa e vuole proteggerla. Le attività produttive di qualsiasi settore dipendono dalla possibilità di detenere diritti esclusivi sull'utilizzo di un know-how specifico. La contraffazione è spesso sanzionata perché il danno arrecato alle industrie che operano lecitamente ha evidenti ripercussioni in termini di occupazione, ricerca e sviluppo. Tali effetti destano gravi preoccupazioni nel mio paese.
Detto questo, bisogna riconoscere che la contraffazione oggi non arreca esclusivamente danni economici: la sua portata negativa si è estesa verso nuove frontiere; mentre prima era diffusa la contraffazione di capi d'abbigliamento, adesso esistono medicinali e prodotti alimentari contraffatti che possono avere effetti nocivi. Il consumatore ignaro non capisce l'entità del rischio.
E' nostro dovere contrastare questo tipo di contraffazione. Occorrono sanzioni più aspre, coordinamento e cooperazione tra le autorità competenti e l'armonizzazione dei principi giuridici validi presso le varie giurisdizioni partecipanti.
Oltre a istituire meccanismi efficaci di composizione delle eventuali controversie, dovremmo addivenire a una sorta di "accordo commerciale anticontraffazione”. Si tratta di un accordo internazionale multilaterale, attualmente in discussione, che offre nuovi strumenti giuridici atti a stabilire provvedimenti di sorveglianza e sanzionatori adeguati.
Pedro Guerreiro
La risoluzione adottata dal Parlamento europeo contiene tematiche e proposte cui noi siamo favorevoli, pur dissentendo su taluni aspetti.
La lotta contro la contraffazione dovrebbe essere senz'altro una priorità. Tuttavia, seppure la risoluzione affermi che i diritti di proprietà intellettuale "comprese le indicazioni geografiche e le denominazioni di origine, non sono sempre tutelati in modo efficace dai partner commerciali dell'Unione europea”, è opportuno sottolineare che neppure l'UE brilla da questo punto di vista. Il Consiglio ha bloccato una proposta per un regolamento relativo al "made in” e non ha approvato nessun altro provvedimento volto a imporre regole vincolanti alle importazioni da paesi terzi in merito all'indicazione del marchio d'origine dei prodotti.
Da parte nostra, continueremo a incoraggiare l'approvazione di provvedimenti comunitari con cui tutti i paesi sono invitati ad approvare e attuare misure contro la contraffazione dei marchi commerciali e il contrabbando, oltre a prevedere controlli doganali specifici per l'individuazione di prodotti accompagnati da false dichiarazioni d'origine o che violano le norme di tutela dei marchi.
Ogni paese dovrebbe attuare misure di protezione contro le esportazioni aggressive, effettuando controlli e ispezioni sistematiche sulle merci importate e facendo ricorso, ove necessario, a clausole di salvaguardia.
David Martin
per iscritto. - (EN) Ho votato a favore di questa relazione che svolge un ruolo importante nella lotta contro la contraffazione, che riguarda all'incirca il 7-10 per cento degli scambi mondiali con un costo pari a 500 miliardi di euro. La relazione articola una proposta coerente e razionale per la lotta che l'UE può intraprendere contro la contraffazione, una posizione che gode del mio consenso. Pur rispettando diritti fondamentali quali la protezione della riservatezza e dei dati, la relazione offre un quadro per un impegno congiunto contro la contraffazione e protegge così migliaia di posti di lavoro qualificati.
Avril Doyle
per iscritto. - (EN) L'onorevole Ortega propone uno strumento di gestione dei documenti giuridici chiamati atti autentici. Gli atti autentici provengono in genere dagli Stati membri fondati sulla civil law, dove la fonte principale del diritto è la legislazione, in opposizione ai paesi di common law (ossia Irlanda e Regno Unito) che fanno maggiore riferimento a diritti e privilegi consuetudinari. Secondo la civil law, un atto autentico deve essere predisposto da un funzionario pubblico o da altra autorità competente e l'autenticità riguarda sia l'atto medesimo che i suoi contenuti. I contenuti possono spaziare dalle operazioni finanziarie, fino alle registrazioni pubbliche o altri documenti di questo genere.
La proposta di risoluzione parlamentare intende incoraggiare un maggiore intervento legislativo tra gli Stati membri che posseggono tali atti tramite il loro mutuo riconoscimento e utilizzo in ambiti specifici. Questa proposta aggiunge peso alla legislazione pregressa e conferisce potenziali vantaggi ai paesi che si reggono su questo tipo di tradizione giuridica.
Carl Lang e Fernand Le Rachinel
per iscritto. - (FR) Questa relazione relativa all'impiego transfrontaliero e al riconoscimento degli atti autentici pone alcuni rischi di confusione per diversi motivi.
In primo luogo è opportuno precisare che la nozione di atto autentico non esiste nei sistemi common law. In Inghilterra e nel Galles, gli avvocati (solicitors) svolgono funzioni notarili. Esistono anche notai di professione (scrivener notaries) che però non possono rilasciare atti autentici e sono esclusivamente abilitati a certificare l'autenticità delle firme.
Per un eccesso di zelo nel volere armonizzare le professioni giuridiche, la Commissione presta poca attenzione alle differenze motivate dalla natura stessa degli ordinamenti giuridici degli Stati membri.
Purtroppo, questa volontà politica non favorisce la sicurezza giuridica nel suo complesso.
L'Europa deve preservare l'identità dei suoi popoli, nonché i valori e le tradizioni proprie di ciascuno Stato. Sarebbe un errore madornale costruire l'Europa contro i suoi stessi popoli.
David Casa
per iscritto. - (MT) Questa relazione riveste un'importanza estrema e dovrebbe essere considerata un riferimento su cui poggeranno numerose altre decisioni in futuro. L'impiego dell'informatica e delle tecnologie della comunicazione in ambito giudiziario agevola notevolmente il lavoro delle amministrazioni e della giustizia. In un'Europa che lavora per una maggiore integrazione e unità economica e sociale, occorrono anche strumenti in grado di metterci al passo con i tempi. Questa è la nozione alla base di e-Justice.
Nel contempo non dobbiamo però dimenticare che i sistemi tradizionali utilizzati in passato avevano anch'essi dei pregi e credo che trovando un equilibrio adeguato possiamo lavorare insieme in maniera più armonica e a vantaggio di tutti. Il ricorso al sistema di giustizia elettronica consentirà alla giustizia di concentrarsi sul proprio lavoro senza essere appesantita da un ulteriore onere amministrativo.
Carlos Coelho
Lo Spazio di giustizia europeo è stato creato grazie al riconoscimento reciproco delle sentenze e alla creazione di una cultura di cooperazione giuridica tra le autorità competenti al fine di favorire la libera circolazione dei cittadini in Europa.
Si stima che all'incirca 10 milioni di persone sono coinvolte in contenziosi transfrontalieri in Europa, con tutte le difficoltà connesse in termini di lingua, distanza, ordinamenti giuridici sconosciuti, eccetera.
L'utilizzo delle tecnologie dell'informazione e della comunicazione nell'amministrazione della giustizia può offrire nuove soluzioni e migliorare il funzionamento della giustizia in termini di migliore accessibilità ed efficienza, razionalizzare le procedure e tagliare i costi.
La strategia proposta per la giustizia elettronica si prefigge essenzialmente di rendere la giustizia più efficace in tutta Europa a vantaggio della cittadinanza. Tuttavia il potenziale campo d'applicazione della giustizia elettronica potrebbe essere molto più esteso; occorre quindi definire chiaramente i limiti della sua portata al fine di non mettere a rischio l'efficacia e la credibilità delle iniziative comunitarie.
Qualsiasi cambiamento deve essere introdotto per gradi e in sintonia con l'avanzamento dello Spazio di giustizia europeo e lo sviluppo tecnologico.
Sono favorevole alla richiesta avanzata affinché la Commissione predisponga un piano d'azione e un portale europeo per la giustizia elettronica.
Avril Doyle
per iscritto. - (EN) Il Consiglio "Giustizia e affari interni” del 2007 ha approvato nelle sue conclusioni l'utilizzo della giustizia elettronica - relativo all'utilizzo transfrontaliero delle tecnologie informatiche nel settore giudiziario - e stabilito che sarebbero dovuti proseguire gli sforzi per la creazione di un sistema centralizzato per lo spazio europeo di libertà, sicurezza e giustizia. Adesso che l'utilizzo di Internet si avvicina alla saturazione e si comprende appieno la portata di questa società basata sull'informazione, un maggiore sostegno tecnologico alla giustizia appare un chiaro vantaggio per tutti. Tuttavia è importante riconoscere che la tecnologia ha uno sviluppo eterogeneo all'interno dell'Unione e che questo strumento rimarrà facoltativo fino al momento in cui sarà raggiunto uno sviluppo più uniforme e un potenziale tecnico più avanzato.
La proposta della collega Wallis riguarda la realizzazione di un sistema centralizzato di giustizia elettronica e illustra le azioni da intraprendere per la creazione di un portale di e-Justice europeo in cui saranno raccolti tutti gli aspetti attinenti a questioni civili, penali e commerciali e che conterrà, a titolo esemplificativo, i dati di casellari giudiziari, registri tavolari e fallimentari che saranno così accessibili agli Stati membri.
Alessandro Battilocchio
per iscritto. - Grazie Presidente, dichiaro il mio voto favorevole alla relazione sulle prospettive di sviluppo per la costruzione della pace e la ricostruzione della nazione nelle situazioni postconflittuali, curata dal collega Nirj Deva, per l'attenzione alle responsabilità della Comunità internazionale nei confronti degli Stati, o dei gruppi locali, coinvolti in un conflitto. Sono compiaciuto del fatto che gli emendamenti proposti dal PSE hanno portato ad un sostanziale miglioramento della proposta, con riferimento alla necessità di maggiore coordinazione tra attività di costruzione della pace, d'aiuto umanitario e di sviluppo nei paesi che escono da conflitti. Vorrei porre l'accento sulla situazione dei bambini nelle zone di conflitto, in particolare coloro che hanno perso uno, o entrambi, i genitori. Inoltre molto spesso sono gli ospedali e le scuole a essere oggetto di attacchi dalle forze in conflitto. Occorre lavorare affinché i bambini possano superare i traumi post-conflittuali, attraverso la cooperazione con l'UNICEF, già presente in molte aree a rischio nel mondo, per garantire ai bambini un'educazione adeguata e un futuro migliore.
Hélène Goudin e Nils Lundgren
per iscritto. - (SV) Junilistan ritiene che la costruzione della pace e delle nazioni nei paesi in via di sviluppo non siano questioni di competenza dell'UE. Tali criticità devono essere infatti affrontate nell'ambito delle Nazioni Unite.
Siamo estremamente critici rispetto ad alcune frasi della relazione in cui si raccomanda il progressivo sviluppo della forza militare comunitaria e abbiamo pertanto votato contro questo documento.
Pedro Guerreiro
Preso atto che è impossibile commentare la mescolanza (intenzionale) di temi che questa relazione abbraccia, ci concentreremo su quello che consideriamo il suo obiettivo primario: sdrammatizzare l'ingerenza delle maggiori potenze dell'UE presso paesi terzi occultandola sotto il concetto di "responsabilità di proteggere”.
La relazione ribadisce certo la sovranità degli Stati, ma ritiene "tuttavia che, nei casi in cui i governi non siano in grado o non abbiano la volontà di proteggere, la responsabilità di prendere provvedimenti appropriati diventa la responsabilità collettiva della comunità internazionale in senso allargato”. Si osserva inoltre che tali provvedimenti "dovrebbero essere sia preventivi che reattivi, e dovrebbero comportare l'utilizzo della forza militare soltanto come extrema ratio”. Il linguaggio evidentemente non tradisce le intenzioni.
Comunque la relazione fuga ogni dubbio laddove "richiede” che "la responsabilità di proteggere subentri al principio di non intervento” ed è convinta che "esistano due fasi di costruzione della pace e dello Stato: la fase di stabilizzazione in cui l'accento viene posto sulla sicurezza, lo stato di diritto e la fornitura dei servizi di base, e la seconda fase di costruzione dello Stato che si concentra sulla governance e sulle istituzioni volte a garantirla”.
Questa relazione rappresenta un'incitazione all'ingerenza e al colonialismo.
Eija-Riitta Korhola
per iscritto. - (FI) Ho votato a favore della relazione Deva sulle prospettive di sviluppo per la costruzione della pace e la ricostruzione della nazione nelle situazioni postconflittuali perché affronta in maniera approfondita gli aspetti cruciali per la riuscita della ricostruzione. Il tema è fondamentale se si considera che metà dei paesi usciti da un conflitto rientrano in guerra entro cinque anni. Oltre al paese stesso che versa in una situazione di fragilità, nella ricostruzione di una nazione svolge un ruolo importante la comunità internazionale. Credo in particolare nell'importanza di consultare e sostenere ancora più di prima le organizzazioni femminili locali e le reti internazionali femminili per la pace, nonché di insistere sui diritti delle vittime di abusi sessuali e in particolare sul diritto di avere accesso alla giustizia. E' anche opportuno ricordare che la pace non significa unicamente assenza di guerra. Per avere successo, qualsiasi politica di ricostruzione deve affrontare le cause stesse dell'instabilità adottando i provvedimenti socioeconomici, politici e culturali che possono promuovere la crescita economica e creare le capacità istituzionali e amministrative.
Luca Romagnoli
per iscritto. - Signor Presidente, Onorevoli colleghi, esprimo il mio voto favorevole alla relazione dell'Onorevole Deva sulle prospettive di sviluppo per ciò che concerne il consolidamento della pace e dello Stato al termine dei conflitti. Il collega traccia un ottimo percorso di quella che dovrebbe essere l'ideale transizione dalla situazione di post-conflitto a una di normalizzazione della vita sociale ed economica.
Ritengo che ciò debba essere tenuto presente nella risoluzione dei troppo numerosi e violenti conflitti interni ai paesi, soprattutto relativamente al ruolo della Comunità Europea ed internazionale. Mi unisco al collega nel ritenere che la strada verso la risoluzione dei conflitti sia facile da tracciare e difficile da percorrere concretamente. Ciò non toglie, tuttavia, il fatto che almeno da parte dell'Unione Europea, l'azione debba essere volta al serio sostegno dei paesi in difficoltà e totalmente libera da posizioni ipocrite e di comodo.
|
{
"pile_set_name": "EuroParl"
}
|
Q:
difference of 湿らす vs 潤す
湿らす and 潤す.
Both of them have the same meaning? what's the difference between them?
Thanks,
Or
A:
湿らす (or its intransitive counterpart 湿る) is a matter-of-fact verb that means "to make/become (mildly) wet".
濡らす (or its intransitive counterpart 濡れる) is "to make/become (hardly/drippingly) wet".
潤す (or its intransitive counterpart 潤う) is used with a narrower range of objects that are considered "unhealthy" when dry, e.g., skin, lip, throat, meat, earth. In other words, 潤す means supplying water to something and putting it into a healthy moist status. The kanji 潤 has a clearly positive connotations (see: 潤い). 潤す even means "to (financially) enrich", "to (psychologically) comfort", etc., too.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Call a function on a ReactJs component class object
I have the following ReactJs class:
var Linker = React.createClass({
foo: function(){
console.log('foo')
},
render: function(){
return (
<div></div>
);
}
});
Now I want to call the foo() function from the outside:
Linker.foo();
But it throws:
app.js:4654 Uncaught TypeError: Linker.foo is not a function
Can anyone tell me how to call the foo() method?
Reason: I have to use an older version of react-router where I need an old ES5 class with some mixins to transition to a different route. My Plan is to let the Linker class do the transition while my main class is ES6.
A:
If you need access to the function without any context, then you can find it directly on the prototype property for Linker.
Linker.prototype.foo();
|
{
"pile_set_name": "StackExchange"
}
|
"The β1-selective adrenoceptor agonist dobutamine": a fallacy being perpetuated.
Dobutamine is a cardiotonic agent, developed as a racemate more than 30 years ago. The compound soon got the label "the β(1)-selective adrenoceptor agonist". However, a closer examination of the enantiomers showed that (+)-dobutamine is predominantly a β(1)- and β(2)-adrenoceptor agonist with modest selectivity whereas (-)-dobutamine is predominantly an α(1)-adrenoceptor agonist. Nevertheless, rac dobutamine is still frequently used as a tool for classification of β-adrenoceptors. This ignorance of chirality may lead to erroneous conclusions and consolidate false labels.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Hyperpolarised gases in magnetic resonance: a new tool for functional imaging of the lung.
In magnetic resonance imaging (MRI), nuclear spins are the source of the image signal. In the lung, low-proton spin density in alveolar gas and abundant gas-tissue interfaces substantially impair conventional native 1H-MRI. Spin polarisation can be increased in two non-radioactive noble gas isotopes, 3He and 129Xe, by exposure to polarised laser light. When inhaled, such "magnetized" gases provide high-intensity MR images of the pulmonary airspaces. Thus, hyperpolarised gas (HPG) MRI opens up new routes to a) morphologic imaging of airways and alveolar spaces, and b) analysis of the intrapulmonary distribution of inhaled aliquots of these tracer gases; c) diffusion-sensitive MRI-techniques allow mapping of the "apparent diffusion coefficient" (ADC) of 3He within lung airspaces, where ADC is physically related to local bronchoalveolar dimensions; d) also, 3He magnetisation decays in an oxygen-containing atmosphere at a rate proportional to ambient PO2. This property allows image-based determination of regional broncho-alveolar PO2 and its decrease during a breathhold. Currently, these modalities of functional lung imaging are being assessed by several European and American research groups in animal models, human volunteers and patients. First results show good imaging quality with excellent spatial and unprecedented temporal resolution, and attest to the reproducibility, feasibility and safety of the technique. Regionally impaired ventilation of both structural and functional origin is detected with high sensitivity, e.g. in smokers, asthmatics, patients with COPD or after lung transplantation. Studies into regional ADC and PO2 measurement demonstrate good agreement with reference methods and physiological predictions. The present limitations of HPG-MRI include the HPG production rate and the US and EU health authorities' still pending final approval for clinical use.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Clinical Significance and Phenotype of MTA1 Expression in Esophageal Squamous Cell Carcinoma.
Metastasis-associated gene 1 (MTA1) is considered a potential prognostic factor in esophageal cancer. We investigated the clinical relationship between MTA1, LAT1, and tumor metabolism, as evaluated by positron emission tomography (PET) in esophageal squamous cell carcinoma. We analyzed 142 esophageal squamous cell carcinoma patients who underwent curative resection without preoperative treatment. MTA1 expression was assessed by immuno-zahistochemistry, and tested against standardized uptake values from preoperative PET-CT. The association among MTA1, LAT1, and 18FAMT PET results were analyzed. MTA1 staining was observed in 82 of 142 cancer tissues. Five-year overall survival was 69.9 % in the absence of MTA1, but 50.7% otherwise (p=0.021), while disease-free survival was 66.5% and 49.0% (p=0.071), respectively. Abnormal 18FAMT accumulation was noted in 13 patients without MTA1 and in 18 patients with MTA1 (p=0.079), with maximum standardized uptake value 1.6±1.6 and 2.7±1.6, respectively (p=0.036). MTA1 expression was positively correlated with LAT1 (p=0.013) and CD34 (p=0.034) expression, but not with Ki-67 (p=0.078). MTA1 shows promise as a diagnostic and prognostic marker in esophageal cancer, and we anticipate that the gene will also prove to be a good therapeutic target.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Q:
How can I store the value of a MySqlCommand in C# to a local variable?
I am using MySql in Windows Form Application. I want to store the result of a MySqlCommand in a string and then assign the contents of that string to a label control on my Windows form so that the label text will display the result of the MySqlCommand.
I have tried the bottom code but that doesn't work.
public void WhoLogIn()
{
_con.Open();
MySqlCommand NewCommand = new MySqlCommand("select titleandfullname from users where username ='" + Variables.whologin + "';", _con);
//Variables.whologin is a public const string in a class declared in another file.
MySqlDataReader result;
result = NewCommand.ExecuteReader();
string _nameofwhologin;
while (result.Read())
{
_nameofwhologin = result.GetString(0);
}
_nameofwhologin = label2.Text;
}
I would like help on how to achieve this using corrections to the code or the proposal of a different method altogether.
A:
Change your code to this;
public void WhoLogIn()
{
_con.Open();
MySqlCommand NewCommand = new MySqlCommand("select titleandfullname from users where username ='" + Variables.whologin + "';", _con);
//Variables.whologin is a public const string in a class declared in another file.
MySqlDataReader result;
result = NewCommand.ExecuteReader();
string _nameofwhologin;
while (result.Read())
{
_nameofwhologin = result.GetString(0);
}
label2.Text=_nameofwhologin;
}
|
{
"pile_set_name": "StackExchange"
}
|
March
March
March
In light of all the changes going on world wide with the Coronavirus, Eugene Velo is temporarily CANCELLING our Saturday ride leader format in line with Public Schools protocol, which at this point is through April 28th. This can change of course. We recommend you ride by yourself, or a small group
Camas Bakery Ride! This Saturday is one or our most popular and social rides. Relax, it is a casual day! We will head out Fern Ridge trail towards Applegate then make a social stop at Camas Swale Bakery for some coffee and fresh baked goods. Bundle up and bring layers, fenders
|
{
"pile_set_name": "Pile-CC"
}
|
Can the blockchain improve the way real estate property is bought, sold, or rented? Does it actually have a problem needing a solution? Let’s take a look at the numbers.
Today, the real estate market is thriving. The total amount of houses sold solely in the US market in 2017 is more than 5.51 million, more than in any of the previous years. At the same time, the prices of real estate also climbed higher. In 2016 you could have bought a house for $233,800 on average. In 2017 that price was $247,200, and in 2018 that price was $273,800. That’s a five percent increase every year!
The prices rise because, according to experts, the US is underbuilt. There are 1-2 million fewer units than there should be based on population growth. We take the US as an example, but there’s the same situation in every country with a growing population – China, Germany, United Kingdom, India, Canada, etc.
Big problems need solutions
If people always need houses to live in, then the industry feels great, right? But in reality, there’s a lot of medium and large-scale problems that exist only because there wasn’t any appropriate technical solution for them, and the blockchain can become such a solution. It can help with:
Too many middlemen involved in the process, high fees. The lack of transparency about the property objects. The difficulty of making property deals for inexperienced buyers.
We’ll look into each of these problems below.
While the rise of prices is good for real estate agents, who get 6% from each deal, it’s not so good for buyers and sellers who would rather prefer to eliminate these middlemen. How much would a buyer save if they don’t pay it? If an average house costs $273,800 today, then it’s $16,428 saved on agent fees; essentially a small fortune! This is where the blockchain comes in handy, allowing people to make deals directly between two parties in a trustless environment.
The transparency problem of the real estate market is the lack of info for buyers about the quality of the sold objects. Buyers often have to rely on databases of real estate agencies during the decision-making process and then decide just by looking at advertising booklets. They don’t have full information about its history, because these agencies have no incentive to share their databases with anyone, competitors or customers alike. That could also be a reason for the decline of investments in REITs (real estate investment trusts), funds that are similar to mutual funds, but specialized in real estate, which also lack transparency in their structure.
Deloitte, a global consulting firm, points out that the role of technology in the real estate market will grow over time. Its report “Blockchain for Real Estate,” states, “The transparency issue is a long-standing, pre-blockchain obstacle in the real estate sector, and remains a tough nut to crack,” but it believes that the situation will change over time, because competition will force everyone to become more transparent. Thus, the blockchain role will also increase.
The last of our three problems is the difficulty of accessing the market for inexperienced investors. Brian Wallace, CEO of NowSourcing, wrote that 71% of Americans wish that real estate investment was easier, and 66% of Americans believe flipping houses is a great way to make money. If only somebody would have given them an easy way to invest, all of this potential money could have blown into the real estate market.
There is already a bunch of attempts to create a blockchain-based real estate platform, and we’re going to mention a few here. But the main focus here is on Max Crowdfund, because it seems like a very promising project, having all the features needed for any kind of dealing with real estate – be it buying a house, trying to pick a cozy place to rent, or issuing tokenized shares for investors by real estate investment funds.
What is Max Crowdfund?
Max Crowdfund is a platform that potentially solves all real-estate-related tasks, which we can divide in two basic groups:
Managing real estate funds – Creating them and buying and selling tokens of these funds, to make indirect investments in real estate, helping gather all the necessary documents to comply with regulations.
Real estate property for retail investors – Listing, connecting buyers, and sellers, recording all necessary information about deals and the qualities of objects.
Why is the blockchain even needed for these tasks? It looks like it could be enough to use a traditional database and concentrate on marketing to make it popular. But that’s not why. The main point of using the blockchain here is to make it an immutable ledger that can’t be changed by anyone retrospectively, because in every open system with unbiased info and reviews there is always a high chance that there would be some dishonest participants who would want this unbiased information to be deleted if it doesn’t suit their agenda. Also, it makes Max Crowdfund less dependent on its development team because all activity on the platform should happen without its involvement, thus no legal troubles. We’ll see below why the blockchain, or more specifically the Ardor blockchain that Max Crowdfund is based on, is such a convenient instrument for real estate management.
Why does Max Crowdfund use Ardor instead of, for example, Ethereum? Being a Proof-of-Stake system, Ardor is able to potentially handle 100 transactions per second or maybe even more (although, this has yet to be tested), which is important for a potentially popular platform that Max Crowdfund could become, because global token trading requires high throughput, and let’s not forget that Ardor may also host other projects such as network capacity.
Real estate funds and their legal status
Let’s start with real estate funds. Why is Max Crowdfund‘s solution a big leap forward for small real estate companies? Currently, only the biggest real estate investment trusts are listed on large exchanges. It’s hard to get a REIT listed, which is why these funds often have difficulties with getting enough investments to maintain themselves. Max Crowdfund facilitates the creation of such funds, allowing any company related to real estate investments to issue its shares in the form of tokens, sell them as the shares in the mutual fund to any interested investors, or even pay dividends from the property income to its token holders. But wait, what about the regulators and the whole issue with security tokens? There’s a solution for this, too. The Ardor blockchain incorporates a complex system for issuing tokens that can be transferred only between verified accounts. Ardor allows for the creation of subchains with different security levels, for different types of investors.
There are three security levels for Max Crowdfund users.
Clearance level 1 requires providing only basic identity information, and it allows one to buy MPG tokens, the native Max Crowdfund utility token, and most importantly, list, buy, sell, or rent a property. Clearance level 2 requires full KYC verification. Level 2 users can buy shares of funds listed on the platforms according to their corresponding jurisdictions. For example, German users would be able to buy shares of German-based funds. Clearance level 3 is for qualified investors. They should send documents, confirming their status, and they can invest in any funds they want to.
These security levels allow each user to gain access to only the necessary parts of the platform. When anybody needs to list his or her property for sale on Max Crowdfund, it serves like a message board, so there’s no need for the user to provide too much information, and the whole listing process should be fast. But if another user wants to invest his or her money, he/she must provide more information because the platform must be compliant with the law, hence all the required KYC and AML procedures, but this shouldn’t be a surprise for users.
In addition, Max Crowdfund develops an automated system, a template for each jurisdiction, that could be used to register a real estate fund with less paperwork than usual.
Using such a permissioned blockchain structure is a win-win for everyone: real estate funds can get a listing on potentially liquid trading exchanges, get new investors, and they can evade problems with the laws revolving around security, because all investors should pass KYC and AML procedures. Investors get an opportunity to invest their money in a transparent fund, with all information about its property objects recorded on the blockchain. The platform acts as a middleman in some instances, such as KYC and AML procedures, but in all other cases, especially data management, it stays decentralized. In the future, Max Crowdfund has the potential to become a big database with information about every property in any country.
Retail Real Estate Management
Private property management is another function of the Max Crowdfund platform. As we already know, to list or rent a property, its users only need a Clearance 1 Level. For each listing of property for sale or for rent, a certain amount of MPG tokens should be paid out. All tokens received as a fee by the platform will be burned; this will decrease the amount of tokens in circulation and increase the value of the remaining ones.
All events regarding selling, purchasing, renting, or even listing a property are recorded on the blockchain. Every ticket created on the platform, for example, an agreement between a tenant and a manager, is also recorded on blockchain. Also any user can leave a review of a property. Good or bad, it doesn’t matter, no one will be able to remove them.
It’s worth noting that it has a serious advantage against all real estate agencies – using Max Crowdfund, users shouldn’t have to pay a high agent fee. Every seller should be interested in leaving a detailed and honest description of a property, thus facilitating the choice for the buyer or tenant.
Initial Token Offering
To fund the development of the platform, Max Crowdfund is conducting an initial token offering (ITO). It’s the same thing as an ICO, but it offers utility tokens instead of coins and it shouldn’t have any problems with the SEC or similar institutions around the world.
The ITO of Max Crowdfund is a bit innovative. All gathered funds will be used to build a real estate portfolio and then the rent of these property objects will go to funding further development of the project. That’s a pretty smart way to finance the project, because projects usually tend to run out of money, for example, many projects that were raising funds in ETH at price of $800 six months ago now suffer from heavy losses, and it’s unclear how they could fund their development after such a drop.
On the contrary, real estate investments are pretty safe and conservative, and can allow the project to keep running for many years, not being dependant on a volatile crypto market. Max Crowdfund has plans to use a part of these profits from real estate to buy back tokens on the open market and burn them, thus decreasing the amount of tokens in circulation. The same goes for the MPG tokens received by the platform as a fee for various services.
Competitors
A decentralized regulatory-compliant platform, which facilitates any kind of deals with real estate, has the necessary tools for funds and investors and can attract many serious companies and people when it goes live. But what about its competitors? The real estate market is a big one, and many developers would like to get a piece of this pie.
The Propy project has conducted its ICO in 2017. It proposed a registry for property purchases, that mirrors the official Land Registry, as well as a web-based listing platform. Their goal is to propose a blockchain-based property registration, with no need for additional paperwork. So far, there’s a slight use of smart contracts to send/receive documents, but most work is done the old-fashioned way: somebody has to deliver the documents to the official recording office, and it currently only works in US.
Leaseum Partners is a real estate investment firm from UK that has plans to raise $250 million during its closed ITO and buy commercial real estate in New York; about 6-10 buildings to be specific. Their tokens are considered as a security, thus mandating that they can only be sold to accredited investors. All Leaseum Partners investors will receive dividends from the acquired property. This project isn’t very complex: it’s an investment fund, and the tokens issued here only represent a share of ownership.
Harbor is a blockchain startup from San Francisco that is backed by the famous investor Peter Thiel. Currently, it has raised more than $40 million. Harbor develops its own token standard, R-Token (the Regulated Token), based on the ERC-20 protocol, but it has enforced regulatory compliance at the token level. This allows one to tokenize real estate and trade it freely, but it restricts trading to only verified accounts. However, it doesn’t have any marketplace to look for information about this traded real estate, being more about the process of tokenization and trading, not about choice and transparency.
Conclusion
Can we call Max Crowdfund a solid project with a serious potential? Definitely. It features everything related to real estate, from creating investment funds that can facilitate access to real estate investment for small retail investors, to the ordinary renting and selling of property. It could have a chance to be considered legal because its developers care about the most tricky part with blockchain investments – regulatory compliance.
Many of its competitors offer the same features, but none of them offer it all-in-one. Moreover, the project is being developed by two companies who have many properties under management, operate property funds and rent out property portfolios. As such they can transfer their own assets and funds first to ensure all is working like it should prior to opening the platform for third parties. If they can attract regular users who would fill its property objects database even further it would give it huge potential value and it could get some share of a market, and that would be pretty solid.
All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice.
|
{
"pile_set_name": "OpenWebText2"
}
|
You don't think it's possible that the local FBI just really wants to get into that phone?
I think they absolutely do want into the phone and it might even be sufficient reason to permanently cripple security on all smart phones. In the NPR stories last week they talked with a prosecutor in NY that said that had about 150 apple phones they would like to get into. They could have brought any of those cases forward as the precedent setting case but didn't.
|
{
"pile_set_name": "Pile-CC"
}
|
England’s women made history early Sunday morning by reaching the World Cup semi-final for the first time. This will help build the momentum of women’s football as one of the fastest growing sports in the country. This growth is often associated with the introduction of The Football Association (FA) Women’s Super League in April 2011 and the popularity of women’s football at the London 2012 Olympics.
England’s success is arguably now another landmark event in the cultural narrative surrounding women’s football, with the potential to raise awareness and inspiring participation. But its potential significance also raises a critical question – what might this success mean in tackling some of the pervasive gender inequalities still evident in the game?
It is now a decade since Sepp Blatter suggested that female players should wear “tighter shorts”. Undoubtedly there have been gains since then: endorsements, players earning around £60,000, growth in participation and the development of the FA Women’s Super League. The BBC is broadcasting every World Cup game live and switched coverage of England’s quarter-final match against Canada from BBC3 to BBC1 after successful viewing ratings. Yet it is clear that the discriminatory treatment of women in football is still pervasive.
England’s success has mobilised people to campaign through social media for greater parity. The power of social media in sport has been more formally recognised at mega-events since the Olympic and Paralympic games, where growing networks of unaccredited media participants reported on aspects of the games that might otherwise not be covered – stories of displacement, local art, disengagement, human rights.
Even a cursory look on Twitter just hours after England’s quarter-final success reveals a dialogue between fans, friends and organisations that might invigorate critique. While this might not be an explosion of activism on the scale of, say, the global occupy movement, it may still generate social change, confronting the dominant ways of thinking about gender, femininity and football.
Scholars have long noted the potential for public spaces to be used to mobilise change. While some of the comments on Twitter may only reinforce gendered norms, we can all learn from the intersections between popular culture and citizen activism.
Bloggers, tweeters and other social media users are acting as self-designated media who want to share content, concerns and comments in the spirit of addressing gender inequality in the sport. Social media provides an opportunity for people to learn about gender discrimination and structural sexism outside of formal institutions, and to think differently about what is “normal”.
Another example includes complaints on Twitter about an article published in the Daily Mirror – Watching lionesses is such a roar deal – where the reporter wrote: “The world cup has shown that women’s football really isn’t that good – a woman’s place is not on a foreign field playing second rate football”.
It remains to be seen post-World Cup 2015 what kind of new media legacy will emerge and what kind of stories will be told that highlight and challenge pervasive inequalities.
It’s worth noting that social media can lead to interactions where one’s own views and experiences are simply validated. As others argue, public protests and attempts to mobilise can always be lost in the constant news stream.
What is clear at this juncture is that these are complex processes between online and offline media. How governing bodies, such as FIFA, and the mainstream media interact with citizen media will play crucial role in any cultural change. Social media can’t change things on its own.
|
{
"pile_set_name": "Pile-CC"
}
|
Rajesh Verma (cricketer)
Rajesh Verma (born 11 December 1981) is an Indian first-class cricketer who plays for Mumbai. He made his first-class debut for Mumbai in 2002.
References
External links
Category:1981 births
Category:Living people
Category:Mumbai cricketers
|
{
"pile_set_name": "Wikipedia (en)"
}
|
The present invention relates to a method for fabricating a semiconductor memory component that includes a barrier layer that insulates the lower electrode of a storage capacitor from a silicon substrate. The method includes steps of: applying a barrier layer; patterning the barrier layer prior to applying a storage capacitor with a hard mask; and removing the hard mask that remains after the patterning so as to uncover the patterned barrier layer. A method including these steps is known, for example, from U.S. Pat. Nos. 5,464,786, 5,506,166, and U.S. Pat. No. 5,581,436.
Furthermore, it is known from International Publication WO 99/27581 to provide an insulation layer, with a contact plug inside it, on a substrate. A dielectric with a recess is formed on the insulation layer, and a barrier layer is provided on this structure as a diffusion barrier. Then, a lower electrode layer, a dielectric layer and an upper electrode layer for a storage capacitor are deposited. Next, a buffer layer, which covers the structure and fills up the remaining recess, is deposited. Finally, in a chemical mechanical planarization step, the buffer layer is eroded down to the barrier layer, and then the barrier layer which has been uncovered at the surface is removed.
The corresponding semiconductor memory components include at least one storage capacitor having a storage medium that includes a ferroelectric thin film or a thin film with a high dielectric constant. When using storage media of this type, annealing processes at high temperatures are required, characteristically of the order of magnitude of 800xc2x0 C., in an oxidizing environment including, in particular, a process gas of oxygen. Material diffusion processes, for example, through partial oxidation of polysilicon plugs, which are used to make contact with the silicon substrate, must be avoided, since they may impair the semiconductor memory component or even cause it to fail.
To prevent material diffusion processes, diffusion barriers or sandwiches of such barriers in combination with adhesion layers, for example, consisting of Ir, IrO2, IrO, are used. In the text which follows, these structures are referred to overall as barriers or barrier layer. These barriers are arranged between the storage capacitor and the silicon substrate. For example, the lower electrode, known as the bottom electrode of the storage capacitor, which typically consists of Pt, Ru, RuO2, is applied to the barrier layer. To ensure optimum adhesion of the lower electrode to the barrier, the barrier layer must have a planar contact face which is as large as possible. Moreover, the lowest possible contact resistance is required, especially as electrode thin films usually exhibit poor adhesion to the silicon substrate.
The barrier layers can only be patterned with difficulty in the plasma, since they form insufficient or nonvolatile compounds in the process chemistry used to transfer the pattern. Therefore, the patterning has hitherto preferably been carried out by physical sputtering removal of the layers. Consequently, low selectivities with respect to mask materials are achieved during the transfer of the pattern. Moreover, in the case of a barrier layer made from IrO2, the oxygen which is released additionally contributes to the removal of the resist. Moreover, the transfer of the pattern leads to a significant change in the CD (critical dimension) and/or to beveled profiles. These beveled profiles are caused by the resist being drawn back in the lateral direction, or by the accumulation of redepositions on the side walls of the pattern that is produced, or from a combination of the two. The redepositions can only be removed with difficulty, if at all.
Moreover, in combination with the application of storage capacitors to a silicon substrate, it is known to use a dielectric hard mask which consists, for example, of SiO2, SiN or SION. Since in principle these mask layers are more difficult to erode, higher selectivities can be achieved during a process that uses these mask layers. However, because of the mask beveling that occurs during physical sputtering erosion in the plasma patterning process, the thickness of the mask layer has to be selected to be greater than the thickness which would be required purely through the selectivity, in order to prevent the bevel from being transferred into the layer which is to be patterned. The removal of the mask that remains after the pattern has been transferred, in a plasma etching process, leads to an additional increase in the desired topography of at least the thickness of the mask layer which is to be removed.
Patterning processes of this type are known, for example, from U.S. Pat. Nos. 5,464,786, 5,506,166, 5,581,436. Wet processes for the subsequent erosion of the mask layer are fundamentally unsuitable, on account of the associated additional isotropic undercut etching of the patterns.
It is accordingly an object of the invention to provide a method for fabricating a semiconductor memory component that includes a barrier layer that insulates the lower electrode of a storage capacitor from a silicon substrate, which overcomes the above-mentioned disadvantages of the methods of this general type.
In particular, it is an object of the present invention to provide a method of the type described in the introduction which ensures an optimally large surface area or adhesion surface for the barrier layer with respect to the lower electrode of the storage capacitor.
With the foregoing and other objects in view there is provided, in accordance with the invention, a method for fabricating a semiconductor memory component having a silicon substrate. The method includes steps of: configuring a barrier layer on a silicon substrate; patterning the barrier layer using a hard mask to obtain a patterned barrier layer prior to configuring a storage capacitor on the substrate; embedding the patterned barrier layer and the hard mask that remains above the patterned barrier layer in an embedding layer; performing a chemical mechanical polishing step to remove the hard mask that remains above the patterned barrier layer, to remove the embedding layer that is above the patterned barrier layer, and to thereby uncover the patterned barrier layer; configuring the storage capacitor on the substrate such that a lower electrode of the storage capacitor is insulated from the silicon substrate by the barrier layer; and constructing the storage capacitor with an upper electrode and with a dielectric layer that is located between the lower electrode and the upper electrode.
In accordance with an added feature of the invention, the chemical mechanical polishing step is stopped at the surface of the barrier layer.
In accordance with an additional feature of the invention, the semiconductor memory component is used in a DRAM or a FeRAM.
In accordance with another feature of the invention, a ferroelectric material is used for the dielectric layer.
In accordance with a further feature of the invention, the barrier layer is designed as either a diffusion barrier or a diffusion barrier sandwich in combination with adhesion layers.
In accordance with a further added feature of the invention, the adhesion layers are made from Ir, IrO2, or IrO.
In accordance with a further additional feature of the invention, the hard mask is made from SiO2, SiN, or SiON.
In accordance with yet an added feature of the invention, the embedding layer is made from SiO2 by chemical vapor deposition.
In accordance with a concomitant feature of the invention, the method includes steps of: providing an insulation layer on the substrate; providing a contact plug in the insulation layer;
and providing the barrier layer on the insulation layer as a diffusion barrier.
In other words, the invention provides for the patterned barrier layer, together with the mask layer remaining on it, to be completely embedded in SiO2 using a CVD (chemical vapor deposition) process. This is followed by a CMP (chemical mechanical polishing) process, in which the polishing is advantageously stopped at the contact face of the barrier layer. These process steps ensure that the surface areas or contact faces (also known as the critical dimensions (CD)) of the barrier layers undergo a minimal change by producing perpendicular side walls on account of a hard mask that is used for transferring the pattern. Optimum adhesion of the storage capacitor, with low contact resistance, is ensured by means of the resulting large-area, planar contact face for the lower electrode that will be applied thereto, without producing an additional topography, because of the inventive combination of CVD-SiO2 and SiO2-CMP.
A further advantage of the inventive method is that the uncovered contact face of the barrier layer is embedded in a surrounding SiO2 layer, with the exception of its surface or contact face. A structure of this type with a buried barrier layer results from the inventive procedure using CVD-SiO2 and SiO2-CMP.
Other features which are considered as characteristic for the invention are set forth in the appended claims.
Although the invention is illustrated and described herein as embodied in a method for fabricating a semiconductor memory component, it is nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made therein without departing from the spirit of the invention and within the scope and range of equivalents of the claims.
The construction and method of operation of the invention, however, together with additional objects and advantages thereof will be best understood from the following description of specific embodiments when read in connection with the accompanying drawings.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
1. Introduction {#sec1}
===============
Sleep disturbances are one of the major nonmotor symptoms in Parkinson\'s disease (PD) that affect a significant number of patients and result in an impaired quality of life. These disturbances can occur in the early or even in the premotor phase of PD but are often underrecognized by patients and physicians. The evaluation of sleep disturbances in PD is complicated by complex, overlapping nocturnal problems including nocturnal motor and nonmotor problems in addition to disease-related alterations of the sleep/wake cycle \[[@B1]\]. Restless legs syndrome (RLS), which is characterized by an urge to move the legs accompanied by abnormal leg sensations, can coexist with PD. A significant number of PD patients who suffer from RLS exhibit delayed sleep onset \[[@B2]\], and PD patients with RLS are reported to have more severe sleep problems than PD patients without RLS \[[@B3]\]. The prevalence of RLS in the general adult population in Europe and the USA is approximately 7--10% \[[@B4]\], while the prevalence in Asia is reported to be 1--4% \[[@B5]--[@B7]\]. The marked clinical response to dopaminergic agents in RLS, together with the results of a lesioning study that examined the effects of 6-hydroxydopamine injections into A11 dopaminergic neurons in rats \[[@B8]\], suggests that central dopaminergic dysfunction plays a role in RLS; however, pathological evidence supporting dopaminergic involvement in the brain is lacking. In contrast, PD is characterized by resting tremor, bradykinesia, rigidity, and postural instability, which are caused by dopaminergic cell loss in the substantia nigra, as demonstrated by both pathological and imaging studies. Dopaminergic treatment improves motor symptoms and several aspects of nonmotor symptoms in PD patients. Several studies have reported that the prevalence of RLS is more frequent in PD patients compared with control subjects, suggesting a significant link between the two disorders. However, the reported prevalence of RLS in PD patients ranges from 0 to 50%, depending on the study \[[@B9]\]. Importantly, no valid RLS criteria exist for PD patients, and whether RLS criteria for the general population are also suitable for PD patients has yet to be determined \[[@B10]\]. Immobility due to parkinsonism may augment subtle RLS symptoms, and several motor and sensory symptoms related to PD are difficult to distinguish from RLS. When dopaminergic therapy effectively treats RLS symptoms in PD patients, the prevalence of RLS in PD may be underestimated; thus, RLS in PD may be confounded by chronic dopaminergic treatment. Despite the similarly positive response to dopaminergic treatment and presumed central dopaminergic dysfunction in both RLS and PD, different mechanisms are suggested to be involved in the pathogenesis of PD and RLS. A recent study reported that leg motor restlessness (LMR) that does not fulfill the diagnostic criteria for RLS is more prevalent in de novo patients with PD than in healthy controls \[[@B11]\]. To examine the occurrence and characteristics of RLS and LMR in PD and understand the relationships between these disorders, we have conducted a literature search for articles published between January 1984 and November 2014 using MEDLINE and the terms "Parkinson\'s disease," "restless legs syndrome," and "leg motor restlessness." Among the 364 articles identified, we included articles based on their relevance to RLS and LMR in PD and important papers cited within these articles. In this paper, we provide an overview of RLS, LMR, and PD and of the relationships among these disorders.
2. The Diagnosis of RLS {#sec2}
=======================
In accordance with the NIH/IRLSSG diagnostic criteria published in 2003, RLS is diagnosed when four essential symptoms are present: (1) an urge to move the legs is usually accompanied or caused by uncomfortable and unpleasant sensations in the legs; (2) the urge to move or the unpleasant sensations begin or worsen during periods of rest or inactivity, such as when lying down or sitting; (3) the urge to move or the unpleasant sensations are partially or totally relieved by movement, such as walking or stretching, as long as the activity continues; and (4) the urge to move or the unpleasant sensations are worse in the evening or at night than during the day or only occur in the evening or at night \[[@B12]\]. Additional findings that support the diagnosis of RLS include a positive family history of RLS, a positive therapeutic response to dopaminergic drugs, and the presence of periodic limb movements during wakefulness or sleep. The revised IRLSSG criteria for RLS, published in 2014, added a fifth diagnostic criterion that the occurrence of the four other diagnostic criteria cannot be accounted for solely by symptoms of another medical or behavioral condition as shown below \[[@B4]\]. This additional criterion increases the specificity of the RLS diagnostic criteria. In addition, hyperarousal producing poor sleep without daytime sleepiness has been described in patients with RLS, which is reflected in the fourth finding that supports the diagnosis of RLS, a "lack of profound daytime sleepiness" as shown below.
*IRLSSG Consensus Diagnostic Criteria for Restless Legs Syndrome/Willis-Ekbom Disease (RLS/WED) \[[@B4]\]*
*(a) Essential Diagnostic Criteria (All Must Be Met)*. Consider the following:An urge to move the legs is usually but not always accompanied by, or felt to be caused by, uncomfortable and unpleasant sensations in the legs.The urge to move the legs and any accompanying unpleasant sensations begin or worsen during periods of rest or inactivity such as lying down or sitting.The urge to move the legs and any accompanying unpleasant sensations are partially or totally relieved by movement, such as walking or stretching, at least as long as the activity continues.The urge to move the legs and any accompanying unpleasant sensations during rest or inactivity only occur or are worse in the evening or night than during the day.The occurrence of the above features is not solely accounted for as symptoms primary to another medical or a behavioral condition (e.g., myalgia, venous stasis, leg edema, arthritis, leg cramps, positional discomfort, and habitual foot tapping).
*(b) Supporting Features*. Consider the following:Periodic limb movements (PLM): presence of periodic leg movements in sleep (PLMS) or resting wake (PLMW) at rates or intensity greater than expected for age or medical/medication status.Dopaminergic treatment response: reduction in symptoms at least initially with dopaminergic treatment.Family history of RLS/WED among first-degree relatives.Lack of profound daytime sleepiness.
RLS patients usually complain of fatigue, reduced concentration, and depressive symptoms, which are suggestive of the consequences of sleep deprivation; however, excessive daytime sleepiness is uncommon in RLS patients, except in patients with severe RLS symptoms \[[@B4]\]. In contrast, according to the international classification of sleep disorders (third edition) \[[@B13]\], for RLS to be diagnosed, patient\'s symptoms must affect his or her sleep and daytime functioning, although this criterion may be omitted for certain research applications, such as genetic or epidemiological studies.
3. The Pathophysiology of RLS {#sec3}
=============================
The pathophysiology of idiopathic RLS remains unclear; however, the following mechanisms have been postulated: dopamine-related mechanisms, including reductions in striatal D2 receptor levels; iron-related mechanisms, including reductions in iron and ferritin levels in the cerebrospinal fluid (CSF) and genetic factors associated with altered brain iron levels; and altered microvascular flow in the legs \[[@B14]--[@B16]\]. Importantly, iron is a cofactor of tyrosine hydroxylase, which is the rate-limiting enzyme in the synthesis of dopamine. A study measuring oxygen and carbon dioxide partial pressures in the legs showed that peripheral hypoxia is associated with the appearance of RLS symptoms \[[@B15]\]. RLS patients exhibit lower ferritin levels and higher transferrin levels in the cerebrospinal fluid compared with control subjects; however, in the serum, ferritin and transferrin levels do not differ between RLS patients and control subjects \[[@B17]\]. Mizuno et al. \[[@B18]\] also reported no difference in serum iron, ferritin, and transferrin levels between RLS and non-RLS patients (psychological insomnia without RLS); in contrast, CSF iron and ferritin levels were significantly reduced and CSF transferrin levels were significantly increased in the RLS group compared with the non-RLS group. However, a weaker correlation between the CSF and serum ferritin levels in the RLS group suggests that impaired iron transport from the blood to the central nervous system may contribute to low brain iron concentrations in idiopathic RLS. These studies support the hypothesis that reduced CSF ferritin levels play a role in RLS. In addition, the endogenous opiate system may be involved in the pathogenesis of RLS, considering its ability to affect RLS symptoms \[[@B19]\].
According to a hypothesis about the pathogenesis of RLS reviewed by Clemens et al. \[[@B20]\], the hypothalamic dopaminergic A11 cell group projects to the neocortex, the serotonergic dorsal raphe nucleus, and the spinal cord, most strongly to the sensory dorsal horn and the intermediolateral nucleus of the spinal cord. The A11 nucleus exerts inhibitory controls in these areas; thus, dysfunction of the A11 nucleus or of these pathways is thought to lead to an increased sympathetic drive and the occurrence of abnormal sensations, focal akathisia, and muscle restlessness, contributing to the emergence of RLS. However, Earley et al. \[[@B21]\] investigated the A11 cell bodies in 6 RLS and 6 aged-matched control autopsy cases and found no dramatic cell loss or neurodegenerative process in the A11 hypothalamic region of patients with RLS. In the 4 autopsy cases of RLS, Lewy bodies were not found, and immunohistochemistry did not reveal accumulations of alpha-synuclein \[[@B22]\]. Connor et al. \[[@B23]\] reported that, in RLS autopsy cases, decreases in D2 receptor levels that correlated with RLS severity were observed in the putamen, and increased tyrosine hydroxylase levels were found in the substantia nigra but not in the putamen compared with controls. The authors suggested that their results were consistent with the finding that dopaminergic systems are activated in an animal model of iron insufficiency.
A study by Allen et al. \[[@B24]\] using proton magnetic resonance spectroscopy has demonstrated that increased glutamatergic activity is associated with the arousal sleep disturbance in RLS. This nondopaminergic abnormality may be responsible for sleep disruption in RLS patients and the observation that RLS patients rarely exhibit excessive daytime sleepiness despite sleep loss.
4. Imaging in RLS {#sec4}
=================
Allen et al. \[[@B25]\] assessed brain iron concentrations in 5 RLS and 5 control patients using a specific MRI measurement and showed that the iron content in the red nuclei and the substantia nigra was decreased in the RLS patients compared with the controls. The study, conducted using 3.0-Tesla MRI and T2 relaxometry, found that the iron index in the substantia nigra was lower in patients with late-onset RLS (onset age ≥45 years) than in controls, whereas no difference in the iron index was found between the controls and patients with early-onset RLS (onset age \<45 years) \[[@B77]\]. Brain imaging studies evaluating dopaminergic dysfunction in RLS patients have yielded inconclusive results. SPECT/PET studies have shown that nigrostriatal functions and ligand binding to the striatal dopamine transporters (DAT) and D2 receptors are normal in RLS \[[@B26]--[@B29]\], while other studies have found a reduced ability of D2 receptors to bind ligands and reduced 18F-dopa uptake in the striatum and putamen \[[@B30], [@B31]\]. In a study that examined real-time DAT binding potentials, RLS patients exhibited decreased DAT binding in the striatum in both day and night scans, suggesting that membrane-bound striatal DAT but not total cellular DAT is decreased in RLS \[[@B32]\]. Reduced echogenicity in the substantia nigra has been reported in RLS patients compared with healthy controls and PD patients \[[@B33], [@B34]\].
5. RLS and PD {#sec5}
=============
In view of the marked response to dopaminergic treatment in both RLS and PD, the relationship between PD and RLS has been investigated previously. Although the prevalence of RLS among PD patients varies widely (0--50%), depending on the study \[[@B9]\], several studies have found an increased prevalence of RLS in PD patients compared with controls. Several conditions observed in PD, including sensory symptoms, pain, motor restlessness, akathisia, and the wearing-off phenomenon, should be differentiated from RLS. However, no specific diagnostic criteria for RLS exist for PD patients; thus, an immobilization test to diagnose RLS in PD patients may be useful \[[@B35]\].
Similarities between PD and RLS include a marked response to dopaminergic agents, aggravation by dopaminergic antagonists, and an association with periodic limb movements in sleep. The differences between the two conditions include normal presynaptic nigrostriatal dopaminergic function, as shown by neuroimaging, and no neuronal loss in the substantia nigra in idiopathic RLS patients, whereas in PD, substantial neuronal loss in the substantia nigra and abnormal neuroimaging findings in the nigrostriatal dopaminergic system have been demonstrated \[[@B36]\]. In subjects with both PD and RLS, a significantly increased area of echogenicity in the substantia nigra was found compared with the controls and subjects with idiopathic RLS, suggesting the existence of different mechanisms for regulating brain iron in the idiopathic RLS patients and PD patients with RLS \[[@B37], [@B38]\]. In addition, significant increased echogenicity was detected in the substantia nigra in PD patients compared with the controls and the idiopathic RLS patients, but no significant difference in substantia nigra echogenicity was found between PD patients with RLS and PD patients without RLS \[[@B37], [@B38]\].
[Table 1](#tab1){ref-type="table"} summarizes the prevalence and relevant features of RLS and leg motor restlessness (LMR) in PD patients \[[@B2], [@B3], [@B11], [@B55]--[@B54]\]. RLS symptoms appear to be milder in PD-RLS patients compared with idiopathic RLS patients \[[@B55]\]; among 20 PD patients with RLS, only 3 patients requested treatment for RLS \[[@B3]\]. The risk factors for RLS in PD patients vary and include insomnia, depressive symptoms, cognitive impairment, longer disease duration, a higher dose of dopaminergic treatment, younger/older age, younger-onset PD, older-onset RLS, and severe or mild severity of PD, depending on the study (see [Table 1](#tab1){ref-type="table"}). A family history of RLS appears less frequently in PD patients with RLS than in idiopathic RLS patients \[[@B55]\]. A recent study found correlations between RLS and both nocturnal/supine hypertension and blood pressure fluctuations in newly diagnosed PD patients, suggesting the existence of cardiovascular and autonomic impairments in PD patients with RLS \[[@B45]\]. Peralta et al. \[[@B46]\] reported that, in 113 PD patients, comorbid RLS was associated with younger PD age of onset, lower levodopa-equivalent doses, and the presence of the wearing-off phenomenon (61%). Verbaan et al. \[[@B56]\] found that 11% of 269 patients with PD had RLS, and no differences were observed between the PD with RLS and PD without RLS groups, with the exception of female predominance in the PD with RLS group. The authors speculated that dopaminergic treatment may have led to the RLS prevalence being underestimated in PD patients. In contrast, Lee et al. \[[@B43]\] found that a longer duration of dopaminergic treatment was the most significant factor related to the presence of RLS in PD patients. RLS severity, as rated by the International RLS Scale, was significantly improved following subthalamic nucleus deep brain stimulation (STN-DBS) together with a reduction in dopaminergic treatment in PD patients \[[@B57]\], suggesting that dopamine receptor overstimulation may result in the emergence of RLS in PD. In RLS patients, prolonged dopaminergic treatment may result in augmentation, characterized by an overall increase in the severity of RLS symptoms \[[@B58]\] in which overstimulation of the D1 dopamine receptors compared with the D2 receptors in the spinal cord by dopaminergic treatment has been proposed as the mechanism \[[@B59]\]. In contrast, in PD patients, chronic dopaminergic treatment is associated with the development of dyskinesia and motor fluctuation rather than RLS, which are not observed in RLS patients. Chronic dopaminergic treatment may lead to augmentation of previously unrecognized RLS in PD patients; however, this hypothesis should be confirmed by additional studies comparing the incidence of RLS between untreated PD patients and treated PD patients prospectively.
A questionnaire-based study of a large sample of PD patients (*n* = 661) showed that the presence of RLS, nightmares, hallucinations, and sleep talking was associated with probable REM sleep behavior disorder (RBD), as defined by an RBD screening questionnaire score ≥ 6 \[[@B60]\]. A negative impact of RLS on the quality of life in PD patients has been described previously \[[@B54]\].
The effect of STN-DBS on RLS symptoms in patients with PD is difficult to interpret, considering the reductions in dopaminergic medication dosages following surgery. Kedia et al. \[[@B61]\] found that 5.6% of 195 PD patients who underwent STN-DBS experienced new, problematic RLS. The total daily amounts of PD medications were reduced after DBS by a mean of 74%, suggesting that reductions in PD medication doses may have unmasked RLS in DBS-treated PD patients. In contrast, 6 advanced-stage PD patients with RLS reported significant improvements in RLS symptoms following STN-DBS surgery, despite a mean 56% decrease in the levodopa-equivalent dose postoperatively \[[@B62]\]. Similarly, Chahine et al. \[[@B57]\] observed that STN-DBS ameliorated not only RLS symptoms but also other symptoms, such as daytime sleepiness and sleep quality.
Thus, an assessment of RLS in drug-naïve patients with PD may provide a more accurate understanding of the associations between PD and RLS. Two case-controlled studies showed that the prevalence of RLS in drug-naïve PD patients was not significantly greater than that observed in healthy controls \[[@B11], [@B39]\]. In two other studies that investigated drug-naïve PD patients, the prevalence of RLS was found to be approximately 16%, which appears to be greater than the prevalence of RLS in the general population in Asia \[[@B48], [@B45]\]; however, those studies did not include control subjects. Further studies including large samples of drug-naïve PD patients are required to determine whether a significant association between RLS and PD exists.
Most studies have suggested that the onset of RLS follows the onset of PD (70--95%) \[[@B3], [@B55], [@B46]\], and, unlike the situation with RBD, there is insufficient evidence to suggest that RLS is a risk factor for the subsequent emergence of PD. Wong et al. \[[@B63]\] have reported the development of PD in men following severe RLS symptoms (\>15 times/month). However, this study suggests that severe RLS symptoms may represent an early feature of PD rather than a risk of developing PD.
Rios Romenets et al. \[[@B64]\] investigated the relationship between RLS and the use of domperidone, a peripheral dopamine blocker that does not cross the blood-brain barrier, and found that RLS was more prevalent in PD patients taking domperidone than in PD patients not taking domperidone (48% versus 21%). The authors speculated that dopaminergic receptors located outside the blood-brain barrier or circumventricular organs in the brain may be involved in the pathogenesis of RLS in PD.
6. RLS Mimics in PD {#sec6}
===================
In PD, the following conditions can mimic RLS: akathisia, the wearing-off phenomenon, pain, dystonia, inner tremor, and sensorimotor symptoms related to PD, as well as restlessness in the legs, such as LMR \[[@B36], [@B59], [@B65], [@B66]\]. It is important to distinguish wearing-off-related restlessness from RLS. Akathisia, a condition typically associated with exposure to neuroleptic medications, is characterized by inner restlessness affecting the whole body rather than only the legs and does not vary diurnally. Patients with akathisia feel compelled to move because of inner restlessness rather than an "urge to move the legs" \[[@B67]\]. In 100 patients with PD, 68% experienced a need to move and an inability to remain still due to parkinsonism and sensory complaints, and 26 patients experienced a state of true akathisia \[[@B68]\]. Another study found that among 56 consecutive PD patients, 45% exhibited akathisia, and the presence of akathisia in these patients was associated with disease severity and the age of onset of PD \[[@B69]\]. Akathisia can be observed in untreated PD patients, but it is usually associated with the initiation of treatment with PD medications and is more common in treated PD patients \[[@B66]\]. Importantly, in PD patients, the overlap between RLS, wearing-off-related lower limb discomfort and restlessness, and akathisia may complicate the clinical assessment and diagnosis of true RLS \[[@B66]\].
7. Variants of RLS in PD {#sec7}
========================
In RLS, body parts other than the legs, such as the arms and trunk, may also be involved \[[@B12]\]. With the exception of severe RLS cases, the involvement of these regions as an isolated or initial sign is rare. Patients with restlessness in body parts other than the legs, including the arms \[[@B70]\], bladder \[[@B71]\], chest \[[@B72]\], back \[[@B73]\], abdomen \[[@B74]\], and genital regions \[[@B75]\], with or without restlessness in the legs, have been described. We reported an 82-year-old man with PD who presented with an abnormal sensation limited to his "lower back" \[[@B73]\]. The patient complained of an urge to move his lower back, and symptoms occurred in the evening and while at rest. His symptoms completely resolved following the administration of a low-dose dopamine agonist at bedtime, and he had no motor fluctuation or dystonia, suggesting that the patient had a variant of RLS, "restless lower back." Aquino et al. \[[@B75]\] described a 65-year-old woman with PD who had disabling discomfort in her pelvis and genital region. The symptoms occurred only during the evening and at night and were triggered by sitting down or lying down, resulting in insomnia. A low-dose dopamine agonist markedly improved her genital symptoms. However, whether restlessness occurring in body parts other than the legs is truly associated with RLS remains to be determined. In view of the dramatic response to dopaminergic medication at bedtime in these patients, recognition and awareness of restlessness in body parts other than the legs are clinically important.
8. Leg Motor Restlessness in PD {#sec8}
===============================
Interestingly, Gjerstad et al. \[[@B11]\] found an increased rate of leg motor restlessness (LMR) that did not fulfill RLS diagnostic criteria in drug-naïve PD patients compared with age-matched healthy controls; however, the prevalence of RLS did not significantly differ between the PD patients and healthy controls. LMR was defined as an urge to move the legs that did not fulfill the 4 essential criteria for RLS. These authors concluded that LMR but not RLS occurs with a nearly 3-fold higher risk in early PD compared with the controls. As shown in [Table 1](#tab1){ref-type="table"}, the prevalence of LMR is higher in PD patients than in controls. In our study, we found a significantly higher prevalence of nocturnal restlessness, as measured by the scores on subitems 4 and 5 of the PD sleep scale-2, in PD patients compared with controls, but the prevalence of RLS did not significantly differ between PD patients and controls \[[@B49]\]. When nocturnal restlessness was defined by a sum of the scores for items 4 and 5 equal to or greater than 2, the prevalence of nocturnal restlessness was found to be 32.3% and 14.0% in PD patients and controls, respectively (see [Table 1](#tab1){ref-type="table"}). Nocturnal restlessness was associated with the PDSS-2 total score but not with disease severity, motor function, motor complications, dopaminergic treatment duration, or total levodopa-equivalent dose, suggesting that nocturnal restlessness may be related to endogenous dopamine deficits at nighttime rather than medication-related motor complications. However, it should be noted that nocturnal restlessness as measured by the PDSS-2 may be a reflection of LMR, but a high score response (very often) on items relevant to LMR may reflect symptoms unrelated to LMR. Rajabally and Martey \[[@B53]\] investigated the relationship between LMR, RLS, and neuropathy as evaluated using a validated neuropathy scale in PD patients. They found that although neuropathy was more prevalent in PD patients compared with controls (37.8% versus 8.1%), neuropathy was not associated with RLS or LMR in PD patients and that LMR but not RLS was associated with earlier age at PD onset. Whether LMR in PD eventually develops into true RLS is still unclear \[[@B10]\]. Although the frequency of excessive daytime sleepiness did not significantly differ between PD patients with LMR and PD patients without restlessness, an increased frequency of insomnia and reduced total sleep times were observed in PD patients with LMR compared with PD patients without restlessness \[[@B47]\]. It is important to recognize LMR in patients at early stages of PD. RLS and PD can coexist whether or not they share a common pathophysiology \[[@B59]\], and LMR may be a PD-related sensorimotor symptom.
9. Treatment of RLS and LMR in PD {#sec9}
=================================
If serum ferritin levels are below 50 *μ*g/L, treatment should begin with an iron supplement. Subsequently, adding a long-acting dopamine agonist before bedtime should be considered. For PD patients already taking long-acting dopamine agonists, alpha-2-delta ligands (i.e., gabapentin, pregabalin, and gabapentin enacarbil) or clonazepam may be added \[[@B76]\]. For LMR, if the condition represents dopamine deficiency, a long-acting dopamine agonist should be administered first; however, established data for the treatment of LMR in PD patients are currently lacking.
10. Conclusion {#sec10}
==============
We reviewed the literature on RLS and LMR in patients with PD. Longitudinal studies assessing the prevalences of RLS and LMR and their impacts in PD are imperative to clarify the true relationships among PD, LMR, and RLS. In addition, prospective studies comparing the incidence of RLS and LMR between untreated PD patients and treated PD patients are necessary to understand the effects of chronic dopaminergic treatment on LMR and RLS.
Conflict of Interests
=====================
The authors declare that there is no conflict of interests regarding the publication of this paper.
######
RLS and leg motor restlessness (LMR) in PD.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Author Year Country PD/control RLS (%)\ LMR (%) PD/control Characteristics of PD/RLS
PD/control
------------------------------------ ------ ------------- ------------ ------------ -------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Ondo et al. \[[@B55]\] 2002 USA 303/--- 20.8/--- --- Lower serum ferritin levels. Older age at RLS onset, less frequent family history
Tan et al. \[[@B50]\] 2002 Singapore 125/--- 0/--- 15.2/--- 1 (0.8%) had RLS-like symptoms correlated with wearing off
Krishnan et al. \[[@B2]\] 2003 India 126/128 7.9/0.8 --- Older, higher rate of depression
Braga-Neto et al. \[[@B41]\] 2004 Brazil 86/--- 49.9/--- --- Longer disease duration of PD
Calzetti et al. \[[@B78]\] 2009 Italy 118/110 12.7/6.3 --- Absence of a comorbid association between RLS and PD
Nomura et al. \[[@B3]\] 2006 Japan 165/131 12.1/2.3 --- Insomnia (PSQI), younger age
Gómez-Esteban et al. \[[@B42]\] 2007 Spain 114/--- 21.9/--- Sleep disturbance (PDSS)
Loo and Tan \[[@B44]\] 2008 Singapore 200/200 3.0/0.5 --- Slightly younger age
Lee et al. \[[@B43]\] 2009 Korea 447/--- 16.3/--- --- Longer disease duration and dopaminergic treatment, more severe disability, and cognitive decline
Peralta et al. \[[@B46]\] 2009 Austria 113/--- 24.8/--- --- Younger, earlier onset of PD, lower levodopa-equivalent dosages, and wearing off
Verbaan et al. \[[@B56]\] 2010 Netherlands 269/--- 11.0/--- --- No increased frequency of RLS in PD patients\
RLS severity correlated with PD severity, motor fluctuations, depressive symptoms, daytime sleepiness, cognitive problems, autonomic symptoms, and psychotic symptoms.
Angelini et al.^∗^ \[[@B39]\] 2011 Italy 109/116 5.5/4.3 --- No increased frequency of RLS in drug-naïve PD patients
Gjerstad et al.^∗^ \[[@B11]\] 2011 Norway 200/173 15.5/9.2 25/8.7 Sleep disturbance (PDSS), depressive symptoms
Suzuki et al. \[[@B49]\] 2012 Japan 93/93 5.5/2.2 32.3/14.0 Higher UPDRS-3 score, depressive symptoms, sleep disturbance (PDSS-2), and impaired QOL
Shimohata and Nishizawa \[[@B47]\] 2013 Japan 158/--- 11.4/--- 19.0/--- Sleep disturbance, daytime sleepiness
Rana et al. \[[@B51]\] 2013 Canada 127/127 21.3/4.7 --- Pain was reported at a higher rate
Bhalsing et al. \[[@B52]\] 2013 India 134/172 11.9/2.9 --- Sleep disturbance (PDSS)
Shin et al.^∗^ \[[@B48]\] 2013 Korea 151/--- 16.6/--- --- Severe disease, tremor
Azmin et al. \[[@B40]\] 2013 Malaysia 113/--- 9.7/--- --- Younger age of onset of PD, male gender, higher MMSE score, and less advanced HY stage
Rajabally and Martey \[[@B53]\] 2013 UK 37/37 16.2/10.8 40.5/16.2 No correlation with neuropathy or symptomatic neuropathy, cumulative levodopa exposure, or serum vitamin B12 levels in patients with PD
Oh et al.^∗^ \[[@B45]\] 2014 South Korea 225/--- 16.0/--- --- Supine/nocturnal hypertension
Fereshtehnejad et al. \[[@B54]\] 2015 Iran 108/424 14.8/7.5 --- A higher anxiety score, worse nutritional status, and poorer QOL
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
^∗^The studies assessing untreated PD patients.
HY: Hoehn and Yahr; LMR: leg motor restlessness; MMSE: Mini-Mental State Examination; PD: Parkinson\'s disease; PDSS: Parkinson\'s Disease Sleep Scale; PSQI: Pittsburgh Sleep Quality Index; QOL: quality of life; RLS: restless legs syndrome; UPDRS: Unified Parkinson\'s Disease Rating Scale.
[^1]: Academic Editor: Jan O. Aasly
|
{
"pile_set_name": "PubMed Central"
}
|
Tag: smart home
Your home could become a burglar’s best friend, if no one is living or staying at your home for a longer period. Protect your abandoned home with ADT smart home security system. https://www.adt.com/smarthome Facebook: http://www.facebook.com/adt Twitter: http://www.twitter.com/adt Call Us: 877-915-4581
This video introduce YET smart home factory Products The smart home products main include: IP camera , wifi smart plug, wifi controller to control auto door,window, and auto curtain…etc And other security control system …. With your phone App ,…
🐱 Welcome to our channel (schoen jeremie). If you Love Buying online like us, and if You love buying on AliExpress like us, this is THE Channel For you. Hot Brands, Best Sellers, Hot Trends and much more is waiting…
Check out these wireless doorbells to make sure your door is attended and your home is secure with these motion sensor video and wireless smart doorbells. ## Links to check price or buy ✅ US:E Smart Camera Door Lock →…
Brinks Home Security’s Video Monitoring solution allows you to see what’s happening at your home when you can’t be there. Video Monitoring integrates seamlessly with your other smart home solutions. Learn more at www.brinkshome.com.
Saving energy at home should be simple. With a Smart Thermostat, it is. A Smart Thermostat offered by Brinks Home Security™ lets you save energy simply, with no guesswork, complexity or inconvenience necessary. In fact, you’ll barely notice… except when…
You have one home – why not have one app to control it? With the Brinks Home Security mobile app, control your security system, thermostat, locks, lights & more all from the palm of your hand. Watch as we demonstrate…
|
{
"pile_set_name": "Pile-CC"
}
|
The CRU graph. Note that it is calibrated in tenths of a degree Celsius and that even that tiny amount of warming started long before the late 20th century. The horizontal line is totally arbitrary, just a visual trick. The whole graph would be a horizontal line if it were calibrated in whole degrees -- thus showing ZERO warming
Monday, October 31, 2005
NOW FOR THE PROFOUND STUFF: THE RECENT HURRICANES WERE CAUSED BY JAPAN'S MAFIA!
An Idaho weatherman says Japan's Yakuza mafia used a Russian-made electromagnetic generator to cause Hurricane Katrina to strike America. Meteorologist Scott Stevens, a nine-year veteran of KPVI-TV in Pocatello, said he believes the artificially created hurricane was a bid to avenge Japan for the Hiroshima atomic bomb attack -- and that this technology will soon be wielded again to hit another U.S. city. Stevens said he had been struggling to forecast weather patterns starting in 1998 when he discovered the theory on the Internet. It's now detailed on Stevens' Web site.
Scientists discount Stevens' claims as ludicrous. "I have been doing hurricane research for the better part of 20 years now, and there was nothing unusual to me about any of the satellite imagery of Katrina," said Rob Young, a hurricane expert at Western Carolina University in Cullowhee, N.C. "It's laughable to think it could have been man-made."
Stevens, who is among several people to offer alternative and generally discounted theories for the storm that flooded New Orleans, says a little-known oversight in physical laws makes it possible to create and control storms. That's especially true, he contends, if you're armed with the Cold War-era weapon said to have been made by the Russians in 1976. Stevens became convinced of the existence of the Russian device when he observed an unusual Montana cold front in 2004. "I just got sick to my stomach because these clouds were unnatural and that meant they had (the machine) on all the time," Stevens said. "I was left trying to forecast the intent of some organization rather than the weather of this planet."
Stevens said oddities in Hurricane Katrina storm patterns underpin his theory. And, according to his Web site, so does the fact that Katrina and Ivan -- the name given to a destructive hurricane that hit Florida in September 2004 -- both sound Russian.
Stevens' bosses at KPVI-TV say their employee can think and say what he wants -- as long as he keeps the station out of the debate and acknowledges that his views are his own opinion. Bill Fouch, KPVI's general manager, compared Stevens' musings to political or religious beliefs that journalists suppress on the job. "He doesn't talk about it on his weathercast," Fouch said. "He's very knowledgeable about weather, and he's very popular."
A REBUTTAL OF SOME POLITICALLY-MOTIVATED GREENIE NONSENSE FROM AUSTRALIA
An amused comment by Bob Carter, professor of paleoclimatology at Australia's James Cook University, on some recent pandering to Greenie fantasies by Australia's Environment Minister
The debate on climate change is over, says Ian Campbell. Claiming to be speaking on behalf of the federal Government, and expressly John Howard, the Environment Minister, according to the front page of this newspaper yesterday, said "he agreed broadly with the contention promoted recently in ... Tim Flannery's book The Weather Makers". The report summarised Campbell's opinion as follows: "As far as the Howard Government is concerned, Australians must accept that humans contribute to global warming and adapt their behaviour to save the planet."
Fine environmental rhetoric, which could have come straight out of Flannery's book. But what exactly is the threat that is now proclaimed to be beyond doubt and that we are being exhorted to avoid? Is it "global warming"? Unlikely. Earth's been there and done that plenty of times before, without our help. During the past 5000 years, in Greenland there have been five previous gentle warming cycles similar to that of the late 20th century. Two of these attained temperatures a little higher than was achieved in 1998, which marks the apparent peak of our most recent, and seemingly entirely normal, warming cycle.
Or we could go back another few thousand years, to the earlier part of our present warm, interglacial interval. Climatic records from many places in the world record temperatures then that were 1C-2C higher than today's, which is why that period of time is sometimes called the early Holocene climatic optimum.
We could take a deeper breath and think about four earlier interglacial intervals during the past 400,000 years, when on each occasion temperatures lasted for about 10,000 years at levels comparable with today's.
Each such interglacial interval also features a short, especially warm climatic optimum. Counting backwards in the Antarctic ice core record, these optima attained temperatures respectively of about 4C, 1C, 6C and 3C warmer than today. Must have been lots of parrots dropping out of the trees in Cairns on those heatwave occasions. Where are their bodies?
Ah, but surely it is "human-caused global warming" that's the problem? Who says? The UN's Intergovernmental Panel on Climate Change? Recall that its 2001 report said that "there is new and stronger evidence that most of the warming observed over the past 50 years is attributable to human activities." Well, not much credibility left there. Not only has some of the pivotal science of the IPCC report been undermined by later analysis, but its econometric modelling has been so bad that Nigel Lawson, former British chancellor of the exchequer, has recently recommended to the US Senate that the panel be closed down. He is not alone in this view. It seems, then, that it wouldn't be sensible for Australia to base its climate policy on advice from the now-discredited IPCC.
Humans certainly have an effect on local climate. For instance, the surrounds of Melbourne are now about 1C warmer than they were before European settlement. This, the urban heat island effect, is because modern metropolises comprise extensive areas of concrete, macadam, steel, bricks and glass, all of which act to trap more solar energy than did the preceding virgin landscape.
You might think that this effect, aggregated all over the world and added to by other landscape changes associated with modern agricultural practices, would produce the human-caused global warming signature that the minister seems to be worrying about. You might think so. But truth to tell, and IPCC views notwithstanding, no global human temperature-change signal has yet been detected that stands out from the natural background vagaries of the climate system. And this despite worldwide expenditure of about $US50 billion on climate-related research since the early 1990s.
So, finally, it must be the increasing carbon dioxide in the atmosphere that's worrying the minister, who says he has "spent an enormous amount of my time getting to understand the problem and getting to understand the solutions", noting that the world's carbon levels began spiking alarmingly in the 1950s and were headed for dangerously high levels. But why is the minister worried about this? Dangerously high? Is this more IPCC mythology? All the evidence is that atmospheric carbon dioxide is beneficial to the ecology of the planet. There have been many times in the geological past when carbon dioxide levels exceeded those of today by up to an order of magnitude. The main result was lush plant growth and a diversified ecology.
The minister may be right to assert that the debate on climate change is over. But only in two ways: that contemporary climate change is proceeding in the same manner as known earlier episodes of natural climate change; and that any human-caused global climate signature is buried in the noise of the climate system.
That raw sewage bubbles out of the ground at Yellowstone National Park--after more than a century of botched conservation--would come as no surprise to Alston Chase, who 20 years ago wrote "Playing God in Yellowstone: The Destruction of America's First National Park." Mr. Chase, a former professor of philosophy turned journalist, presents a clear critique of ever-changing environmental beliefs and the damage that they have caused the actual environment. As a philosopher, he is contemptuous of much conventional wisdom and the muddle-headed attitudes he calls "California cosmology."
2. "The Culture Cult" by Roger Sandall (Westview, 2001).
In "The Culture Cult: Designer Tribalism and Other Essays," anthropologist Roger Sandall explores romantic primitivism--the myth of Eden and the Noble Savage. Mr. Sandall's histories of utopian communities (Robert Owen's New Harmony, John Humphrey Noyes's disastrous Oneida) are vivid, and his portraits of leading primitivists, from Rousseau to Mead to Levi-Strauss, are sharply drawn. This ignorant nostalgia for our tribal past ignores the truly horrific reality of tribal initiation, warfare, mutilation and human sacrifice.
3. "Man in the Natural World" by Keith Thomas (Oxford, 1984).
Don't be put off by the academic title of Keith Thomas's "Man in the Natural World: Changing Attitudes in England 1500-1800." The book's a delight. Mr. Thomas's account is both detailed and charming as he guides the reader from the Tudor view, that nature was made for man to exploit, through the later sense that nature was to be worshipped and cherished (such that trees became pets and aristocrats gave names to their great estate trees and said good-night to them each evening). Still later came the Romantic preference for untouched nature and rough settings, a rarified taste that required "a long course of aesthetic education." At every turn, Mr. Thomas emphasizes the contradictions between belief and behavior.
No one should miss Bjrn Lomborg's "The Skeptical Environmentalist." The author, a Danish statistician and former Greenpeace activist, set out to disprove the views of the late Julian Simon, who claimed that environmental fears were baseless and that the world was actually improving. To Mr. Lomborg's surprise, he found that Simon was mostly right. Mr. Lomborg's text is calm and devastating to established dogma.
5. "The Logic of Failure" by Dietrich Doerner (Perseus, 1998).
Future environmentalists will heed Dietrich D"rner's "The Logic of Failure." Mr. Doerner is a cognitive psychologist who invited academic experts to manage the computer simulations of various environments (an African herding society, a town in Maine). Most experts made things worse. Those managers who did well gathered information before acting, thought in terms of complex-systems interactions instead of simple linear cause and effect, reviewed their progress, looked for unanticipated consequences, and corrected course often. Those who did badly relied on a fixed theoretical approach, did not correct course and blamed others when things went wrong. Mr. Doerner concludes that our failure to manage complex systems such as the environment reflects bad habits of thought, overreliance on theory and lazy procedures. His book is brief, cheerful and profound.
***************************************
Many people would like to be kind to others so Leftists exploit that with their nonsense about equality. Most people want a clean, green environment so Greenies exploit that by inventing all sorts of far-fetched threats to the environment. But for both, the real motive is to promote themselves as wiser and better than everyone else, truth regardless.
Global warming has taken the place of Communism as an absurdity that "liberals" will defend to the death regardless of the evidence showing its folly. Evidence never has mattered to real Leftists
Comments? Email me here. My Home Page is here or here. For times when blogger.com is playing up, there are mirrors of this site here and here.
Sunday, October 30, 2005
BEWARE THE LOCO-VORES
The latest twist in the endless saga of Greenie nuttiness. These guys must have very little to do with themselves if they have time to worry about this stuff. I'll bet not many of them are raising families
Anti-globalists and environmentalists often decry the increasingly complex interactions between producers and consumers. They prefer that people buy products grown or made closer to home even when they cost substantially more than goods transported longer distances.
Take, for example, Locavores, a San Francisco-based "group of concerned culinary adventurers" who try to only eat foods that were grown within a hundred miles of that city. This summer the group suggested that people try to go the entire month of August without buying food produced more than that distance from home.
Locavores obviously has not heard of "roundabout methods of production" as described by Austrian economist Eugen von B”hm-Bawerk in the late 1800s. He concluded that productivity increases often result from more time-consuming methods of production. (See Economics for Real People by Gene Callahan, pp. 133-37, for a short explanation of the concept.)
The only reason a longer production process would be adopted is because it is more economically efficient than the alternatives. In a developed economy most of the direct approaches to increased productivity have already been tried and only the roundabout ones remain to be pursued.
The Locavores claim that our food travels an average of 1,500 miles en route to our homes. They do not cite a source, but that seems reasonable to me. Sound economic reasons have caused food-supply chains to lengthen over the past 50 years. Most of the population growth of the United States over that time has been in the coastal states, while much of the best farmland is in the heartland of the country. People have chosen to live in suburbs with more living space, which means that land which 50 years ago could have grown locally produced food now is unavailable for food production.
As family incomes have gone up over that time, we have sought out more varied diets. People in Boston have grown accustomed to Florida orange juice and California whole oranges. We also want oranges 365 days a year, not just as a Christmas treat, as was common a few generations ago. Chilean grapes are available in my suburban-Chicago grocery store in the middle of the winter. Consumers in Minneapolis would have a pretty bland diet in January if all their food came from within a hundred miles.
The supply chain has also lengthened because North Dakota is simply a better place to grow high-protein wheat for pasta than is Florida. Economies of scale in production and processing also create longer supply chains. Many places in the country could grow processing pumpkins, but about 60 percent of the harvested acreage is in central Illinois because that provides the greatest economic efficiencies.
What is true in the United States is even truer in the rest of the world. Our country is blessed with some of the best farmland in the world, a varied climate, and a large population. Try anything close to the locavore approach in Tokyo or Helsinki, and it would be a disaster.
The Locavores website makes obvious that their concerns are broader than just eating locally to get fresh food. The members believe that corporations are the principal beneficiaries of the global food system rather than family farms, local businesses, and consumers. In reality, corporations serve as a vital link between crop and livestock producers and consumers. A family hog farmer in central Nebraska needs some type of business, such as a corporation, to transform a hog into a pork crop and transport it to consumers on the West Coast. Local firms in central Nebraska would be out of business without an intermediary linking farmers to consumers in far off cities.
The keen defence of kangaroos means real dangers to native animals are overlooked, writes a really sincere environmentalist -- Barry Cohen. Barry Cohen was an Australian federal legislator from 1969 until 1990. He recently sold his feral-animal-proof wildlife sanctuary on the Central Coast, which was created to show that the exclusion of cats and foxes would ensure native wildlife would not only survive, but thrive.
On my first trip to Britain as federal environment minister, having just announced the 1984 annual kangaroo cull quota of 2 million, I was unprepared for the reception at my London hotel. A seven-metre-high inflatable kangaroo and a sign, "MR COHEN THE KANGAROO KILLER IS IN TOWN", greeted me.
I asked the protester what concerned him. "This Cohen fellow is massacring Australia's national symbol. They'll soon be extinct," he bellowed. "Which species do you object to Australia culling?" He looked at me blankly. "Do you know how many species there are?" After a long silence he answered, "Three? Five?"
"Close. There are 51 species of kangaroos (macropods) of which seven are believed to be extinct with many others rare, endangered or vulnerable. Smaller species, under five kilograms, such as the parma, yellow foot, brushtail and bridle nail-tailed rock wallabies, are very rare and highly protected. The species culled are the eastern and western grey kangaroos, the red kangaroo, the wallaroo, whiptail, agile and Bennett's wallaby. Increased crops, pastures and dams and the lack of natural predators ensures these larger species are often in their tens of millions and in plague proportions. If we didn't control their numbers there wouldn't be any farmers left."
He looked at me with disbelief. "How do you know all this?" "I'm Barry Cohen."
Discussion elicited that he had been fed "information" by some Australian conservation organisations. The lies some told were legendary, their predictions grotesque. Foremost among the predictions was the imminent extinction of the "kangaroo". It never happened. A few years ago the cull quota rose to about 7 million. This year, it's just under 4 million.
When their dire predictions failed to eventuate the conservationists talked of the inhumane methods of killing. One fanatic produced a photograph of a kangaroo supposedly skinned alive to save the cost of a bullet. I suggested she try catching a kangaroo and skinning it alive. Not surprisingly, the tabloid press and TV had a field day.
I had thought this nonsense had finished but with the release of the book Kangaroos: Myths and Realities, by the Australian Wildlife Protection Council, the usual suspects surfaced mouthing the same old cliches. No one ever asks them the obvious question: "You were predicting the extinction of the kangaroo 40 years ago, yet despite an annual cull quota averaging about 3 to 4 million the population of the culled species is still in the tens of millions. How is that?"
I loathe this nonsense because of the damage it does to the cause of the preservation of species that are genuinely endangered - the small species - and the failure by governments to tackle the problem of the introduced predators - cats and foxes - that are also destroying a vast array of native wildlife including birds, reptiles and amphibians. More than 20 years ago the NSW government, under pressure from the anti-fox-fur lobby, abolished the bounty on fox skins. The fox population exploded. The effect on native wildlife was devastating. I take a different view from the animal liberationists. Every woman who wears a fox fur should get an Order of Australia medal.
And then there are cats. Beautiful creatures, but they have no place in the Australian bush. No matter how well fed, they are natural hunters. You can have cats or native wildlife; you can't have both. Fortunately, a more environmentally aware generation is opting not to have cats as pets. Don't, however, hold your breath waiting for politicians or conservationists to call for action against cats. One politician in Western Australia did and was pilloried. Foxes and cats do more damage to our native wildlife than all the farmers, loggers, miners and developers put together. The latter do their share of damage but don't come close to that wrought by the ferals.
The danger from the latest outburst against the scientifically determined kangaroo cull is that it will divert attention from the task of preserving genuinely endangered native wildlife.
One More Chance For Sound Energy Policy
As debate begins in the U.S. Senate on an energy bill, government needs to remove barriers outside of Hurricane Alley that restrict domestic energy production and refining that would benefit consumers, according to NCPA Senior Fellow H. Sterling Burnett, noting that private firms both prepared for and responded to the recent hurricanes better and with more effectiveness than governments.
"The moratorium on new oil and gas development and production along the Atlantic shelf and California must end, and we must move forward with production in the Arctic National Wildlife Refuge (ANWR)," Burnett said. "A disruption in the supply of energy, especially gasoline, as a result of Hurricanes Katrina and Rita has highlighted a problem that policy makers have ignored for too long." A bill has already passed the House of Representatives, but contains several potential pitfalls, Burnett explained:
* The House bill does not provide for expanding energy production outside the Gulf of Mexico.
* Political road blocks - federal, state and local - continue to inhibit expansion, even though market conditions were already encouraging companies to seek out new opportunities.
* Allowing new refineries to be built on public lands could preempt state and local restrictions, but the bill should make it clear that any leasing arrangements should be done at market rates with subsidies.
Burnett also pointed out that new energy legislation need not address price gouging, since government already has the power to investigate such behavior through the Federal Trade Commission and other agencies and, indeed, an investigation of pricing following the devastation from Katrina and Rita is already underway. Based on past experience, another study of the issue will be a waste of scarce federal resources at a time when money and manpower are scarce. "There is very little that can be done short-term to improve America's energy prospects in the short term," Burnett added, but allowing states to share the wealth from new energy development off their coasts is a good start to correcting these errors."
Many people would like to be kind to others so Leftists exploit that with their nonsense about equality. Most people want a clean, green environment so Greenies exploit that by inventing all sorts of far-fetched threats to the environment. But for both, the real motive is to promote themselves as wiser and better than everyone else, truth regardless.
Global warming has taken the place of Communism as an absurdity that "liberals" will defend to the death regardless of the evidence showing its folly. Evidence never has mattered to real Leftists
Comments? Email me here. My Home Page is here or here. For times when blogger.com is playing up, there are mirrors of this site here and here.
Saturday, October 29, 2005
DUBIOUS REASONING
The research abstracted below shows that decreased snow cover is a major CAUSE of regional warming in the Arctic. But what causes the decreased snow cover? The authors assert, of course, that it is our old villain, global warming. But, in very cold climates, the amount of precipitation is the major influence on what builds up on the ground so it seems likely that the reduced snow cover is the effect of reduced precipitation (snowfall) rather than anything else. And since global warming should INCREASE precipitation, that effect can hardly be traceable to global warming -- so is most probably traceable to more local climate cycles
Role of Land-Surface Changes in Arctic Summer Warming
By F. S. Chapin, III et al
A major challenge in predicting Earth's future climate state is to understand feedbacks that alter greenhouse-gas forcing. Here we synthesize field data from arctic Alaska, showing that terrestrial changes in summer albedo contribute substantially to recent high-latitude warming trends. Pronounced terrestrial summer warming in arctic Alaska correlates with a lengthening of the snow-free season that has increased atmospheric heating locally by about 3 watts per square meter per decade (similar in magnitude to the regional heating expected over multiple decades from a doubling of atmospheric CO2). The continuation of current trends in shrub and tree expansion could further amplify this atmospheric heating by two to seven times.
There were floods in Hawick this week. Not quite Hurricane Katrina, but with basking sharks invading Scottish waters we all know our climate is doing funny things. A consensus has emerged over the past couple of decades that it is best to be safe rather than sorry in this situation. So public policy has moved in the direction of redirecting the emission of greenhouse gases such as carbon dioxide, which scientists have implicated as a possible factor behind global warming. But just how serious are our politicians about cutting carbon dioxide emissions? Do they really mean it or is it just playing to the gallery? And how committed are the various environmental pressure groups to making the many political compromises needed to effect change in the energy market? Are they players or merely utopians who reject any compromise solution - which is no solution at all.
The facts speak for themselves. The Blair government has set a target for achieving 10 per cent of Britain's energy from renewable sources by 2010. however we can barely manage 4 per cent, & most of it from large-scale hydro-electric plants which the environmental lobby would oppose if built today. Wind power is the only practical renewable technology available in the timeframe but it struggles to produce 0.5 per cent of electrical power after 15 years of development at enormous public subsidy. Besides the environmental lobby has now turned its guns against shore-based wind turbines. Lesson: the government will not meet its 2010 renewable energy target as the Prime Minister, Tony Blair, hinted loudly in his speech to the Labour Party conference a few weeks ago.
In Scotland, championed by the environmental minister, Ross Finnie, illusions regarding renewable power illusions are even more fanciful. Scotland has the advantage of the great hydroelectric schemes built in the 1940s & 1950s, which provide around 13% of our electric needs. Rather than build on this legacy in a sensible fashion, Mr Finnie has set an absurd target of generating 40 per cent of power generation needs provided by renewables in 2020. This makes the Executive - especially its Liberal Democratic part - look heroic to the more impressionable part wing of the environmental lobby. However any sensible observer realises Mr Finnie's figure is either hopelessly farfetched or a cynical ploy be a politician who won't be around in 15 years time when it is exposed as a fraud.
A look at the small print of the Executive's policy on renewables reveals it is premised on the untenable assumption that future growth in energy demand is limited to between zero and 1% per annum. But governments of all parties have championed energy conservation in Britain for 30 years only to see demand soar by 60%. Electricity demand in the United Kingdom rises at 1-1.5% a year. Unless Mr Finnie plans to knock down most of Scotland's houses over the next 15 years & rebuild them with a serious eye to energy conservation you can forget the 40% figure. Even if Mr Finnie did succeed in his plans, renewable energy is substantially more expensive than other forms of generation. Household bills would skyrocket, while what is left of Scottish industry would be put at a serious competitive disadvantage.
Fortunately a little common sense has started to break out in government circles in the past few weeks, especially at Westminster. Mr Blair has begun a not-so-subtle campaign to put nuclear power back on the agenda as an alternative that renewables or conservation can do the job of cutting down on fossil fuel emissions fast enough to help with global climate change.
A clue as to how serious the Prime Minister is can be found in the fact that that the Department of Trade & Industry has recently confirmed it has been holding preliminary talks with major nuclear utilities in Germany & France. The DTI has already identified 3 sites to host new reactors, including Hunterston in Ayrshire. That puts Scotland squarely in the nuclear frame.
Not for the first time, the Executive is prevaricating. The Hunterston B nuclear power station in Ayrshire is set to close in 2011, while Torness in East Lothian will last until around 2020. Together they supply some 20% of Scotland's electricity. Take them out of the game & renewable will have to fill even more than that impossible 40% target. Unless new nuclear stations are commissioned, the reality is that Britain & Scotland are going to have to burn a lot more expensive, imported natural gas. So much for cutting fossil fuel emissions. So much for security of energy supply.
The conclusion is inescapable: if we want to cut fossil fuel emission in a reasonable timeframe, the only practical policy is to build a new generation of nuclear generating plant. Others are thinking this way too. China plans to build 30 new reactors by 2020, while environmentally-conscious Finland has already broken Europe's long moratorium on commissioning atomic power stations.
The latest designs of nuclear plant embody passive safety systems that do not require human intervention in the case of an accident. The Chernobyl reactor on the other hand, relied on human operating procedures which were violated. The new reactors are also much more economical to build, operate & maintain than the current generation.
Long term waste storage remains an issue, but if there is a choice to be made it is surely more to cut the fossil fuel emissions now and sort out the nuclear waste at our leisure. Half a loaf is always better than nothing to a starving man. It is just such hard political choices that the Executive has to start making.
EU BUREAUCRATS PISSING INTO THE WIND
Revised EU climate change programme launched
The second European programme includes road transport, aviation and shipping, plus carbon capture and storage, for the first time. The programme will form part of the EU’s strategy on climate change after 2012, which will include setting new targets on greenhouse gas emissions.
The European Commission plans to develop its climate change programme by bringing new sectors under carbon management, including shipping, light-duty vehicles and aviation - which is also due to be included in the EU emissions trading scheme. The Commission wants a “strong push for innovation” in new technologies such as carbon capture and storage, and in “adaptation to those aspects of climate change that are unavoidable”.
Four working groups will report to the Commission next spring on: The existing climate change programme: This includes EU Directives that seek to reduce energy demand and change the energy mix, plus the contribution of agriculture, transport and greenhouse gases other than CO2. The Commission plans to issue a policy paper on the review by mid-2006. Carbon capture and storage: The technology’s potential, costs and risks will be examined, plus the outline of a regulatory framework that would encourage its development. A Commission Communication is planned by early 2007. Aviation and how it should be incorporated into the emissions trading scheme. Reducing emissions from light-duty vehicles. A fifth working group will report in September 2006 on measures to help the EU adapt to climate change, including the likely impacts on land, agriculture and water resources, and on human health and habitation.
The existing climate change programme: This includes EU Directives that seek to reduce energy demand and change the energy mix, plus the contribution of agriculture, transport and greenhouse gases other than CO2. The Commission plans to issue a policy paper on the review by mid-2006. Carbon capture and storage: The technology’s potential, costs and risks will be examined, plus the outline of a regulatory framework that would encourage its development. A Commission Communication is planned by early 2007. Aviation and how it should be incorporated into the emissions trading scheme. Reducing emissions from light-duty vehicles. A fifth working group will report in September 2006 on measures to help the EU adapt to climate change, including the likely impacts on land, agriculture and water resources, and on human health and habitation.
Many people would like to be kind to others so Leftists exploit that with their nonsense about equality. Most people want a clean, green environment so Greenies exploit that by inventing all sorts of far-fetched threats to the environment. But for both, the real motive is to promote themselves as wiser and better than everyone else, truth regardless.
Global warming has taken the place of Communism as an absurdity that "liberals" will defend to the death regardless of the evidence showing its folly. Evidence never has mattered to real Leftists
Comments? Email me here. My Home Page is here or here. For times when blogger.com is playing up, there are mirrors of this site here and here.
Friday, October 28, 2005
THE EVILS OF CHEESE
California water-quality enforcers have agreed to drop all allegations of wrongdoing against the world's largest cheese factory in the biggest water pollution case in Central Valley history, according to a tentative settlement released Tuesday. In exchange, Hilmar Cheese Co. of Merced County will pay $3 million to be divided between the state and a Hilmar-commissioned study of groundwater pollution of the food processing industry as a whole, according to the agreement. The pact becomes effective upon approval by the politically appointed members of the state Central Valley Regional Water Quality Control Board, which is scheduled to consider it at a Nov. 28-29 meeting in Sacramento.
Settlement talks began six weeks ago after Hilmar launched a vigorous legal defense against a $4 million penalty, the largest the environmental enforcement agency has levied in the Central Valley. The penalty followed a Sacramento Bee investigation showing that the company saved millions of dollars by delaying required wastewater treatment. Neighbors of the plant south of Turlock have complained of hordes of flies and putrid odors coming off fields of milky waste. The factory produces more than 1 million pounds of cheese daily.
The agreement effectively acknowledges the company's key argument that the water board's pollution limits were technologically impractical to meet, considerably relaxing for the interim restrictions on salinity in Hilmar's wastewater. The settlement explicitly releases Hilmar from all allegations, not only those raised by the water-quality regulators, but also allegations of criminal wrongdoing - including illegally dumping wastewater into an irrigation canal. The criminal allegations were under investigation by the attorney general's office, which determined in July it would not file charges. In all, the settlement shifts the enforcement spotlight away from its wastewater disposal practices to those of food processors statewide, many of which operate under less restrictive pollution limits. "We are embracing this settlement because it sets the foundation for solutions to the issues that plague the entire food processing industry in the Valley," said John Jeter, Hilmar's chief operating officer.
Attorneys for the regional water board accepted the settlement "to avoid the uncertainty and expense of protracted litigation, and for Hilmar to focus its resources and efforts instead on seeking solutions to salinity issues confronting the Central Valley and other areas of the state," the agreement states.
The enforcement action against the cheese maker spotlighted the salty wastes from lightly regulated cheese manufacturers in the nation's No. 1 dairy state along with the factory leftovers that wineries, canneries and other food processors routinely spread on land. Industry representatives say the wastes break down as they percolate through the soil, keeping harmful levels of salts and other pollutants out of groundwater. Hilmar Cheese's own pollution tests in the past 15 years, however, showed otherwise, The Bee found. And state water board regulators said that the more they look, the more they find high levels of salinity in the groundwater beneath other waste fields of food processors.
The settlement requires Hilmar to dedicate $1 million of its $3 million payment to studying ways food processors can reduce salinity in wastewater. Although the salts, sugars and organic wastes from food processors are not considered toxic, [Salt and sugar not toxic! Phew! Glad we found that out] high levels can render groundwater economically untreatable for drinking water and irrigation.
The agreement states that the study "will not directly benefit" state water-quality regulators. "We're not directing the study, but we're hopeful that it will be beneficial across the board to government, to industry and to the public," said Catherine George, an attorney with the regional water board. The settlement payment also includes $1.85 million to the state for water pollution cleanups and $150,000 to reimburse the attorney general's office for helping the water board fend off Hilmar's legal challenges.
The truth is, the number and scale of disasters worldwide has been rising rapidly in recent decades because of changes in society, not global warming. In the case of hurricanes, the continuing development and urbanization of coastal regions around the world accounts for all of the increases in economic and human losses that we have experienced.
Even if tomorrow we could somehow magically put an end to global warming, the frequency and magnitude of climate-related disasters would continue to rise unabated into the indefinite future as more people inhabit vulnerable locations around the world. Our research suggests that for every $1 of future hurricane damage that scientists expect in 2050 related to climate change, we should expect an additional $22 to $60 in damage resulting from putting more people and property in harm's way.
None of this means that we should not pursue reducing greenhouse gas emissions, or that mitigating climate change is a bad idea. But we simply cannot expect to control the climate's behavior through energy policies aimed at lowering greenhouse gas emissions.
The current international policy framework for reducing greenhouse gas emissions - the Kyoto Protocol - is far too modest to have any meaningful effect on the behavior of the climate system. And even the modest agreements reached under Kyoto are failing.
For example, the European Environment Agency reported in 2004 that 11 of the 15 European Union signatories to Kyoto "are heading toward overshooting their emission targets, some by a substantial margin." And the other four are meeting their targets only because of non-repeatable circumstances, such as Britain's long-term move away from coal-based energy generation. To make matters much worse, most of the growth in emissions in coming decades will occur in rapidly industrializing nations such as China and India, which are exempt from Kyoto targets.
To make matters still worse, because of the way that greenhouse gases behave in the atmosphere, even emissions reductions far more rapid and radical than those mandated under Kyoto would have little or no effect on the behavior of the climate for decades. As James Hurrell, a scientist at the U.S. National Center for Atmospheric Research, testified before the U.S. Senate in July, "It should be recognized that [emissions reductions actions] taken now mainly have benefits 50 years and beyond now."
The implications are clear: More storms like Katrina are inevitable. And the effects of future Katrinas and Ritas will be determined not by our efforts to manage changes in the climate but by the decisions we make now about where and how to build and rebuild in vulnerable locations.
Do we have the will to pay the upfront economic and political costs of strict building-code enforcement and prudent land-use restrictions? Will we have the imagination to build resilience into the local economy, rewarding companies that find ways to preserve jobs after a disaster and contribute to a faster recovery? Do we have the decency to counter the market forces that cause poor people to live in the most vulnerable areas?
As we learn the lessons of this terrible hurricane season, the answers we give to these kinds of questions will create the conditions that determine the effects of future hurricanes. We are, that is, about to begin the process of managing the next disaster. What kind of disaster do we want it to be?
Hurricanes Katrina and Rita, the muscular headliners of this hurricane season, are just a preview of what to expect in coming years: More powerful storms. And the trend could span decades.... "We are solidly into one of these active periods," said Colin McAdie, a meteorologist with the National Hurricane Center. "We're figuring we're 10 years into this one. We could be looking at 10 to 20 more years."
That means next year, and the years after that, could be just as scary as this one, with mega-storms taking aim at Florida, the Carolinas or the Gulf Coast, spiking the anxiety levels of those in their path.
Hurricanes feed on warm water and scientists say the pattern of increased storm frequency and strength is caused by a cyclical rise in ocean temperatures. Besides fueling more powerful hurricanes, a higher number of storms means more of them will be stronger. "Certainly, with more frequency of active systems, we can see a lot more chances to have more intense hurricanes," said another storm forecaster, Chris Sisko.
The cycles commonly run about 25 to 30 years, scientists say, but can vary and see breaks of as much as a decade. The current cycle started around 1995. Prior to then, from 1975 to 1995, only four major hurricanes, defined as a Category 3 or higher, impacted the state. "In the `70s and `80s," McAdie said, "people were saying, `I guess we don't get hurricanes any more.'"
By contrast, 23 hurricanes hit South Florida alone during the last cycle of high hurricane activity, from 1926 to 1965. Of those storms, 15 were major ones. "We had about a 40-year period when it was very busy," said meteorologist Chris Landsea with the National Hurricane Center. During that cycle, on Labor Day 1935, a Category 5 hurricane hit the Florida Keys....
A cycle of warm ocean water fuels individual storms like Rita, and gives rise to stronger hurricanes during high activity cycles such as the present one. Researchers say a higher salt content in the Atlantic causes the water to become more dense, which in turn causes the water to grow warmer, perhaps by as much as a degree. That single degree can make a difference in whether a tropical wave rolling across the sea will develop into a devastating hurricane.
Researchers have yet to decipher the rhythm of the storm cycles. "The oceanographers are looking into that, trying to understand that," Landsea said. Contrary to speculation, the cycles may not result from human-induced global warming. Prevailing scientific opinion says global warming has little or nothing to do with the trend. "The science is not settled on that," McAdie said. "It's an open question."
Many people would like to be kind to others so Leftists exploit that with their nonsense about equality. Most people want a clean, green environment so Greenies exploit that by inventing all sorts of far-fetched threats to the environment. But for both, the real motive is to promote themselves as wiser and better than everyone else, truth regardless.
Global warming has taken the place of Communism as an absurdity that "liberals" will defend to the death regardless of the evidence showing its folly. Evidence never has mattered to real Leftists
Comments? Email me here. My Home Page is here or here. For times when blogger.com is playing up, there are mirrors of this site here and here.
Thursday, October 27, 2005
THE BIOTECH PANIC
Biotechnology holds the promise of some day allowing people to enhance themselves and their children using pharmaceuticals or genetic interventions. This prospect is welcomed by some, but causes a great deal of anxiety in many people: Are there enhancements whose benefits would come at the price of our humanity?
The President's Council on Bioethics worries that people who choose to use biotech enhancements would somehow lose themselves: The Council's report "Beyond Therapy" warns "we risk 'turning into someone else,' confounding the identity we have acquired through natural gift cultivated by genuinely lived experiences, alone and with others." Liberal bioethicist George Annas from Boston University is pushing for a global treaty that would ban all inheritable modifications to any person's genetic makeup. He favors such a treaty because he believes that "species-altering genetic engineering [is] a potential weapon of mass destruction, and [that] makes the unaccountable genetic engineer a potential bioterrorist." These are not objections grounded in concerns about safety or equity, but in the fear that such changes threaten the very humanity of those who choose them. But do they really?
At the annual conference of the Association for Politics and the Life Sciences last month, George Washington University philosopher David DeGrazia offered a quite different perspective. DeGrazia, who was participating in panel discussion on "Genetic Engineering and the Concept of Human Nature" asked: Are there core characteristics of being human that are inviolable? He concluded that "traits that are plausibly targeted by enhancement are not problematic." DeGrazia considered several traits as candidates for inviolability: internal psychological style, personality, general intelligence and memory, sleep, normal aging, gender, and being a member of the species Homo sapiens. He then systematically demolished various concerns that had been raised about each.
Regarding psychological style, there is no ethical reason to require that a particular person remain worried, suspicious, or downbeat if they want to change. As DeGrazia pointed out, psychotherapy already aims at such self-transformation. If a pill will make a person more confident and upbeat, then there is no reason for them not to use it if they wish. Personality is perhaps the external manifestation of one's internal psychological style, and here, too, it's hard to think of any ethical basis for requiring someone to remain cynical or excessively shy.
But what about boosting intelligence and memory? Of course, from childhood on, we are constantly exhorted to improve ourselves by taking more classes, participating in more job training, and reading good books. Opponents of biotech enhancements might counter that all of these methods of improvement manipulate our environments and do not reach to the genetic cores of our beings. DeGrazia points out that that the wiring of our brains is the result of the interaction between our genes and our environment. For example, our intellectual capacities depend on proper nutrition as well as on our genetic endowments. DeGrazia concludes that one's genome is not fundamentally more important than environmental factors. "They are equally important, so we should bear in mind that no one objects to deliberately introducing environmental factors [schools and diet] that promote intelligence," declares DeGrazia. It does not matter ethically whether one's intellectual capacities are boosted by schooling, a pill, or a set of genes.
All vertebrates sleep. Sleep, unlike cynicism, does seem biologically fundamental, but so what? Nature is not really a reliable source for ethical norms. If a person could safely reduce her need for sleep and enjoy more waking life, that wouldn't seem at all ethically problematic. I suspect that our ancestors without artificial light got a lot more sleep than we moderns do, yet history doesn't suggest that they were morally superior to us.
As everyone knows, the only inevitabilities are taxes and death. Death used to come far more frequently at younger ages, but globally average life expectancy has now risen from around 30 years in 1900 to about 66 years today. "Is normal aging an essential part of any recognizable human life?" asks DeGrazia. He falters here, admitting, "Frankly, I do not know how to determine whether aging is an inviolable characteristic." The question, then, is whether someone who does try to "violate" this characteristic by biotechnological means is acting unethically. It is hard to see why the answer would be yes. Such would-be immortals are not forcing other people to live or die, nor are they infringing on the rights and dignities of others. DeGrazia does recognize that biotech methods aimed at slowing or delaying aging significantly are not morally different from technologies that would boost intelligence or reduce the need for sleep. He concludes, "Even if aging is an inviolable core trait of human beings, living no more than a specified number of years is not."
In the age of transgendered people, it seems a bit outmoded to ask if one's biological sex is an inviolable core characteristic. Plenty of people have already eagerly violated it. Yet, the President's Council on Bioethics declared, "Every cell of the body marks us as either male or female, and it is hard to imagine any more fundamental or essential characteristic of a person." Clearly, thousands of people's fundamental sexual identities depend on more than the presence of an X or Y chromosome in their bodies' cells.
Finally, DeGrazia wonders if even being a member of the species Homo sapiens constitutes an inviolable core trait. He specifically thinks of a plausible future in which parents add an extra pair of artificial chromosomes carrying various beneficial genetic modifications to the genomes of the embryos that will become their children. Such people would have 48 chromosomes, which means that they could not reproduce with anyone who carries the normal 46 chromosomes. "It seems to me, however, that these individuals would still be 'human' in any sense that might be normatively important," concludes DeGrazia. I believe that DeGrazia is correct. After all, infertile people today are still fully human. Oddly, DeGrazia thinks that this "risk to reproductive capacities" might warrant restricting the installation of extra chromosomes to consenting adults only. But why should one think that a person with 48 chromosomes who falls in love with a person with only 46 chromosomes can't simply use advanced genetic engineering techniques to overcome that problem?
DeGrazia convincingly argues that whatever it is that makes us fundamentally us is not captured by the set of characteristics he considers. The inviolable core of our identities is the narrative of our lives—the sum of our experiences, enhanced or not. If we lose that core, say through dementia, we truly do lose ourselves. But whoever we are persists and perhaps even flourishes if we choose to use biotech to brighten our moods, improve our personalities, boost our intelligence, sleep less, live longer healthier lives, change our gender, or even join a new species.
In the 1970s, disco was groovy and Congress enacted a lot of counterproductive, over-regulatory energy policies. Subsequent years saw both polyester suits and command-and-control energy policy fall out of favor, to the nation’s benefit. The heavy hand of Uncle Sam, however, today still governs automakers with an outdated scheme called the Corporate Average Fuel Economy (CAFE) standards. In some quarters, disco is making a comeback, and similarly the bad ideas embodied in CAFE are also threatening to make another go-round on Capitol Hill.
Back in 1975, Congress responded to the 1973 OPEC oil embargo by creating the CAFE regulatory program. CAFE works by mandating a “sales-weighted mean” or average of the fuel economies for the fleets of new cars and light trucks that a manufacturer sells each year. As it currently stands, every automaker must meet a total average mileage requirement for their fleet of cars of 27.5 miles per gallon. For heavier trucks and SUVs, the standard is lower, rising from 21 mpg this year to 22.2 mpg in 2007. Got that?
In the face of rising gasoline and oil prices, some in Congress and the Administration are feeling the temptation to tighten the CAFE standards for U.S. automotive fleets.
In September, the National Highway Traffic Safety Administration proposed a major tightening of CAFE for light trucks. And earlier this month, Sen. Pete V. Domenici, New Mexico Republican and chairman of the Senate Energy and Natural Resources Committee told The Hill newspaper that "…we must take another look at the CAFE standards” in the wake of Katrina. Tightening CAFE, however, would be a major policy blunder. In fact, CAFE needs to be substantially reformed or even repealed and replaced with market-based incentives to reduce fuel consumption and improve air quality.
First, CAFE has not really worked. America’s national “total fleet fuel economy” peaked in 1987 at 26.2 mpg and has been declining slightly since then, primarily because the nation prefers heavier and more powerful light trucks and SUVs, which have a lower fuel economy. Beyond consumer preference, CAFE also does not work in part because as cars become more fuel-efficient, we drive them further.
More troubling is the tragic unintended consequence of CAFE, which prompts automakers to build cars that are lighter and use less steel. The result is cars that are less safe, and the additional deaths of literally thousands of Americans on our roadways every year. A 1999 USA TODAY analysis of crash data from the National Highway Traffic Safety Administration and the Insurance Institute for Highway Safety found that CAFE standards have resulted in about 46,000 people dying in accidents where the victims would have survived if their cars had been bigger and heavier. That is an extraordinary loss of life and a largely untold story.
Many in Congress and the environmentalist movement, and their co-conspirators in the mainstream media, seem to care more about imposing misguided feel-good conservation measures on American motorists than protecting the lives of innocent drivers. It is outrageous that some of the biggest Congressional supporters of CAFE also oppose new drilling for U.S. oil in Alaska (drilling that can be done using modern techniques that minimize the environmental impact) and oppose construction of new oil and gas refineries. These politicians and their environmentalist supporters are making an explicit choice that values the false promise of CAFE over safer cars and trucks for American families.
CAFE was part of a number of ill-considered policy responses to the oil shock that also included lowering the national speed limit to 55 mph and imposing price controls on oil and gasoline. Price controls and the national speed limit were both foolish ideas and have been repealed, but sadly CAFE lives on.
The goals of CAFE are admirable, but there are much better ways to encourage conservative than mandating the design of automotive fleets. We will never be able to regulate our way to fuel economy. It is time to reform or repeal CAFE, and instead pass forward-thinking policy measures that use market mechanisms to advance the goal of conservation while also giving consumers more choice and safety. Let consumers choose and let markets work.
Two hugely challenging global issues have dominated the UK's Presidency of the G8 - climate change and Africa. So far, there is no doubt that the second has captured the limelight, in terms of public awareness, media attention, political progress and the effectiveness of campaign groups. Ashok Sinha is in a unique position to understand the interface between the two issues and, perhaps, to redress the balance. In the 1990s, his background as a physicist led him into research on renewable energy and climate change issues, culminating in work for Forum for the Future on solar power. "I'd always been interested and active in development issues, but not professionally," Dr Sinha says. "It's become clear that there is a need to bring together development and environment campaigns."
Four years ago, the chance came to put this thinking into practice when he took over as coordinator of the Jubilee Debt Campaign, a coalition of local and national groups campaigning for the cancellation of debt for the poorest countries. It built on the hugely successful Jubilee 2000 movement, which secured 24 million signatories for its petition.
More recently, Dr Sinha sat on the coordinating committee for Make Poverty History, the coalition set up to pressurise G8 leaders to deliver on third world debt and aid. Members of this coalition are concerned that they were sold short at July's Gleneagles summit - but even so, there is no doubt that the summit achieved much more for Africa than on climate change (ENDS Report 366, p 53 ).
Coming back to climate change issues as director of Stop Climate Chaos "wraps things together on a personal level," Dr Sinha says. "Climate change is the most cross-cutting, unifying moral issue of our time. No sector of society is unaffected - it brings together really hard questions on global energy security with working with community groups in sub-Saharan Africa to help them adapt."
Stop Climate Chaos, launched in September, is explicitly modelled on the Jubilee 2000 and Make Poverty History campaigns. "Jubilee 2000 is a great example of how it's possible to take a difficult issue that's widely seen as remote and detailed, and effect change by pooling resources, developing a diverse but common platform, and taking a strong moral stance," Dr Sinha says.
The new climate movement employs only three people, but its mission is ambitious: "To build a massive coalition that will create an irresistible public mandate for political action to stop climate change." The coalition will draw strength from supporting organisations. So far it has won backing from 18 campaign groups, representing several million supporters. Major environmental groups - Greenpeace, Friends of the Earth, WWF and the RSPB - are key players. But other participants include large development groups such as Oxfam, Cafod and Christian Aid, together with several faith organisations and the National Federation of Women's Institutes.
Dr Sinha argues that the breadth of the coalition is one of its key strengths. "We've got to turn climate change from an environmental question to a moral imperative, the same way we did with third world debt and poverty. If we continue to be seen as 'green' we're not going to be successful - we've got to break out of the green ghetto." .....
The launch of Stop Climate Chaos coincided with the devastating aftermath of Hurricane Katrina. The images of a flooded New Orleans, and scenes of the world's superpower struggling to cope with a climatic disaster, provided a powerful reminder of the potential impacts of global warming. Stop Climate Chaos took out a full-page advert in the Times under the headline "Global warning". "We can't be sure Hurricane Katrina was caused by global warming," the advert read. "But without urgent action to slash greenhouse gas emissions we can expect hurricanes as powerful as Katrina to occur more often."
***************************************
Many people would like to be kind to others so Leftists exploit that with their nonsense about equality. Most people want a clean, green environment so Greenies exploit that by inventing all sorts of far-fetched threats to the environment. But for both, the real motive is to promote themselves as wiser and better than everyone else, truth regardless.
Global warming has taken the place of Communism as an absurdity that "liberals" will defend to the death regardless of the evidence showing its folly. Evidence never has mattered to real Leftists
Comments? Email me here. My Home Page is here or here. For times when blogger.com is playing up, there are mirrors of this site here and here.
Wednesday, October 26, 2005
NOW THE GREENIES WANT TO DICTATE WHAT YOU PLANT IN YOUR BACKYARD
The following moan is about a capital city (Adelaide) of an Australian State but I am sure similar moans are coming from Greenies in many American cities too. The claim is that housing drives out wildlife -- and that claim is is just plain wrong. I live in an old inner-city suburb of another Australian State capital (Brisbane) and over the years people have planted or let grow on their properties all sorts of trees and other greenery -- so that there are in fact many more trees than houses -- and many of them are towering trees at that. And all sorts of wildlife have taken up residence in the habitats so provided. I hear all sorts of bird calls of a morning, possums thunder around in my roof at night so much that I would be scared stiff if I was not used to them. I have a blue-tongue lizard living under my front stairs that occasionally frightens my Asian tenants to death (although it is of course harmless), I once had to rescue one of my Indian tenants from a large moth that had fluttered into his room and was terrifying him and a large python (about 8' long) recently took up residence in one of the toilets here. And I see little geckoes scuttling about nearly every day. And as for tadpoles, there are plenty of toads about so all of them would have been tadpoles once. And we won't mention the spiders and wasps.
The land may have originally have been cleared but it has been recolonized with a vengeance over the last 100 years. No doubt the pattern of species at present is different to what it once was but there is life abundant here nonetheless. The passage I have highlighted in red reveals the authoritarian intentions behind this massively overblown scare.
"Seventy-five of the state's top scientists have issued an alarming warning that unless attitudes change towards Adelaide's environment, it will become an "urban wasteland" devoid of much of the plant and animal life existing today. In a groundbreaking new book, to be launched next month, the team of scientists claims that by 2036 Adelaide's range of naturally occuring flora and fauna could be reduced from thousands of species to about 100.
Adelaide, Nature of a City is the largest biodiversity analysis of a city done in the world. A team of historians, geographers, architects, biologists and social scientists spent the past three years documenting the city as a living, breathing environment. Co-editor of the book and environmental biology professor Chris Daniels says a loss of biodiversity could make quality of life "appalling". "Children could grow up in a community that's free of our natural environment, so they don't get exposed to blue tongues and tadpoles," he says. "If we lose contact with the environment, our children could grow up thinking concrete and bricks is all there is. I don't think life would be worth anything, the quality of life would be appalling."
The study comes as Adelaide's urban sprawl - now stretching across 80km in mainly single storey housing - has reached proportions exceeding Rome, Mexico and Kolkata (formerly Calcutta). The book finds that if Adelaide continues to develop without being sympathetic to the natural environment: WEEDS such as boneseed and feral olive trees will continue to overtake parks and open areas; NATIVE animals will empty from national parks; Thousands of animal species today could be reduced to a meagre 50 species of birds, 16 species of mammals, 20 reptile species and as few as two frog species by 2036.
But the authors of Adelaide, Nature of a City stress while the predictions are dire, the 600-page book empowers people to do something about it - but we need to act now. Dr Daniels said poor planning, a lack of open space, habitat clearance and new housing and city office developments which failed to consider biodiversity were killing the natural environment. "For years we have been driving out our plant and animal life, building without thinking about how it will affect the ecology," Dr Daniels said. "We are building sprawling developments, clearing native habitats and creating tiny backyards. And when we compare our open space to other cities it is not as impressive as we might think." As the cityscape becomes more dense, residential blocks decrease in size and inner city living becomes more popular, there is less green space. Already, Adelaide is the most urbanised Australian city with 1.1 million of the 1.3 million South Australians living in the metropolitan area between the Hills and the sea.
In order to avoid a desolate future, people had to realise their backyards and parks interacted with native ecosystems and had a profound impact on local biodiversity, Dr Daniels said. "What you plant, clear, build and tear down could be the difference between a species' survival and extinction. To be visionary, we must be conservationists." "
A new report by economic consultancy Castalia reiterates and amplifies earlier warnings about the cost to the economy of attempts to meet greenhouse emission goals set out by the Kyoto accord, saying significant social and economic dislocations are in the cards should the government make a serious compliance attempt. The report was prepared by Castalia for the Greenhouse Policy Coalition, an industry association representing energy intensive companies on greenhouse gas and climate change issues. Last year, Castalia prepared an immensely controversial report saying the government had massively underestimated the costs of Kyoto compliance. Rubbished at the time by the government, the report has since been validated in its central points.
Author of that -- and this -- report, Alex Sundakov, says until new technologies have been developed to reduce greenhouse gas emissions from fossil fuel use and agriculture, it will be impossible to reduce emissions in New Zealand if we want to continue to grow our economy. “With nearly half our greenhouse gas emissions coming from agriculture, where there are no easy solutions, it will be very expensive if we have to try to wring the required national emission reductions out of the remaining sectors of the economy”, he said. “In addition, increasing CO2 emissions from transport are closely related to economic growth.”
He says another factor is that industrial process emissions are all associated with sectors that are globally mobile, so companies can move their operations to countries where they would not face carbon taxes and price based measures. The result would be a loss of business for New Zealand - the emissions simply moving to another country.
The Castalia report notes that New Zealand already has a large amount of renewables in our electricity generation system (hydro, wind, geothermal) and looking into the future, we have more thermal than renewable options to meet increasing demand for electricity. The report says the economic growth that New Zealand has enjoyed recently has been and will continue to be driven by the industrial processing/commodity exporting sectors and tourism, and while these are energy intensive activities this does not mean we are inefficient in our use of energy.
“We can not let climate change policies put a handbrake on the economy when we produce only 0.2 per cent of global emissions, half of which are from agriculture which is still the backbone of the economy," said Catherine Beard, executive director of the Greenhouse Policy Coalition. She said it is clear that price based measures like carbon taxes or carbon trading will do nothing in the absence of alternative technologies to reduce emissions, rather they will be a drag on the economy. “Even Britain’s Tony Blair has recently conceded that technology is the answer to the problem and that no country will willingly sacrifice its economic growth." She said the Asia Pacific Partnership -- a partnership between Australia, China, India, Japan, the Republic of Korea, and the United States of America -- represents nearly half the world’s population and 48 per cent of global emissions. "They are investing in technology solutions – we need to ask is this a more effective path for New Zealand to take?," she asked.
I'm so glad I am not an ecofundamentalist. To be a 'Green Bunny' is to doom oneself to perpetual unhappiness, frustration, and anger with your fellow human beings and the state of the world. The reason is simple. Ecofundamentalism is utopian (and remember "utopia" means "nowhere"). People will just not do what you demand. You are never going to achieve even a smidgen of your desires, and whatever you do manage to squeeze from a reluctant and unconvinced populace, you will always, like Oliver Twist, be left wanting more.
'Global warming' is the classic instance. Forget the science. The real drive for 'global warming' has always been a neo-puritan agenda to limit growth, to make small beautiful, to reduce population to some nebulous optimum, to rein in the 'Great Satan' (America), to crush the car and aeroplanes, to curb capitalism and globalisation, to continue to laud it over the developing world, especially those rampant Asian dragons, and to return us all to a 'Golden Organic Age' that never was. So powerful is 'global warming' as a legitimising 'science' for this deeply emotional agenda that there is no way the 'Green Bunnies' can drop it, whatever the scientific, economic, and political realities. The burrow would collapse. I actually feel sorry for them.
For reality will always bring a cold chill to the burrow. As Mr. Blair reminded us only a couple of weeks ago, no country can afford to abandon growth, and debating globalisation in the face of Brazil, China, India, Indonesia, Korea, South Africa - you name them - is like arguing whether summer comes before autumn. Indeed, rising CO2 levels are little more than a proxy measure of much-needed growth.
The truth is that the 'Green Bunny' agenda is just not going to happen, whatever the column inches of angst and anger in The Guardian, The Independent, and on Channel 4 (watch out for the new digital spin-off channel, More4, which launches this Monday, the 10th). There will be no limits to growth. Humans will continue to outpace limitations through constant adaptation and technological wizardry. Population will continue to rise naturally to around 8.9 billion, before the curve flattens through normal economic processes, through increasing wealth, and, hopefully, through the empowerment of women. Overall, life expectancy will continue to rise, despite the inevitable setbacks of AIDS and other viruses.
We will also, of course, continue to be afflicted by an ever-unstable earth, with earthquake, fire and flood, although the evidence clearly indicates that the more wealthy the country, the less damage these inflict. But stuff happens; that's life on a restless planet. And, there may indeed be that ultimate supervolcano or asteroid about which we can do absolutely nothing but pour out the single malt.
The 'Green Bunnies' are silflaying in the wind, and their increasingly shrill squeaks will follow, one by one, a pattern outlined in a brilliant article in The Economist way back in 1997 (I précis):
In Phase 1, some obscure scientists discover what they think is a potential threat to the Earth. In Phase 2, left-wing journalists oversimplify and grossly exaggerate the threat. The scientists become minor celebrities (The Big Brother Lab?). In Phase 3, the 'Green Bunnies' seize their opportunity, and they deliberately aim to polarise the issue - in the words of the original article: "Either you agree that the world is about to come to an end and are fired by righteous indignation, or you are a paid lackey of big business." In Phase 4, the bureaucrats emerge out of their cocoons, with international conferences mooted, thus keeping public officials well plied with club-class tickets and treats abroad. This inevitably diverts the argument to regulation, and totemic targets are set - and then ignored. In Phase 5, it is time to pick on a scapegoat. This is usually America, or 'big business'. Phase 6 sees the entrance of the sceptics who declare that the scare is grossly exaggerated. Again, in the words of The Economist article: "This drives greens into paroxysms of pious rage. 'How dare you give space to fringe views?' cry these once-fringe people to newspaper editors." Phase 7 witnesses the politicians and bureaucrats, and even some of the scientists who first proposed the scare, waivering, and trying to re-emphasise the scientific and political complexities. Meanwhile, the journalists start to get bored with the topic. Phase 8 becomes the quiet climb-down, while the issue slowly dies away from the headlines, to be replaced, of course, by a totally new scare. "And so", as Samuel Pepys might have said, "Back to Phase 1"...
In the long run, to be a 'Green Bunny' is going to make you a very 'Unhappy Bunny' indeed. Rupert Bear's 'Nutwood' is but a childhood Utopia; 'Virtualia', by contrast, is a future we cannot even yet conceive.
And 'Green Bunny' anger (not to mention More4) is but the rage of impotence.
Philip, academically intrigued by Utopian 'Golden Ages'. They always end in tears before bedtime.
***************************************
Many people would like to be kind to others so Leftists exploit that with their nonsense about equality. Most people want a clean, green environment so Greenies exploit that by inventing all sorts of far-fetched threats to the environment. But for both, the real motive is to promote themselves as wiser and better than everyone else, truth regardless.
Global warming has taken the place of Communism as an absurdity that "liberals" will defend to the death regardless of the evidence showing its folly. Evidence never has mattered to real Leftists
Comments? Email me here. My Home Page is here or here. For times when blogger.com is playing up, there are mirrors of this site here and here.
Tuesday, October 25, 2005
Labor productivity is perhaps the most important statistic in the economy. Over time, output per worker is what drives wage rates and the standard of living. Economists routinely forecast annual growth in U.S. labor productivity of roughly two percent for the next several decades. For example, the Trustees' Report for the Social Security Administration assumes productivity growth of 1.6 percent in its "intermediate" scenario.
To Kurzweil, this forecast would be ludicrously pessimistic. He would see it as an example of what he calls "intuitive linear" thinking, in which people forecast the future on the basis of a linear extrapolation of the past. For example, from 1960 through 1992, productivity growth in the nonfarm business sector averaged 1.6 percent. Accordingly, that may seem to be a reasonable rate of increase to project going forward.
However, since 1992, productivity growth has sped up. As this article from the Federal Reserve Bank of San Francisco points out, "The performance of productivity in the U.S. economy has delivered some big surprises over the last several years. One surprise was in the latter half of the 1990s, when productivity growth surged to average an annual rate of over 3%, more than twice as fast as the rate in the previous two decades. A bigger surprise has been the further ratcheting up...productivity growth averaged around 3.8% for the 2001 through 2004 period." This good news on productivity rarely surfaces in the media. In part, this represents a general pessimistic bias in the media and among the population at large. In part, it reflects the inability of people to grasp nonlinear thinking.
Technological innovation is what drives productivity growth. Kurzweil argues that the rate of technological innovation is doubling every decade, which to me would imply that the rate of productivity growth will double every decade. If annual productivity growth was 3.5 percent in the decade ending in 2005, then it will be 7 percent in the decade ending in 2015 and 14 percent in the decade ending in 2025. By that time, productivity would be more than 7 times what it is today. Thus, if average income per person is $35,000 today, then it will be over $250,000 per person (in today's purchasing power) in 2025.
At a growth rate of 14 percent, output per person "only" doubles at a rate of about every 5 years. Using a more elegant mathematical model of technological change, George Mason University economist Robin Hanson arrives at an even more striking forecast. He writes, "we might see yet another transition to a much faster mode, if such faster modes are possible. The suggestion is fantastic, namely of a transition to a doubling time of two weeks or less sometime within roughly the next century."....
In The Great Race, an essay that reflects the influence of Kurzweil's earlier writings, I pointed out that a question going forward is whether the economy can grow faster than Medicare. I argued that Moore's Law favors the economy, but demographic and political considerations favor Medicare.
If output per person in 2025 is more than 5 times what it is today, then the economy will have won the race. That means that all of the concerns that economists raise about the middle of this century, such as the external debt of the U.S. economy (the cumulative trade deficit), the fiscal implications of Social Security and Medicare, or gloomy scenarios for global warming, will be trivialized by the sheer heights that economic wealth will have scaled by that time. If Kurzweil is correct, then the mountain of debt that we fear we are accumulating now will seem like a molehill by 2040. We will pay off this debt the way someone who wins a million-dollar lottery pays off a car loan.
I hope that the lottery-winning Kurzweil scenario materializes, but I am still not comfortable watching our government accumulate obligations to future entitlement recipients at the current rate. As of now, however, the data on average productivity growth over the past decade is reasonably consistent with the hypothesis that the economy is winning the Great Race.
CLIMATE CHANGE (THE WRONG SORT) IS GREENING THE SAHEL
But that's still bad news to the Greenies, of course
Rainfall over parts of Africa's Sahel appears to be rising but its greening could prove a mixed blessing if the population surges as a result and drought follows, a leading ecologist said on Monday.
"Climate change models suggest the Sahel should be getting drier but observations suggest it is currently getting wetter," Jon Lovett of the University of York in Britain told Reuters on the sidelines of a conference on climate change in Johannesburg. "This could lead to an increase in food production and population, but this will be bad if it suddenly goes into another cycle of drought which cannot support all of the additional people and livestock," he said. "It has cycles of boom and bust."
Lovett said the Sahel was relatively green during the 1940s through to the 1960s but since then it has gone into a dry phase that seems to be ending. Intriguingly, he said research done more than a decade ago linked a wetter Sahel to increased hurricane activity in the Gulf of Mexico -- and this appeared to be occurring in the wake of the devastation wrought by Hurricanes Katrina and Rita. "This shows that what is happening in Africa can have an affect on the Gulf of Mexico," he said.
The Sahel is a transition zone between the arid Sahara to the north and the wetter more tropical areas in Africa to the south. It includes Senegal, Mauritania, Mali, Burkina Faso, Niger, Nigeria and Chad. Niger experienced a famine this year brought on by poor rains and locust swarms, underscoring the region's vulnerability.
Green Pirates for Peace landed outside MP Martin Salter's west Reading office to demand Tony Blair be keelhauled for caving-in on international efforts to avert climate change. The pirates - from environ-mental action group Rising Tide - want the Prime Minister to declare an immediate state of emergency in a last ditch attempt to avoid "catastrophic" climate change. Rising Tide also wants the government to slap economic sanctions on the USA for refusing to sign up to the Kyoto agreement.
The pirates almost ran their ship aground when they heard Tony Blair's "brutally honest" statement on drafting a successor agreement to Kyoto. He said: "The truth is, no country is going to cut its growth or consumption substantially in the light of a long-term environmental problem."
The pirates are also demanding the prime minister get down to work and negotiate a "fair and equitable" replacement for Kyoto, and to make a full apology for his comments.
But, after avoiding the plank over his captain's sins, Mr Salter said: "I do not believe in engaging in childish gesture politics such as calling for sanctions against the USA, or merely shouting at those who take a different point of view. "But when parliament returns after the summer recess I will be sponsoring two private members bills on climate change and sustainable energy and on the management of energy in buildings."
The smart-growth scam: "Transportation is essential to the daily life of nearly every American. Millions of people flock onto the freeways and streets to accomplish innumerable tasks each day. Americans love their cars. No other mode of transportation provides the same individualized choices, schedules, and overall convenience as the automobile. Despite the obvious advantages of automotive transportation, politicians and environmentalists continue to praise mass transit. They cite all kinds of data aimed at denigrating automotive transportation while claiming that public transportation works better and is more efficient. However, even though billions of dollars have been spent on such systems, they continue to lose money and passengers."
***************************************
Many people would like to be kind to others so Leftists exploit that with their nonsense about equality. Most people want a clean, green environment so Greenies exploit that by inventing all sorts of far-fetched threats to the environment. But for both, the real motive is to promote themselves as wiser and better than everyone else, truth regardless.
Global warming has taken the place of Communism as an absurdity that "liberals" will defend to the death regardless of the evidence showing its folly. Evidence never has mattered to real Leftists
Comments? Email me here. My Home Page is here or here. For times when blogger.com is playing up, there are mirrors of this site here and here.
Monday, October 24, 2005
I'VE HEARD IT ALL NOW
As I've often said: There's no such thing as a happy Greenie. This one certainly believes in the old ZPG slogan: "People are pollution"
It is healthier to walk along a busy road and breathe in exhaust fumes than to sit in the comfort of an airconditioned car, a U.S. researcher said on Wednesday. Robert Baker, president of the non-profit U.S. Indoor Air Quality Association, said American scientists have found the air inside cars to be more contaminated than the air outside, even in urban areas. This was due to unfiltered air from exhaust fumes and chemical smells from car seats, audio equipment and air fresheners. "The air in an indoor space does not clean itself, unlike the outdoors, where air travels," Baker told a news conference at the launch of a Singapore Web site on car cabin air quality.
The site, www.healthycarcabin.org.sg, says prolonged exposure to bad cabin air can cause cancer and respiratory diseases. Drivers often install air cleaning devices, such as airconditioner filters.
The biggest pollutants in indoor spaces, however, are people, said Baker. "We release gases, bacteria and fungi into the air. The more people there are in an enclosed area, the more harmful it is," he said. One solution was to open the car's windows, though the Web site recommends doing so only along country roads.
Twice in the last week I've seen mention of a new "crisis" in energy markets. The crisis? We may have reached the peak in oil production, meaning that in future years, the amount of oil available will dwindle. This story is the lead story on today's front page of USA Today. The headline:
Debate Brews: Has Oil Production Peaked?
The story begins:
Almost since the dawn of the oil age, people have worried about the taps running dry. So far, the worrywarts have been wrong. Oil men from John D. Rockefeller to T. Boone Pickens always manage to find new gushers.
But now, a vocal minority of experts says world oil production is at or near its peak. Existing wells are tiring. New discoveries have disappointed for a decade. And standard assessments of what remains in the biggest reservoirs in the Middle East, they argue, are little more than guesses.
The first expert is an investment banker:
"There isn't a middle argument. It's a finite resource. The only debate should be over when we peak" says Matthew Simmons, a Houston investment banker and author of a new book that questions Saudi Arabia's oil reserves.
In case you think this is no big deal, think again:
If the "peak oil" advocates are correct, however, today's transient shortages and high prices will soon become a permanent way of life. Just as individual oil fields inevitably reach a point at which it gets harder and more expensive to extract the oil before output declines, global oil production is about to crest, they say. Since 2000, the cost of finding and developing new sources of oil has risen about 15% annually, according to the John S. Herold consulting firm.
As global demand rises, American consumers will find themselves in a bidding war with others around the world for scarce oil supplies. That will send prices of gasoline, heating oil and all petroleum-related products soaring.
"The least-bad scenario is a hard landing, global recession worse than the 1930s, says Kenneth Deffeyes, a Princeton University professor emeritus of geosciences. "The worst-case borrows from the Four Horsemen of the Apocalypse: war, famine, pestilence and death".
The Four Horsemen of the Apocalypse? War, famine, pestilence and death? They ought to put this quote in the next OED under "hyperbole". And I thought this guy was trying to really scare us (Ht: Alan Nemes) but it turns out he's a moderate.
"This fear that we're running out of oil or some other key resource is a steady feature of the worrying class. The worriers have a bad track record. I understand that just because you're paranoid it doesn't mean people aren't chasing you and just because the worriers have always been wrong doesn't mean this won't be the time they get it right. But I still sleep well.
"There's nothing inherently worrisome about a peak in oil production. Such imagery preys on a quick emotional response--before the peak, we're going up. After the peak, it's all downhill. But there's nothing significant about a world where we produce and consume less oil next year than this year. If that's because remaining oil stocks are increasingly costly to bring up from the ground, that increases the incentive to economize on oil usage and find cheaper ways to get it out of the ground. That mitigates the harm.
"The worriers like to say that we've had cheap oil in the past and now we're going to have expensive oil in the future. They make it sound like it's a geophysical relationship between production and prices. As long as we're finding more oil, oil is cheap. When we're past the peak, it'll be expensive. Cheap oil means the good life. Expensive oil means misery. But prices aren't high or low. They move around. They are high or low relative to other prices. If oil becomes increasingly scarce, we'll do a thousand, (more like a billion) things to find other ways of doing what oil does.
"If it happened tomorrow, if tomorrow, there were no gas in the pumps and this persisted forever, it would be a very unpleasant adjustment. It isn't going to happen tomorrow. If it happens gradually over the next 30 or 50 or 100 years, it will have little or no impact on our overall well-being.
"And wasn't it supposed to be good not to rely on fossil fuels? Why all this new worrying? I think the worriers are trying to exploit the recent spike in gasoline prices to push public policy in directions that won't happen otherwise.
"Meanwhile, read Julian Simon. Remember that human creativity is the ultimate resource. Remember that the geophysicists don't understand prices. Sleep well, despite the worriers' desire to keep you tossing and turning. And if you hear the sound of hoofbeats in the still small spaces of the night, it's probably just a horse.
For more on why the "peak oil" simpletons are wrong, see my own article on peak oil here
The NYTimes Magazine has an excellent article on the housing market based around a discussion of the development firm Toll Brothers. Bob Toll the president of the firm is predicting that US housing prices will converge with those in Europe.
"In Britain you pay seven times your annual income for a home; in the U.S. you pay three and a half." The British get 330 square feet, per person, in their homes; in the U.S., we get 750 square feet. Not only does Toll say he believes the next generation of buyers will be paying twice as much of their annual incomes; in terms of space, he also seems to think they're going to get only half as much. "And that average, million-dollar insane home in the burbs? It's going to be $4 million."
Toll agrees with Glaeser et al. that the key force driving up prices is zoning and growth regulations. In New Jersey it now takes Toll Brothers up to two million dollars in legal fees and ten years in time to get the permits necessary to build.
Susan Wachter, a housing economist at the Wharton School at the University of Pennsylvania, has an interesting public choice insight about why zoning is worse in Europe.
European towns also have less incentive to encourage development, Wachter says, because they generally do not, unlike their American equivalents, depend on their local tax base to pay for education and services, which tend to be federalized.
This implies that towns in states that reduce their reliance on the property tax - often done, as in CA, in order to "equalize" school funding or other expenditure - will soon restrict development. Go to it graduate students.
***************************************
Many people would like to be kind to others so Leftists exploit that with their nonsense about equality. Most people want a clean, green environment so Greenies exploit that by inventing all sorts of far-fetched threats to the environment. But for both, the real motive is to promote themselves as wiser and better than everyone else, truth regardless.
Global warming has taken the place of Communism as an absurdity that "liberals" will defend to the death regardless of the evidence showing its folly. Evidence never has mattered to real Leftists
Comments? Email me here. My Home Page is here or here. For times when blogger.com is playing up, there are mirrors of this site here and here.
Background
This site is in favour of things that ARE good for the environment. That the usual Greenie causes are good for the environment is however disputed. Greenie policies can in fact be actively bad for the environment -- as with biofuels, for instance
This Blog by John Ray (M.A.; Ph.D.), writing from Brisbane, Australia.
I am the most complete atheist you can imagine. I don't believe in Karl Marx, Jesus Christ or global warming. And I also don't believe in the unhealthiness of salt, sugar and fat. How skeptical can you get? If sugar is bad we are all dead
And when it comes to "climate change", I know where the skeletons are buried
There are no forbidden questions in science, no matters too sensitive or delicate to be challenged, no sacred truths.
Context for the minute average temperature change recorded in the graph above: At any given time surface air temperatures around the world range over about 100°C. Even in the same place they can vary by nearly that much seasonally and as much as 30°C or more in a day. A minute rise in average temperature in that context is trivial if it is not meaningless altogether. Scientists are Warmists for the money it brings in, not because of the facts
"Thinking" molecules?? Terrestrial temperatures have gone up by less than one degree over the last 150 years and CO2 has gone up long term too. But that proves nothing. It is not a proven causal relationship. One of the first things you learn in statistics is that correlation is not causation. And there is none of the smooth relationship that you would expect of a causal relationship. Both temperatures and CO2 went up in fits and starts but they were not the same fits and starts. The precise effects on temperature that CO2 levels are supposed to produce were not produced. CO2 molecules don't have a little brain in them that says "I will stop reflecting heat down for a few years and then start up again". Their action (if any) is entirely passive. Theoretically, the effect of added CO2 in the atmosphere should be instant. It allegedly works by bouncing electromagnetic radiation around and electromagnetic radiation moves at the speed of light. But there has been no instant effect. Temperature can stay plateaued for many years (e.g. 1945 to 1975) while CO2 levels climb. So there is clearly no causal link between the two. One could argue that there are one or two things -- mainly volcanoes and the Ninos -- that upset the relationship but there are not exceptions ALL the time. Most of the time a precise 1 to 1 connection should be visible. It isn't, far from it. You should be able to read one from the other. You can't.
Warmists depend heavily on ice cores for their figures about the atmosphere of the past. But measuring the deep past through ice cores is a very shaky enterprise, which almost certainly takes insufficient account of compression effects. The apparently stable CO2 level of 280ppm during the Holocene could in fact be entirely an artifact of compression at the deeper levels of the ice cores. . Perhaps the gas content of an ice layer approaches a low asymptote under pressure. Dr Zbigniew Jaworowski's criticisms of the assumed reliability of ice core measurements are of course well known. And he studied them for over 30 years.
The world's first "Green" party was the Nazi party -- and Greenies are just as Fascist today in their endeavours to dictate to us all and in their attempts to suppress dissent from their claims.
Was Pope Urban VIII the first Warmist? Below we see him refusing to look through Galileo's telescope. People tend to refuse to consider evidence— if what they might discover contradicts what they believe.
Warmism is a powerful religion that aims to control most of our lives. It is nearly as powerful as the Catholic Church once was
Believing in global warming has become a sign of virtue. Strange in a skeptical era. There is clearly a need for faith
Climate change is the religion of people who think they're too smart for religion
Some advice from the Buddha that the Green/Left would do well to think about: "Three things cannot be long hidden: The Sun, The Moon and The Truth"
Leftists have faith that warming will come back some day. And they mock Christians for believing in the second coming of Christ! They obviously need religion
Global warming has in fact been a religious doctrine for over a century. Even Charles Taze Russell, the founder of Jehovah's Witnesses, believed in it
A rosary for the church of global warming (Formerly the Catholic church): "Hail warming, full of grace, blessed art thou among climates and blessed is the fruit of thy womb panic"
Pope Francis is to the Catholic church what Obama is to America -- a mistake, a fool and a wrecker
Global warming is the predominant Leftist lie of the 21st century. No other lie is so influential. The runner up lie is: "Islam is a religion of peace". Both are rankly absurd.
"When it comes to alarmism, we’re all deniers; when it comes to climate change, none of us are" -- Dick Lindzen
The EPA does everything it can get away with to shaft America and Americans
Cromwell's famous plea: "I beseech you, in the bowels of Christ, think it possible you may be mistaken" was ignored by those to whom it was addressed -- to their great woe. Warmists too will not consider that they may be wrong ..... "Bowels" was a metaphor for compassion in those days
Inorganic Origin of Petroleum: "The theory of Inorganic Origin of Petroleum (synonyms: abiogenic, abiotic, abyssal, endogenous, juvenile, mineral, primordial) states that petroleum and natural gas was formed by non-biological processes deep in the Earth, crust and mantle. This contradicts the traditional view that the oil would be a "fossil fuel" produced by remnants of ancient organisms. Oil is a hydrocarbon mixture in which a major constituent is methane CH4 (a molecule composed of one carbon atom bonded to four hydrogen atoms). Occurrence of methane is common in Earth's interior and in space. The inorganic theory contrasts with the ideas that posit exhaustion of oil (Peak Oil), which assumes that the oil would be formed from biological processes and thus would occur only in small quantities and sets, tending to exhaust. Some oil drilling now goes 7 miles down, miles below any fossil layers
As the Italian chemist Primo Levi reflected in Auschwitz, carbon is ‘the only element that can bind itself in long stable chains without a great expense of energy, and for life on Earth (the only one we know so far) precisely long chains are required. Therefore carbon is the key element of living substance.’ The chemistry of carbon (2) gives it a unique versatility, not just in the artificial world, but also, and above all, in the animal, vegetable and – speak it loud! – human kingdoms.
David Archibald: "The more carbon dioxide we can put into the atmosphere, the better life on Earth will be for human beings and all other living things."
Fossil fuels are 100% organic, are made with solar energy, and when burned produce mostly CO2 and H2O, the 2 most important foods for life.
Warmists claim that the "hiatus" in global warming that began around 1998 was caused by the oceans suddenly gobbling up all the heat coming from above. Changes in the heat content of the oceans are barely measurable but the ARGO bathythermographs seem to show the oceans warming not from above but from below
Consensus: As Ralph Waldo Emerson said: 'A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.'
Consensus is invoked only in situations where the science is not solid enough - Michael Crichton
Bertrand Russell knew about consensus: "The fact that an opinion has been widely held is no evidence whatever that it is not utterly absurd; indeed in view of the silliness of the majority of mankind, a widespread belief is more likely to be foolish than sensible.”
"Science is the belief in the ignorance of the experts" – Richard Feynman
"I always think it's a sign of victory when they move on to the ad hominem -- Christopher Hitchens
"The desire to save humanity is always a false front for the urge to rule it" -- H L Mencken
'Nothing is more terrible than ignorance in action' -- Goethe
“Doubt is not a pleasant condition, but certainty is absurd.” -- Voltaire
Lord Salisbury: "No lesson seems to be so deeply inculcated by experience of life as that you should never trust experts. If you believe doctors, nothing is wholesome; if you believe theologians, nothing is innocent; if you believe soldiers, nothing is safe."
Calvin Coolidge said, "If you see 10 troubles coming down the road, you can be sure that nine will run into the ditch before they reach you." He could have been talking about Warmists.
Some advice from long ago for Warmists: "If ifs and ans were pots and pans,there'd be no room for tinkers". It's a nursery rhyme harking back to Middle English times when "an" could mean "if". Tinkers were semi-skilled itinerant workers who fixed holes and handles in pots and pans -- which were valuable household items for most of our history. Warmists are very big on "ifs", mays", "might" etc. But all sorts of things "may" happen, including global cooling
There goes another beautiful theory about to be murdered by a brutal gang of facts. - Duc de La Rochefoucauld, French writer and moralist (1613-1680)
"Pluralitas non est ponenda sine necessitate" -- William of Occam
Was Paracelsus a 16th century libertarian? His motto was: "Alterius non sit qui suus esse potest" which means "Let no man belong to another who can belong to himself." He was certainly a rebel in his rejection of authority and his reliance on observable facts and is as such one of the founders of modern medicine
"In science, refuting an accepted belief is celebrated as an advance in knowledge; in religion it is condemned as heresy". (Bob Parks, Physics, U of Maryland). No prizes for guessing how global warming skepticism is normally responded to.
"Almost all professors of the arts and sciences are egregiously conceited, and derive their happiness from their conceit" -- Erasmus
"The improver of natural knowledge absolutely refuses to acknowledge authority, as such. For him, scepticism is the highest of duties; blind faith the one unpardonable sin." -- Thomas H. Huxley
Time was, people warning the world "Repent - the end is nigh!" were snickered at as fruitcakes. Now they own the media and run the schools.
"One of the sources of the Fascist movement is the desire to avoid a too-rational and too-comfortable world" -- George Orwell, 1943 in Can Socialists Be Happy?
The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts -- Bertrand Russell
“Affordable energy in ample quantities is the lifeblood of the industrial societies and a prerequisite for the economic development of the others.” -- John P. Holdren, Science Adviser to President Obama. Published in Science 9 February 2001
The closer science looks at the real world processes involved in climate regulation the more absurd the IPCC's computer driven fairy tale appears. Instead of blithely modeling climate based on hunches and suppositions, climate scientists would be better off abandoning their ivory towers and actually measuring what happens in the real world.' -- Doug L Hoffman
Something no Warmist could take on board: "Knuth once warned a correspondent, "Beware of bugs in the above code; I have only proved it correct, not tried it." -- Prof. Donald Knuth, whom some regard as the world's smartest man
"To be green is to be irrational, misanthropic and morally defective. They are the barbarians at the gate we have to stand against" -- Rich Kozlovich
“We’ve got to ride this global warming issue. Even if the theory of global warming is wrong, we will be doing the right thing in terms of economic and environmental policy.“ – Timothy Wirth,
President of the UN Foundation
“Isn’t the only hope for the planet that the industrialized civilizations collapse? Isn’t it our responsibility to bring that about?” – Maurice Strong, founder of the UN Environment Programme (UNEP)
Leftists generally and Warmists in particular very commonly ascribe disagreement with their ideas to their opponent being "in the pay" of someone else, usually "Big Oil", without troubling themselves to provide any proof of that assertion. They are so certain that they are right that that seems to be the only reasonable explanation for opposition to them. They thus reveal themselves as the ultimate bigots -- people with fixed and rigid ideas.
ABOUT:
This is one of TWO skeptical blogs that I update daily. During my research career as a social scientist, I was appalled at how much writing in my field was scientifically lacking -- and I often said so in detail in the many academic journal articles I had published in that field. I eventually gave up social science research, however, because no data ever seemed to change the views of its practitioners. I hoped that such obtuseness was confined to the social scientists but now that I have shifted my attention to health related science and climate related science, I find the same impermeability to facts and logic. Hence this blog and my FOOD & HEALTH SKEPTIC blog. I may add that I did not come to either health or environmental research entirely without credentials. I had several academic papers published in both fields during my social science research career
Update: After 8 years of confronting the frankly childish standard of reasoning that pervades the medical journals, I have given up. I have put the blog into hibernation. In extreme cases I may put up here some of the more egregious examples of medical "wisdom" that I encounter. Greenies and food freaks seem to be largely coterminous. My regular bacon & egg breakfasts would certainly offend both -- if only because of the resultant methane output
Since my academic background is in the social sciences, it is reasonable to ask what a social scientist is doing talking about global warming. My view is that my expertise is the most relevant of all. It seems clear to me from what you will see on this blog that belief in global warming is very poorly explained by history, chemistry, physics or statistics.
Warmism is prophecy, not science. Science cannot foretell the future. Science can make very accurate predictions based on known regularities in nature (e.g. predicting the orbits of the inner planets) but Warmism is the exact opposite of that. It predicts a DEPARTURE from the known regularities of nature. If we go by the regularities of nature, we are on the brink of an ice age.
And from a philosophy of science viewpoint, far from being "the science", Warmism is not even an attempt at a factual statement, let alone being science. It is not a meaningful statement about the world. Why? Because it is unfalsifiable -- making it a religious, not a scientific statement. To be a scientific statement, there would have to be some conceivable event that disproved it -- but there appears to be none. ANY event is hailed by Warmists as proving their contentions. Only if Warmists were able to specify some fact or event that would disprove their theory would it have any claim to being a scientific statement. So the explanation for Warmist beliefs has to be primarily a psychological and political one -- which makes it my field
And, after all, Al Gore's academic qualifications are in social science also -- albeit very pissant qualifications.
A "geriatric" revolt: The scientists who reject Warmism tend to be OLD! Your present blogger is one of those. There are tremendous pressures to conformity in academe and the generally Leftist orientation of academe tends to pressure everyone within it to agree to ideas that suit the Left. And Warmism is certainly one of those ideas. So old guys are the only ones who can AFFORD to declare the Warmists to be unclothed. They either have their careers well-established (with tenure) or have reached financial independence (retirement) and so can afford to call it like they see it. In general, seniors in society today are not remotely as helpful to younger people as they once were. But their opposition to the Warmist hysteria will one day show that seniors are not completely irrelevant after all. Experience does count (we have seen many such hysterias in the past and we have a broader base of knowledge to call on) and our independence is certainly an enormous strength. Some of us are already dead. (Reid Bryson and John Daly are particularly mourned) and some of us are very senior indeed (e.g. Bill Gray and Vince Gray) but the revolt we have fostered is ever growing so we have not labored in vain.
Jimmy Carter Classic Quote from 1977: "Because we are now running out of gas and oil, we must prepare quickly for a third change, to strict conservation and to the use of coal and permanent renewable energy sources, like solar power.
SOME POINTS TO PONDER:
Today’s environmental movement is the current manifestation of the totalitarian impulse. It is ironic that the same people who condemn the black or brown shirts of the pre WW2 period are blind to the current manifestation simply because the shirts are green.
Climate is just the sum of weather. So if you cannot forecast the weather a month in advance, you will not be able to forecast the climate 50 years in advance. And official meteorologists such as Britain's Met Office and Australia's BOM, are very poor forecasters of weather. The Met office has in fact given up on making seasonal forecasts because they have so often got such forecasts embarrassingly wrong. Their global-warming-powered "models" just did not deliver
Another 97%: Following the death of an older brother in a car crash in 1994, Bashar Al Assad became heir apparent; and after his father died in June 2000, he took office as President of Syria with a startling 97 per cent of the vote.
Hearing a Government Funded Scientist say let me tell you the truth, is like hearing a Used Car Salesman saying let me tell you the truth.
A strange Green/Left conceit: They seem to think (e.g. here) that no-one should spend money opposing them and that conservative donors must not support the election campaigns of Congressmen they agree with
David Brower, founder Sierra Club: “Childbearing should be a punishable crime against society, unless the parents hold a government license"
After three exceptionally cold winters in the Northern hemisphere, the Warmists are chanting: "Warming causes cold". Even if we give that a pass for logic, it still inspires the question: "Well, what are we worried about"? Cold is not going to melt the icecaps is it?"
It's a central (but unproven) assumption of the Warmist "models" that clouds cause warming. Odd that it seems to cool the temperature down when clouds appear overhead!
To make out that the essentially trivial warming of the last 150 years poses some sort of threat, Warmists postulate positive feedbacks that might cut in to make the warming accelerate in the near future. Amid their theories about feedbacks, however, they ignore the one feedback that is no theory: The reaction of plants to CO2. Plants gobble up CO2 and the more CO2 there is the more plants will flourish and hence gobble up yet more CO2. And the increasing crop yields of recent years show that plantlife is already flourishing more. The recent rise in CO2 will therefore soon be gobbled up and will no longer be around to bother anyone. Plants provide a huge NEGATIVE feedback in response to increases in atmospheric CO2
Every green plant around us is made out of carbon dioxide that the plant has grabbed out of the atmosphere. That the plant can get its carbon from such a trace gas is one of the miracles of life. It admittedly uses the huge power of the sun to accomplish such a vast filtrative task but the fact that a dumb plant can harness the power of the sun so effectively is also a wonder. We live on a rather improbable planet. If a science fiction writer elsewhere in the universe described a world like ours he might well be ridiculed for making up such an implausible tale.
Greenies are the sand in the gears of modern civilization -- and they intend to be.
The Greenie message is entirely emotional and devoid of all logic. They say that polar ice will melt and cause a big sea-level rise. Yet 91% of the world's glacial ice is in Antarctica, where the average temperature is around minus 40 degrees Celsius. The melting point of ice is zero degrees. So for the ice to melt on any scale the Antarctic temperature would need to rise by around 40 degrees, which NOBODY is predicting. The median Greenie prediction is about 4 degrees. So where is the huge sea level rise going to come from? Mars? And the North polar area is mostly sea ice and melting sea ice does not raise the sea level at all. Yet Warmists constantly hail any sign of Arctic melting. That the melting of floating ice does not raise the water level is known as Archimedes' principle. Archimedes demonstrated it around 2,500 years ago. That Warmists have not yet caught up with that must be just about the most inspissated ignorance imaginable. The whole Warmist scare defies the most basic physics. Yet at the opening of 2011 we find the following unashamed lying by James Hansen: "We will lose all the ice in the polar ice cap in a couple of decades". Sadly, what the Vulgate says in John 1:5 is still only very partially true: "Lux in tenebris lucet". There is still much darkness in the minds of men.
The repeated refusal of Warmist "scientists" to make their raw data available to critics is such a breach of scientific protocol that it amounts to a confession in itself. Note, for instance Phil Jones' Feb 21, 2005 response to Warwick Hughes' request for his raw climate data: "We have 25 years or so invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it?" Looking for things that might be wrong with a given conclusion is of course central to science. But Warmism cannot survive such scrutiny. So even after "Climategate", the secrecy goes on.
Most Greenie causes are at best distractions from real environmental concerns (such as land degradation) and are more motivated by a hatred of people than by any care for the environment
Global warming has taken the place of Communism as an absurdity that "liberals" will defend to the death regardless of the evidence showing its folly. Evidence never has mattered to real Leftists
‘Global warming’ has become the grand political narrative of the age, replacing Marxism as a dominant force for controlling liberty and human choices. -- Prof. P. Stott
The modern environmental movement arose out of the wreckage of the New Left. They call themselves Green because they're too yellow to admit they're really Reds. So Lenin's birthday was chosen to be the date of Earth Day. Even a moderate politician like Al Gore has been clear as to what is needed. In "Earth in the Balance", he wrote that saving the planet would require a "wrenching transformation of society".
For centuries there was a scientific consensus which said that fire was explained by the release of an invisible element called phlogiston. That theory is universally ridiculed today. Global warming is the new phlogiston. Though, now that we know how deliberate the hoax has been, it might be more accurate to call global warming the New Piltdown Man. The Piltdown hoax took 40 years to unwind. I wonder....
Motives: Many people would like to be kind to others so Leftists exploit that with their nonsense about equality. Most people want a clean, green environment so Greenies exploit that by inventing all sorts of far-fetched threats to the environment. But for both, the real motive is generally to promote themselves as wiser and better than everyone else, truth regardless.
Policies: The only underlying theme that makes sense of all Greenie policies is hatred of people. Hatred of other people has been a Greenie theme from way back. In a report titled "The First Global Revolution" (1991, p. 104) published by the "Club of Rome", a Greenie panic outfit, we find the following statement: "In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like would fit the bill.... All these dangers are caused by human intervention... The real enemy, then, is humanity itself." See here for many more examples of prominent Greenies saying how much and how furiously they hate you.
After fighting a 70 year war to destroy red communism we face another life-or-death struggle in the 21st century against green communism.
The conventional wisdom of the day is often spectacularly wrong. The most popular and successful opera of all time is undoubtedly "Carmen" by Georges Bizet. Yet it was much criticized when first performed and the unfortunate Bizet died believing that it was a flop. Similarly, when the most iconic piece of 20th century music was first performed in 1913-- Stravinsky's "Rite of Spring" -- half the audience walked out. Those of us who defy the conventional wisdom about climate are actually better off than that. Unlike Bizet and Stravinsky in 1913, we KNOW that we will eventually be vindicated -- because all that supports Warmism is a crumbling edifice of guesswork ("models").
Al Gore won a political prize for an alleged work of science. That rather speaks for itself, doesn't it?
Jim Hansen and his twin
Getting rich and famous through alarmism: Al Gore is well-known but note also James Hansen. He has for decades been a senior, presumably well-paid, employee at NASA. In 2001 he was the recipient of a $250,000 Heinz Award. In 2007 Time magazine designated him a Hero of the Environment. That same year he pocketed one-third of a $1 million Dan David Prize. In 2008, the American Association for the Advancement of Science presented him with its Scientific Freedom and Responsibility Award. In 2010 he landed a $100,000 Sophie Prize. He pulled in a total of $1.2 million in 2010. Not bad for a government bureaucrat.
See the original global Warmist in action here: "The icecaps are melting and all world is drowning to wash away the sin"
I am not a global warming skeptic nor am I a global warming denier. I am a global warming atheist. I don't believe one bit of it. That the earth's climate changes is undeniable. Only ignoramuses believe that climate stability is normal. But I see NO evidence to say that mankind has had anything to do with any of the changes observed -- and much evidence against that claim.
Seeing that we are all made of carbon, the time will come when people will look back on the carbon phobia of the early 21st century as too incredible to be believed
Meanwhile, however, let me venture a tentative prophecy. Prophecies are almost always wrong but here goes: Given the common hatred of carbon (Warmists) and salt (Food freaks) and given the fact that we are all made of carbon, salt, water and calcium (with a few additives), I am going to prophecy that at some time in the future a hatred of nitrogen will emerge. Why? Because most of the air that we breathe is nitrogen. We live at the bottom of a nitrogen sea. Logical to hate nitrogen? NO. But probable: Maybe. The Green/Left is mad enough. After all, nitrogen is a CHEMICAL -- and we can't have that!
The intellectual Roman Emperor Marcus Aurelius (AD 121-180) must have foreseen Global Warmism. He said: "The object in life is not to be on the side of the majority, but to escape finding oneself in the ranks of the insane."
The Holy Grail for most scientists is not truth but research grants. And the global warming scare has produced a huge downpour of money for research. Any mystery why so many scientists claim some belief in global warming?
For many people, global warming seems to have taken the place of "The Jews" -- a convenient but false explanation for any disliked event. Prof. Brignell has some examples.
Global warming skeptics are real party-poopers. It's so wonderful to believe that you have a mission to save the world.
There is an "ascetic instinct" (or perhaps a "survivalist instinct") in many people that causes them to delight in going without material comforts. Monasteries and nunneries were once full of such people -- with the Byzantine stylites perhaps the most striking example. Many Greenies (other than Al Gore and his Hollywood pals) have that instinct too but in the absence of strong orthodox religious committments they have to convince themselves that the world NEEDS them to live in an ascetic way. So their personal emotional needs lead them to press on us all a delusional belief that the planet needs "saving".
The claim that oil is a fossil fuel is another great myth and folly of the age. They are now finding oil at around seven MILES beneath the sea bed -- which is incomparably further down than any known fossil. The abiotic oil theory is not as yet well enough developed to generate useful predictions but that is also true of fossil fuel theory
Medieval Warm Period: Recent climatological data assembled from around the world using different proxies attest to the presence of both the MWP and the LIA in the following locations: the Sargasso Sea, West Africa, Kenya, Peru, Japan, Tasmania, South Africa, Idaho, Argentina, and California. These events were clearly world-wide and in most locations the peak temperatures during the MWP were higher than current temperatures.
Both radioactive and stable carbon isotopes show that the real atmospheric CO2 residence time (lifetime) is only about 5 years, and that the amount of fossil-fuel CO2 in the atmosphere is
maximum 4%.
How 'GREEN' is the FOOTPRINT of a WIND TURBINE? 45 tons of rebar and 630 cubic yards of concrete
Green/Left denial of the facts explained: "Rejection lies in this, that when the light came into the world men preferred darkness to light; preferred it, because their doings were evil. Anyone who acts shamefully hates the light, will not come into the light, for fear that his doings will be found out. Whereas the man whose life is true comes to the light" John 3:19-21 (Knox)
Against the long history of huge temperature variation in the earth's climate (ice ages etc.), the .6 of one degree average rise reported by the U.N. "experts" for the entire 20th century (a rise so small that you would not be able to detect such a difference personally without instruments) shows, if anything, that the 20th century was a time of exceptional temperature stability.
Recent NASA figures tell us that there was NO warming trend in the USA during the 20th century. If global warming is occurring, how come it forgot the USA?
Warmists say that the revised NASA figures do not matter because they cover only the USA -- and the rest of the world is warming nicely. But it is not. There has NEVER been any evidence that the Southern hemisphere is warming. See here. So the warming pattern sure is looking moth-eaten.
The latest scare is the possible effect of extra CO2 on the world’s oceans, because more CO2 lowers the pH of seawater. While it is claimed that this makes the water more acidic, this is misleading. Since seawater has a pH around 8.1, it will take an awful lot of CO2 it to even make the water neutral (pH=7), let alone acidic (pH less than 7).
In fact, ocean acidification is a scientific impossibility. Henry's Law mandates that warming oceans will outgas CO2 to the atmosphere (as the UN's own documents predict it will), making the oceans less acid. Also, more CO2 would increase calcification rates. No comprehensive, reliable measurement of worldwide oceanic acid/base balance has ever been carried out: therefore, there is no observational basis for the computer models' guess that acidification of 0.1 pH units has occurred in recent decades.
The chaos theory people have told us for years that the air movement from a single butterfly's wing in Brazil can cause an unforeseen change in our weather here. Now we are told that climate experts can "model" the input of zillions of such incalculable variables over periods of decades to accurately forecast global warming 50 years hence. Give us all a break!
Scientists have politics too -- sometimes extreme politics. Read this: "This crippling of individuals I consider the worst evil of capitalism... I am convinced there is only one way to eliminate these grave evils, namely through the establishment of a socialist economy, accompanied by an educational system which would be oriented toward social goals. In such an economy, the means of production are owned by society itself and are utilized in a planned fashion. A planned economy, which adjusts production to the needs of the community, would distribute the work to be done among all those able to work and would guarantee a livelihood to every man, woman, and child." -- Albert Einstein
The Lockwood & Froehlich paper was designed to rebut Durkin's "Great Global Warming Swindle" film. It is a rather confused paper -- acknowledging yet failing to account fully for the damping effect of the oceans, for instance -- but it is nonetheless valuable to climate atheists. The concession from a Greenie source that fluctuations in the output of the sun have driven climate change for all but the last 20 years (See the first sentence of the paper) really is invaluable. And the basic fact presented in the paper -- that solar output has in general been on the downturn in recent years -- is also amusing to see. Surely even a crazed Greenie mind must see that the sun's influence has not stopped and that reduced solar output will soon start COOLING the earth! Unprecedented July 2007 cold weather throughout the Southern hemisphere might even have been the first sign that the cooling is happening. And the fact that warming plateaued in 1998 is also a good sign that we are moving into a cooling phase. As is so often the case, the Greenies have got the danger exactly backwards. See my post of 7.14.07 and very detailed critiques here and here and here for more on the Lockwood paper and its weaknesses.
As the Greenies are now learning, even strong statistical correlations may disappear if a longer time series is used. A remarkable example from Sociology:"The modern literature on hate crimes began with a remarkable 1933 book by Arthur Raper titled The Tragedy of Lynching. Raper assembled data on the number of lynchings each year in the South and on the price of an acre’s yield of cotton. He calculated the correlation coefficient between the two series at –0.532. In other words, when the economy was doing well, the number of lynchings was lower.... In 2001, Donald Green, Laurence McFalls, and Jennifer Smith published a paper that demolished the alleged connection between economic conditions and lynchings in Raper’s data. Raper had the misfortune of stopping his analysis in 1929. After the Great Depression hit, the price of cotton plummeted and economic conditions deteriorated, yet lynchings continued to fall. The correlation disappeared altogether when more years of data were added." So we must be sure to base our conclusions on ALL the data. In the Greenie case, the correlation between CO2 rise and global temperature rise stopped in 1998 -- but that could have been foreseen if measurements taken in the first half of the 20th century had been considered.
Greenie-approved sources of electricity (windmills and solar cells) require heavy government subsidies to be competitive with normal electricity generators so a Dutch word for Greenie power seems graphic to me: "subsidieslurpers" (subsidy gobblers)
Many newspaper articles are reproduced in full on this blog despite copyright claims attached to them. I believe that such reproductions here are protected by the "fair use" provisions of copyright law. Fair use is a legal doctrine that recognises that the monopoly rights protected by copyright laws are not absolute. The doctrine holds that, when someone uses a creative work in way that does not hurt the market for the original work and advances a public purpose - such as education or scholarship - it might be considered "fair" and not infringing.
There are also two blogspot blogs which record what I think are my main recent articles here and here. Similar content can be more conveniently accessed via my subject-indexed list of short articles here or here (I rarely write long articles these days)
NOTE: The archives provided by blogspot below are rather inconvenient. They break each month up into small bits. If you want to scan whole months at a time, the backup archives will suit better. See here or here .....
|
{
"pile_set_name": "Pile-CC"
}
|
Our wonderful soil
the Merino Group rocks are soft, rich in lime and soda, andhigh in clay-forming minerals, lack a coarse fabric, are well supplied with potassium and moderately supplied withphosphorus
“The features of this landscape are the generally treeless dark heavy soils, a sharp escarpment at the junction with the tablelands, deep valleys with concave lower slopes and flat valley floors...The Casterton land-system extends over most of the area between and around Merino, Coleraine and Casterton andseparates the two sub-systems of the Glenelg land-system so that to the north-east, the Glenelg land-system is mainlyof earlier Palaeozoic rocks, whilst to the south-west, it is mainly of later rocks, early Cainozoic capping upperMesozoic. Both Kenley and Boutakoff refer to the soft nature of theMerino Group sediments. From all this, it is clear that the Merino Group rocks are soft, rich in lime and soda, andhigh in clay-forming minerals, lack a coarse fabric, are well supplied with potassium and moderately supplied withphosphorus.. an area of moderate rainfall, where the parent material is Mesozoic sediments, which are calcareous, soda-rich, without an open fabric and containing abundant clay-forming minerals or clay particles, the soils have thefollowing features:
uniformly heavy texture down the profile ; swelling clay throughout and therefore deeply cracking when dry but tightly closed when wet ; strong medium-sized structural units at the surface due to calcium
larger, well-formed units in the subsoil, because of sodium on the exchange complex, the faces of the units being slippery when first wetted.
Related
Post navigation
Tim Entwisle (Gardening Australia: A Change of Seasons) suggests we adopt at least 5 seasons, such is the variability of the Australian climate from region to region. he speaks from Sydney to Jane Edmanson:
“There’s an early spring, which I call Sprinter – August and September, there’s a Sprummer which comes after that for 2 months – October and November. There’s a long summer which goes right from December through to March, a short autumn, a short winter – both just two months long and then you’re back at Sprinter.”
He explains what he means by Sprinter. “Spring in Australia, particularly in southern Australia, comes a bit earlier – it comes in about August. That’s really when we start to see things flowering. It’s the time of year we see change.”
“Sprummer is a season between Sprinter – this early spring – and summer, but it’s a changeable season. One of the interesting things with Sprummer is that it’s quite windy in lots of parts of Australia, you get hot weather and cold weather. It’s a really transition time between Sprinter and Summer.”
|
{
"pile_set_name": "Pile-CC"
}
|
Every year, there's something that unites Glastonbury.
Last year, it was the universal disdain for the Brexit vote, myself waking up to megaphones announcing David Cameron had resigned and Boris Johnson may take the PM position.
This year, though, the people of Worthy Farm have a new hero, one to bring everyone together: Jeremy Corbyn.
Barely a moment goes by without someone chanting the Labour leader's name to the tune of 'Seven Nation Army'.
For instance, waiting for Everything Everything's secret set in William's Green, he crowd erupted into 'Oh, Jeremy Corbyn' rather than the band's name.
At a silent disco, as The White Stripes' song began playing, people began singing Corbyn's name. There's no-one not talking about him.
Normally this sort of worship is held back for founder Michael Eavis, whose picture has often graced flags.
But, having won the young person's vote, Corbyn has become rock'n'roll royalty and Glastonbury's unchallenged king.
|
{
"pile_set_name": "OpenWebText2"
}
|
About Optics & Photonics TopicsOSA Publishing developed the Optics and Photonics Topics to help organize its diverse content more accurately by topic area. This topic browser contains over 2400 terms and is organized in a three-level hierarchy. Read more.
Topics can be refined further in the search results. The Topic facet will reveal the high-level topics associated with the articles returned in the search results.
Abstract
We report forward and backward THz-wave difference frequency generations at 197 and 469 μm from a PPLN rectangular crystal rod with an aperture of 0.5 (height in z) × 0.6 (width in y) mm2 and a length of 25 mm in x. The crystal rod appears as a waveguide for the THz waves but as a bulk material for the optical mixing waves near 1.54 μm. We measured enhancement factors of 1.6 and 1.8 for the forward and backward THz-wave output powers, respectively, from the rectangular waveguide in comparison with those from a PPLN slab waveguide of the same length, thickness, and domain period under the same pump and signal intensity of 100 MW/cm2.
Figures (3)
(a) Photograph of the three 2-D NOSWs fabricated from PPLN for our experiment. Experimental data in this paper were taken from the longest one. (b) Schematic of the forward and backward THz-wave DFG in PPLN NOSW. The pump and signal are initially combined from a distributed feed-back diode laser (DFBDL) and a tunable external cavity diode laser (ECDL), and then boosted up in power by an Erbium-doped fiber amplifier (EDFA) and a pulsed optical parametric amplifier (OPA). A 4K silicon bolometer detects the backward and forward THz waves before and after the PPLN NOSW, respectively.
(a) Measured forward THz-wave tuning curves from the 1-D (dots) and 2-D (crosses) PPLN NOSWs. The dashed and continuous lines are fitting curves of Eq. (4) with Γ = 0.53 and 0.65 cm−1 for the 1-D and 2-D NOSW, respectively, given an attenuation coefficient of 40 cm−1. (b) The Measured THz-wave output power versus pump intensity from the 1-D (squares) and 2-D (circles) NOSWs, indicating some nonlinear THz-wave power enhancement in the 2-D NOSW.
(a) Measured backward THz-wave tuning curves from the 1-D (dots) and 2-D (crosses) PPLN NOSWs. The dashed and continuous lines are fitting curves of Eq. (7) with Γ = 0.34 and 0.44 cm−1 for the 1-D and 2-D NOSWs, respectively, and with an assumed attenuation coefficient of 6 cm−1. (b) The Measured backward THz-wave output power versus pump intensity from the 1-D (squares) and 2-D (circles) NOSWs, indicating enhanced THz-wave power from the 2-D NOSW over the whole range of measurement.
|
{
"pile_set_name": "Pile-CC"
}
|
Coelopellini
Coelopellini is a tribe of kelp flies in the family Coelopidae.
Genera
Amma McAlpine, 1991
Beaopterus Lamb, 1909
Coelopella Malloch, 1933
Icaridion Lamb, 1909Halteres absent and the wings are reduced to strips.New Zealand.
Rhis McAlpine, 1991
This McAlpine, 1991
References
Category:Coelopidae
|
{
"pile_set_name": "Wikipedia (en)"
}
|
Q:
Question concerning a swimwing pool's volumes, and minim size of sides for the given volume
Question:
a volume of the swimming pool must be $25\,\mathrm m^3$. the base is squared. What are the sides of the swimming pool, that we can use the minimum amount of materials?
My working
We know that the base is squared and the volume formula is $V=a\cdot b\cdot h$. So here
$$V = x\cdot x\cdot h = 25 = x^2h$$
Now I thought to find the $x$ value I calculate the area of one of the 5 sides. so:
$A = x \cdot h$
and from the volume formula:
$h = 25/x^2$ so $A = x \cdot \dfrac{25}{x^2}$
and after this I am confused. Any help would be appreciated. Thanks you
Correct answer:
$3.7$, $3.7$ and $1.8$
A:
You are supposed to minimize the area (v.g. $A=\frac{25}x$). One alternative is using calculus, another is observing if you can use a known inequality such as $\frac{25}x>0$ and observing that $\frac{25}x\to0$ when $x\to\infty$. This will make a very wide yet zero-depth pool, which is probably not the answer needed.
You have forgotten the pool's floor, now, adding all four sides of the pool and its floor you have:
$$A=4xh+x^2,$$
and you can replace $h$ so you get:
$$A=\frac{100}x+x^2$$
A know inequality is not likely here, so use calculus to get $x=\sqrt[3]{50}\simeq3.7$.
|
{
"pile_set_name": "StackExchange"
}
|
124 F.Supp.2d 876 (2000)
Michael A. HARRIS, Plaintiff,
v.
UNITED STATES of America, Defendant.
No. CIV.A. 00-5194(JEI).
United States District Court, D. New Jersey.
November 6, 2000.
Michael A. Harris, Fort Dix, NJ, Plaintiff Pro Se.
AMENDMENT TO ORDER DISMISSING PETITION DATED NOVEMBER 1, 2000
IRENAS, District Judge:
It appearing that:
1. In ruling that the petitioner's sentence did not violate the Supreme Court's holding in Apprendi v. New Jersey, 530 U.S. 466, 120 S.Ct. 2348, 147 L.Ed.2d 435, the Court expressly relied on the language of the indictment which specifically charged that petitioner conspired to distribute more than 5 kilograms of cocaine;
2. In the court's holding, but not expressly stated, was the assumption that the trial court charged the jury that the amount of cocaine, i.e., more than 5 kilograms, was an allegation of the offense that had to be found beyond a reasonable doubt;
3. The Court is aware that prior to Apprendi many courts treated language in an indictment alleging amount as being surplusage because 21 U.S.C. § 841(a) purports to define the crime without regard to amount, while § 841(b) treats the amount of drugs as a sentencing factor;
4. This Court has not seen a copy of the jury charge pursuant to which petitioner was convicted. If the charge were to treat the allegation of amount in the indictment as surplusage, a genuine Apprendi issue might be raised and dismissal at this point in the case would be inappropriate; and
5. The judgment of the Court rendered on November 1, 2000, should not become final until the Court has had a chance to review the charge to the jury which was delivered during petitioner's trial in the United States District Court for the Eastern District of Virginia.
Based on the foregoing and good cause appearing,
IT IS on this 6th day of November, 2000,
ORDERED THAT:
1. The Court's Order Dismissing Petition of November 1, 2000, shall not be deemed a final decision of this Court for purposes of 28 U.S.C. § 1291 and Fed.R.Civ.P. 54(a);
2. If petitioner has a copy of the jury charge pursuant to which he was convicted, he shall forward a copy of that charge forthwith to the Court and a copy of that charge to Paul Blaine, Assistant United States Attorney, 401 Market Street, 4th Floor, P.O. Box 2098, Camden, NJ 08101;
3. If petitioner does not produce a copy of the jury charge as provided in the second decretal paragraph, this Court directs the United States Attorney to procure a copy of the jury charge and to file it with the Court and serve it on the petitioner no later than December 18, 2000.
|
{
"pile_set_name": "FreeLaw"
}
|
Q:
Query to obtain the maximum primary key ID of a table for each unique value of the customerID column in the same table
I'm trying to run a query that takes obtains the maximum QuestionnaireId value for each unique value in column VendorId.
So for example from this table:
QuestionnaireId VendorId
1 10003
2 10004
3 10004
4 10006
5 10005
6 10007
7 10005
8 10005
I would obtain:
QuestionnaireId VendorId
1 10003
3 10004
8 10005
4 10006
6 10007
I'm using the following code to get the maximum QuestionnaireId, but need another statement alongside it to get the unique VendorIds as well. Note that the statement I've included is just last segment of a large Join function to combine all of my tables into one.
WHERE Questionnaire.QuestionnaireId = (SELECT MAX(Questionnaire.QuestionnaireId) FROM Questionnaire)
A:
Just use aggregation:
select max(q.QuestionnaireId) as QuestionnaireId, VendorId
from Questionnaire
group by VendorId;
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I reliably check client identity whilst making DCOM calls to a C# .NET 3.5 Server?
I have an old Win32 C++ DCOM Server that I am rewriting to use C# .NET 3.5. The client applications sit on remote Windows XP machines and are also written in C++. These clients must remain unchanged, hence I must implement the interfaces on new .NET objects.
This has been done, and is working successfully regarding the implementation of the interfaces, and all of the calls are correctly being made from the old clients to the new .NET objects.
However, I'm having problems obtaining the identity of the calling user from the DCOM Client. In order to try to identify the user who instigated the DCOM call, I have the following code on the server...
[DllImport("ole32.dll")]
static extern int CoImpersonateClient();
[DllImport("ole32.dll")]
static extern int CoRevertToSelf();
private string CallingUser
{
get
{
string sCallingUser = null;
if (CoImpersonateClient() == 0)
{
WindowsPrincipal wp = System.Threading.Thread.CurrentPrincipal as WindowsPrincipal;
if (wp != null)
{
WindowsIdentity wi = wp.Identity as WindowsIdentity;
if (wi != null && !string.IsNullOrEmpty(wi.Name))
sCallingUser = wi.Name;
}
if (CoRevertToSelf() != 0)
ReportWin32Error("CoRevertToSelf");
}
else
ReportWin32Error("CoImpersonateClient");
return sCallingUser;
}
}
private static void ReportWin32Error(string sFailingCall)
{
Win32Exception ex = new Win32Exception();
Logger.Write("Call to " + sFailingCall + " FAILED: " + ex.Message);
}
When I get the CallingUser property, the value returned the first few times is correct and the correct user name is identified, however, after 3 or 4 different users have successfully made calls (and it varies, so I can't be more specific), further users seem to be identified as users who had made earlier calls.
What I have noticed is that the first few users have their DCOM calls handled on their own thread (that is, all calls from a particular client are handled by a single unique thread), and then subsequent users are being handled by the same threads as the earlier users, and after the call to CoImpersonateClient(), the CurrentPrincipal matches that of the initial user of that thread.
To illustrate:
User Tom makes DCOM calls which are handled by thread 1 (CurrentPrincipal correctly identifies Tom)
User Dick makes DCOM calls which are handled by thread 2 (CurrentPrincipal correctly identifies Dick)
User Harry makes DCOM calls which are handled by thread 3 (CurrentPrincipal correctly identifies Harry)
User Bob makes DCOM calls which are handled by thread 3 (CurrentPrincipal incorrectly identifies him as Harry)
As you can see in this illustration, calls from clients Harry and Bob are being handled on thread 3, and the server is identifying the calling client as Harry.
Is there something that I am doing wrong?
Are there any caveats or restrictions on using Impersonations in this way?
Is there a better or different way that I can RELIABLY achieve what I am trying to do?
All help would be greatly appreciated.
A:
OK, so I've taken a different appropach, and finall come up with a method that seems to work (tested for 8 different remote users).
I've ditched the Impersonation route in favour of ClientBlankets...
[DllImport("ole32.dll")]
static extern int CoQueryClientBlanket(out IntPtr pAuthnSvc, out IntPtr pAuthzSvc,
[MarshalAs(UnmanagedType.LPWStr)] out StringBuilder pServerPrincName, out IntPtr
pAuthnLevel, out IntPtr pImpLevel, out IntPtr pPrivs, out IntPtr pCapabilities);
public static string CallingUser
{
get
{
IntPtr pAthnSvc = new IntPtr();
IntPtr pAthzSvc = new IntPtr();
StringBuilder pServerPrincName = new StringBuilder();
IntPtr pAuthnLevel = new IntPtr();
IntPtr pImpLevel = new IntPtr();
IntPtr pPrivs = new IntPtr();
IntPtr pCaps = new IntPtr(4);
string sCallingUser = string.Empty;
try
{
CoQueryClientBlanket(out pAthnSvc,
out pAthzSvc,
out pServerPrincName,
out pAuthnLevel,
out pImpLevel,
out pPrivs,
out pCaps);
}
catch (Exception ex)
{
Logger.Write(ex.Message);
}
finally
{
sCallingUser = System.Runtime.InteropServices.Marshal.PtrToStringAuto(pPrivs);
}
return sCallingUser;
}
}
Using CoCreateClientBlanket seems to have the desired results, and I'm able to reliably obtain the identity of the calling user everytime, regardless of which thread is used to process the message.
|
{
"pile_set_name": "StackExchange"
}
|
Central Florida is best known for a high-powered offense, but the Knights are capable of big plays on defense, too.
Richie Grant and Tre'mon Morris-Brash had first-quarter defensive touchdowns, Dillon Gabriel led three third-quarter scoring drives and UCF beat Marshall 48-25 in the Gasparilla Bowl on Monday.
“We finished the season the right way,” UCF coach Josh Heupel said. “The turnovers were critical. I thought defensively we started about as fast as you can.”
Grant had a 39-yard interception return on Marshall's third play from scrimmage 56 seconds into the game, and Morris-Brash recovered a fumble and ran it 55 yards for a score that helped UCF go up 21-0 with 7 minutes left in the first quarter.
Gabriel threw a 35-yard touchdown pass to Otis Anderson, connected on a 75-yard score with Marlon Williams, and added a 3-yard TD run as UCF went ahead 45-22 with 6:39 remaining in the third.
Gabriel completed 14 of 24 passes for 260 yards as the Knights (10-3) reached 10 or more wins in a school-record third consecutive season. Williams caught seven passes for 132 yards and Greg McCrae had 80 rushing yards on 14 carries.
“We're happy with the 10-3, but at the same time we know what we could have done,” Gabriel said.
Isaiah Green went 9 of 23 for 173 yards with a TD and had a rushing touchdown for Marshall (8-5). Brenden Knox, the Conference USA MVP, had 103 yards on 26 carries.
“I thought our kids fought their tail off,” Marshall coach Doc Holliday said. “They fought the entire game.”
Green had a 3-yard rushing touchdown and hit Willie Johnson on a 70-yard TD pass, and Justin Rohrwasser made a 50-yard field goal during the third quarter.
Marshall got within 21-7 with 12:28 to go in the second when Micah Abraham picked off backup quarterback Darriel Mack Jr.'s pass and returned it 75 yards for a touchdown.
Abraham's father, Donnie, played his home games with the Tampa Bay Buccaneers (1996-01) at Raymond James Stadium, the site of the Gasparilla Bowl.
McCrae had a 26-yard TD run early in the first and Dylan Barnas made a 36-yard field goal as time expired as UCF took a 24-7 halftime advantage.
THE TAKEAWAY
UCF: The Knights had no letdown after playing in New Year’s Day bowl games the previous two seasons, beating Auburn two years ago in the Peach Bowl and losing to LSU in the last season’s Fiesta Bowl.
“You look at this season, there's goals that we didn't accomplish, right?" Heupel said. “But I hope we never get to the point as a program and a fan base where fans and outside voices aren't really appreciative of what our kids do every single day.”
Marshall: It was the first bowl loss for the Thundering Herd under Holliday, who entered 6-0 in bowls at the school. Knox, a sophomore, topped 100 yards rushing for the seventh time.
STRANGE STATS
UCF lead 21-0 after the first quarter despite being outgained 133-86. Marshall, however, had three turnovers. The Knights had a 168-0 yardage advantage during the second quarter and were outscored 7-3.
STANDOUT ‘D’
UCF linebacker Nate Evans finished with 12 tackles and a sack. Defensive back Antwan Collier made six tackles, recovered two fumbles and had an interception.
UP NEXT
UCF: Hosts North Carolina on Sept. 3 to open the 2020 season.
Marshall: Begins the 2020 season at East Carolina on Sept. 5.
|
{
"pile_set_name": "OpenWebText2"
}
|
Alex Iwobi has always been a player known for his potential, rather than his performances; a fan favourite whose Instagram stories and Big 17 hashtag see more minutes than he does.
Opinion of the player has always varied, for some he’s the Academy product that Arsenal is missing, a powerful forward with quick feet, for others nothing more than the team’s inadequate replacement for Alexis Sánchez.
At the moment, he’s the Gunners golden boy. Standout performances against Brentford in the Carabao Cup and an impressive 27 minutes off the bench against Watford have supporters calling for his inclusion in the Gunners starting eleven, and with good reason; he’s perfect for Unai Emery’s Arsenal.
In the Gunners typical set-up, Aaron Ramsey, Mesut Özil, Pierre-Emerick Aubameyang, and Alexandre Lacazette all tend to want to occupy the same space in the middle of the pitch. Lacazette – the only one playing in his natural position – usually finds himself slightly higher up the pitch than the other three, but the congestion in the middle of the pitch tends to lead to an isolated striker who rarely gets the service needed for a dominant attack.
It’s a pattern that rang true against Watford, and that has been consistent throughout the season.
Arsenal's #passmap from their home win over Watford shows Xhaka as the impressive ball progressor that he is.
Torreira covered midfield a bit deeper.
Özil, Aubameyang & Ramsey operated on average in the same zone, but with lots of mobility and switching.#AFC pic.twitter.com/uEhNFkIxRA — Between The Posts (@BetweenThePosts) September 30, 2018
Enter Alex Iwobi, the wide-man is just that: wide. He brings balance to the Gunners attack that simply isn’t there with Emery’s first choice attacking four. Although it was early in the season, Arsenal’s dominant offensive performance against Chelsea – despite the loss – was a perfect display of what that looks like.
Against Chelsea, Iwobi found himself on the end of two big chances, and he converted one for his only goal of the season. He also created two big chances himself, adding width and dynamism to the Gunners attacking three. It was more of the same against Brentford where Iwobi looked like the team’s best forward for most of the first half, and again the Nigerian international found the back of the net — albeit from an offside position.
Unlike the pass map against Watford, there is a clear structure to the Gunners side with Iwobi in it; a balance to the attack that doesn’t just present itself down the middle. It also offers clear, defined roles for the players that maximize their tools; for Iwobi that’s his distribution.
Iwobi’s inclusion has consistently come at the cost of Pierre-Emerick Aubameyang this season, often substituting the Gabonese forward midway through the second half, but against Watford Emery moved Aubameyang to the right-hand side, allowed Özil to slide central, and fit Iwobi on the left.
It worked wonders.
Aubameyang was able to provide a willing offensive outlet on the right flank for Héctor Bellerín that stayed on the right flank. Özil, no longer shifted wide, put in his best performance of the season, notching his third goal in all competitions and amassing 71 touches and making 64 passes, 31 of which were in the offensive third.
All three of those numbers are season highs for Arsenal’s under-performing creative outlet.
Iwobi did more than provide an opportunity for others to succeed. He notched an MLS assist, orchestrating the counter that lead to Özil’s goal, and his 25 touches were just two behind Aaron Ramsey – whom he substituted in the 62 minute.
After the match against Watford, Emery said “I am very happy with him because of his mentality. He can play right or left, maybe he can do more one on one. Also, I want him to stay nearer the box to score and find assists for his teammates. This is the way for him.”
Iwobi matched the sentiment, noting “The main thing that the boss stresses with me is to get in the positions to score. I scored against Brentford but I was offside. I had another chance too, and I just need to convert them. He’s happy with me getting into those positions and the next step is to convert the chances.”
In just 168 minutes, Iwobi has created 3 big chances, scored once, and added an assist. Not only is he getting into the positions Emery wants, but he’s also making the most of his opportunities once he’s there.
Iwobi is making the case against starting him hard enough to deny, but when combined with the boost he provides Özil, Aubameyang and Héctor Bellerín, Big 17 is making a case for himself, in a big way.
A Canadian video journalist, Colton Praill is quick with a pen or a microphone. He’s a podcaster, writer and visual storyteller covering Arsenal, English football and the world around him.
Like this: Like Loading...
|
{
"pile_set_name": "OpenWebText2"
}
|
Q:
Return counter in db.collection.count()'s callback doesn't work, why?
I want to track the number of documents I have within a collection in a node.js server
using mongodb driver. I can insert, delete and update propperly but when I try to count, it works until I try to store that value, moment in which it returns nothing.
Here is my code:
var db_collection = db.collection('collection');
var countCollections = function () {
var response_count_collections = null;
db_mensajes.count(function(err,number_of_collections){
if (err){
console.log(err);
} else {
response_of_collections = number_of_collections;
console.log('Number of collections: '+number_of_collections);
}
});
return response_count_collections;
};
It logs the number_of_collections correctly but it doesn't return me the right value. In my example it returns null (how I defined my var response_count_collections;) and if I
try to return number_of_collections; within the db_collections.count()'s callback like this:
var db_collection = db.collection('collection');
var countCollections = function () {
var response_count_collections = null;
db_mensajes.count(function(err,number_of_collections){
if (err){
console.log(err);
} else {
console.log('Number of collections: '+number_of_collections);
return number_of_collections;
}
});
};
It returns me "undefined". Is there any way to achieve what I want?
A:
Its because it returns the variable before the function is completely executed.
If you want it to be asynchronous then you will have to learn to control the flow of the program using some node modules like async.
Update:
Have a look at this question, it shows how to return value from an async function using callback correctly.
|
{
"pile_set_name": "StackExchange"
}
|
This week David Falk was behind the camera at Evergreen Premier League matches in Vancouver and Silverdale. Both times he was able to capture goal scoring sequences. The first was a bang-bang play at the endline that saw Jamoy Stevens score for Vancouver Victory FC. The second was a majestic free kick from Wenatchee United’s Adrian Espinoza. Enjoy both full sequences below!
See the complete galleries: Vancouver Victory v. Spokane Shadow and WestSound FC v. Wenatchee United
Jamoy Stevens goal sequence
Adrian Espinoza free kick goal
goalWA.net Local Soccer News is sponsored by Pro Roofing Northwest, Kirkland, Bellevue, Seattle, Redmond, Woodinville, Federal Way, Everett, Snohomish, Issaquah, Renton, Kent, Bothell, Edmonds Washington roofing company.
|
{
"pile_set_name": "OpenWebText2"
}
|
Background {#Sec1}
==========
People-centred approaches to mental health care are increasingly promoted in low- and middle-income countries (LMICs) through global mental health policy, practice and research directives \[[@CR1], [@CR2]\]. The World Health Organisation defines people-centred health care as: "*an approach to care that consciously adopts the perspectives of individuals, families and communities, and sees them as participants as well as beneficiaries of trusted health systems that respond to their needs and preferences in humane and holistic ways*." \[[@CR3]\]. People-centred health care is proposed to apply to people with all types of health conditions.
Intersectoral collaboration is one of the key strategies for achieving people-centred health care in the World Health Organisation Framework on Integrated People-Centred Health Services (WHO IPCHS) \[[@CR3]\]. There is no definitional consensus on intersectoral collaboration. In line with recent conceptual developments in global health, we adopt a broad definition of intersectoral collaboration for mental health as: any planning, information and resource sharing to institute mental health care between organisations from different sectors (i.e. public, private, not-for-profit) and/or across thematic areas (i.e. health, social services) \[[@CR4], [@CR5]\]. This definition encompasses collaborations for mental health service referrals and back referrals, as well as for the purposes of mental health system governance, including the involvement of mental health service user and family organisations.
Emerging from the 1978 Declaration of Alma Ata \[[@CR6]\], and subsequent action to embed Health in All Policies (HiAP) \[[@CR7], [@CR8]\], intersectoral collaboration underpins current global movements to achieve health equity and sustainable development \[[@CR9]\]. Intersectoral collaboration is fundamental to the provision of people-centred mental health care because many of the sociocultural and economic determinants of mental health and wellbeing lie outside the health sector \[[@CR10]--[@CR12]\]. Furthermore, in many LMICs, people rely on customary (traditional, religious or faith-based) or private mental health providers, particularly in the absence of well-developed public health infrastructure \[[@CR13]--[@CR15]\].
Intersectoral collaboration for mental health has been shown to be effective. A systematic review of research from high-income countries (HICs) revealed that collaboration between mental health and non-clinical services improves clinical recovery and other outcomes for mental health service users (e.g. employment, housing stability), as well as system outcomes (e.g. service and cost efficiency) \[[@CR16]\]. Such collaborations included service co-location, joint interorganizational training and use of a shared information system between services \[[@CR16]\].
However, intersectoral collaboration is difficult to achieve. Collaboration is often challenged by systemic factors (e.g. inadequate resourcing, lack of shared interorganisational structures, goals, and trust) and interpersonal factors (e.g. poor communication) \[[@CR5], [@CR17]--[@CR20]\]. In many LMICs, partnerships are challenged because Ministries of Health are hierarchically structured and seen as solely responsible for health activities \[[@CR19]\]. Hence, there may be feasibility issues for promoting intersectoral collaboration for mental health in LMICs.
Despite the global imperative to increase the people-centredness of mental health care in all countries \[[@CR2], [@CR3]\], there is a dearth of research investigating intersectoral collaboration for mental health care across the multitude of sociocultural and resource settings that constitute the grouping LMICs. To fill this knowledge gap, this study was conducted in Timor-Leste, a LMIC in South-East Asia in the process of strengthening its public mental health system.
Study setting: Timor-Leste {#Sec2}
--------------------------
Timor-Leste is a small island nation of 1.3 million people \[[@CR21]\]. Promoting mental wellbeing is a government priority in Timor-Leste due to a range of sociocultural and economic risk factors for distress including poverty, unemployment, and past and continuing experiences of violence \[[@CR22], [@CR23]\]. Rigorous estimates of the population prevalence of mental illness are limited and inconsistent. The only household survey of mental illness in Timor-Leste was conducted in 2004 with 1544 adults in the aftermath of the conflict, and estimated an adjusted 5.08% population prevalence of mental disorders \[[@CR24]\]. However, this estimate is now 15 years old and likely does not represent the burden of mental illness in present day, more stable Timor-Leste. As well, their validity is weakened by the predominantly urban sample and the use of assessment tools that may have missed culturally meaningful idioms of mental distress. The 2016 Global Burden of Disease study estimates a 11.6% prevalence of mental and substance use problems \[[@CR25]\].
Multiple stakeholders are involved in mental health care in Timor-Leste. Family and civil society including customary healers are the main form of support for Timorese people with mental health problems \[[@CR26], [@CR27]\]. Within government, responsibility for mental health is split between the Ministry of Health (MoH) and the Ministry of Social Solidarity and Inclusion (MSSI). MoH coordinates the integration of a basic package of mental health care into primary health care, and the training and deployment of the mental health workforce \[[@CR28]\]. Community-based mental health care is mainly provided by mental health nurses, and there is one psychiatrist and one psychologist working in the National Hospital. MSSI coordinates the 2012 National Disability Policy \[[@CR29]\], and the social protection program and disability pension, which some people with psychosocial disability resulting from mental illness receive. Ministries of Education and Justice are involved peripherally with the institution of education and legal systems that some people with mental illness have contact with. NGOs provide a psychosocial rehabilitation service (Pradet), long-term stay service (Klibur Domin) and inpatient psychiatric service (São João de Deus, Laclubar). Social and violence support NGO services including for victims of family violence and legal assistance are also accessed by some people with mental health problems. International development organisations provide financial and in-kind support to MoH, MSSI and NGO service providers through health, and disability- and gender-inclusive development activities \[[@CR30]\].
Intersectoral collaboration is a key strategy of the yet-to-be implemented Timor-Leste National Mental Health Strategy 2018--2022, which aims to provide "*comprehensive culturally*-*appropriate community*-*based mental health and social services*" \[[@CR22]\]. To achieve this, the National Strategy specifies collaborations between mental health, general health, maternal and child health and social support services.
However, it is not known how prevailing collaboration is structured and operates between the different stakeholders involved in mental health care in Timor-Leste. This is important to understand given the limited human and financial resources for mental health in Timor-Leste, which have been identified as barriers to collaboration in other settings. Specifically, there are only three mental health professionals per 100,000 people, and less than 0.29% of the 2018 government budget was allocated to the Public Health Directorate (including mental health) \[[@CR31]\].
Hence, this study aimed to investigate intersectoral collaboration for people-centred mental health care in Timor-Leste's mental health system. The study aimed to answer the following research questions:To what extent is intersectoral collaboration for mental health outlined in existing government, NGO, civil society and international agency documents in Timor-Leste?What are the perspectives and experiences of multiple stakeholders about intersectoral collaboration for mental health?What is the strength and structure of intersectoral collaboration in the national mental health system?
This research builds upon previous research by the authors that informed the Timor-Leste National Mental Health Strategy \[[@CR27]\], and was conducted to inform the implementation of this Strategy.
Methods {#Sec3}
=======
Study sites {#Sec4}
-----------
Dili, the capital of Timor-Leste, was selected as a research site to understand intersectoral collaboration across national government ministries, the national hospital, NGOs (including Pradet and Klibur Domin), and international organisations. Baucau municipality in Eastern Timor-Leste, and its administrative post, Venilale, provided a comparison of collaborative processes at sub-national levels. Baucau municipality is host to the country's second largest city where there are sub-national government ministry offices, a municipality referral hospital providing mental health care, and mental health and social support NGO service providers \[[@CR32]\]. Venilale is a mountainous rural township which has an administration office and a government health clinic providing outreach mental health care to the surrounding villages. Laclubar administrative post in Manatuto municipality was also included as a data collection site because it hosts the São João de Deus inpatient mental health facility.
Design {#Sec5}
------
This research employed a mixed-methods convergent design to investigate intersectoral collaboration for people-centred mental health care in Timor-Leste using qualitative data derived from in-depth interviews and document review, and quantitative social network analysis. The social network analysis findings enhanced understandings derived from document review and interview data to provide a holistic and rigorous picture of intersectoral collaboration that would not have been possible using only the qualitative data \[[@CR33]\]. This article reports findings from the third component of a larger study investigating people-centred mental health care in Timor-Leste \[[@CR34]\].
### Document review {#Sec6}
A review of electronic documents was conducted to provide information about the policy context, plans and implementation of intersectoral collaboration for mental health care in Timor-Leste (research question 1). Documents reviewed were produced between 2002 and 2019 by government, NGO, civil society and international organisations, including strategic plans, policies, legislation, and reports (*n *= 33). Key documents were sourced by conducting internet or reference list searches between September 2017 and March 2019 or were provided by participants during data collection. Information emerging from the document review was interrogated further during interviews, and compared against interview data during analysis.
### Semi-structured interviews {#Sec7}
In-depth semi-structured interviews were conducted to ascertain the experiences and opinions of multiple stakeholders about intersectoral collaboration for mental health (research question 2). Interviews were conducted with 85 adults (≥ 18 years) who were: (1) mental health service users (*n* = 20) and their families (*n* = 10); (2) government decision makers (*n* = 10); (3) mental health and social service providers (*n* = 23); (4) civil society (*n* = 9); and (5) other groups including international development organisations involved in mental health or social policy or service delivery (*n* = 13, see Table [1](#Tab1){ref-type="table"}). Mental health service users were defined as adults aged 18 years or older who had used health or social support services related to their mental health and were able to provide informed consent and respond to interview questions. In the absence of a Timorese culturally-validated psychiatric diagnostic tool, the definition of mental illness was intentionally kept broad to capture the range of people who were considered to use services for mental illness. Mental health service users and their families were recruited through the administrative post health staff in Venilale and NGO service providers in Dili. Participants in groups 2 to 5 were recruited purposively by First Author TH based on their positions in government, NGO, international development and civil society organisations and institutions. In the first instance, participants were identified through a document review and the existing research collaborations that supported the development of the National Mental Health Strategy. Snowball sampling was used to identify and recruit subsequent participants who were mentioned in interviews and not already identified. Data were collected from September 2017 to August 2018.Table 1Participant demographics.Table adapted from \[[@CR62]\]Mental health service usersFamily membersService providersDecision makersCivil societyOther community members and organisationsTotalN2010231091385**n%n%n%n%n%n%N%**Age 26--4012602201043.5110444.4646.23541.2 41--55630550834.8880333.3538.53541.2 56--70210330521.7110222.2215.41517.6Gender Male7357701356.5990888.9753.85160.0 Female13653301043.5110111.1646.23440.0Education None1522000.00000.000.033.5 Primary115555000.00000.000.01618.8 Secondary42011014.300444.4323.11315.3 Tertiary4202202295.710100555.61076.95362.4Location Dili525001565.2550666.7969.24047.1 Baucau210110417.444000.0323.11416.5 Venilale1365990313.0110333.317.73035.3 Laclubar000014.30000.000.011.2We adopt WHO's definition of civil society as individuals and organisations working for "*collective action around shared interests, purposes and values, generally distinct from government and commercial for*-*profit actors*" \[[@CR65]\]. Civil society includes community groups, social movements and advocacy groups. Civil society also includes local chiefs and customary healers who may not be mobilised in formal groups. Other community members and organisations include representatives from international development agencies, law enforcement, universities, and other people with relevant knowledge but who do not work specifically in mental health in Timor-Leste
Interviews were semi-structured using an interview guide tailored to participant type. The interview guide was structured around the five strategies of the WHO Framework on Integrated People-Centred Health Services (2016): engage service users; strengthen governance; re-orient the model of care; forge intersectoral collaboration; and foster an enabling environment. This article reports findings pertaining to intersectoral collaboration. The interview guide contained open-ended questions and quantitative measures of collaboration. Open-ended interview questions enquired about the experiences, structures and processes of mental health service delivery and policy making (see Interview guides in Additional file [1](#MOESM1){ref-type="media"}). Quantitative measures are outlined below in "[Descriptive social network analysis](#Sec8){ref-type="sec"}". The interview guides were translated, their meaning checked, and piloted before data collection commenced. Author TH conducted all interviews directly in English, or with a trained interpreter in Tetum or Portuguese (national languages) or several Baucau local languages (Makassai and Cairui). Interviews lasted on average 47 min (range 7 to 111 min), and were in private places, including workplaces, health facilities or community houses.
Framework analysis, an inductive and deductive qualitative data analysis method \[[@CR35]\], was used to analyse interview data in NVivo version 12 \[[@CR36]\]. Author TH conducted the framework analysis and an independent researcher validated coding. Author TH employed a combination of emergent themes and a priori codes (e.g. enabling factors, barriers). This article reports three main themes and 15 sub-themes relevant to intersectoral collaboration. Preliminary results were presented back to participants and interested parties in communities in Dili and Venilale to verify the authors' interpretation of the data.
### Descriptive social network analysis {#Sec8}
Intersectoral collaboration, as well as being difficult to achieve, is difficult to measure with traditional methods. Intersectoral collaboration can be considered a type of networked relationship \[[@CR17]\]. Social network analysis (SNA), a complex systems discipline and quantitative methodology, is widely used in HICs to measure health policy networks \[[@CR37]--[@CR40]\]. SNA has more recently been applied in LMICs \[[@CR41]--[@CR45]\] in line with calls to adopt systems thinking to understand health system governance in these contexts \[[@CR19]\]. For example, Hagaman et al. demonstrated the utility of SNA for understanding surveillance systems for suicide in Nepal \[[@CR45]\]. Prior to our study, SNA had not been used to investigate both mental health service and system governance networks in a LMIC.
We used SNA to measure the strength and structure of connections between organisations operating at the national level of the mental health system in Timor-Leste (research question 3). SNA complemented the understanding about intersectoral collaboration garnered through qualitative data by examining the role of each organisation in the mental health network, as well as properties of the overall network \[[@CR46]\].
SNA methods are summarised in Table [2](#Tab2){ref-type="table"}. For SNA, the network was defined as 27 organisations from government, NGO, civil society and other organisations working in national mental health and social care (participant categories 2 to 5). Organisations were identified through previous research informing the National Mental Health Strategy 2018--2022 \[[@CR27]\] and the document review. There were insufficient numbers of mental health organisations at sub-national levels to conduct SNA. As stated above, stakeholders were recruited using purposive and snowball sampling methods because SNA seeks to understand collaborative patterns between specific stakeholders and randomisation is unlikely to incorporate all central stakeholders \[[@CR47]\].Table 2Stages of social network analysis.Table adapted from \[[@CR50]\]StageProcesses and measures1. Defined the networki. Listed all organisations involved in the national mental health system based on previous research and document reviewii. Supplemented list with additional organisations identified through snowballing during interviews2. Defined the relationships between organisationsiii. Displayed the list of organisations in a tableiv. During interviews, asked participants with knowledge of their organisation about the relationship between their organisation and other organisationsv. Two quantitative indicators were collected. Participants rated the frequency of contact and frequency of resource sharing over the preceding yearvi. Once all responses were received, scores from each organisation were combined into a single matrix for each key indicator3. Analysed the structure of the system using UCINET to generate measuresNetwork metrics i. Density ii. Average degree iii. Average distanceOrganisation metrics i. In-degree centrality ii. Betweenness
SNA questions were embedded in interviews with one participant from each national organisation with knowledge of operations (i.e. manager level). These participants were presented with a list of organisations and asked about connections between their organisation and these listed organisations. These participants also nominated any missing organisations that they worked with. This 'recall list' is a validated technique for prompting participants to accurately report connections \[[@CR48]\].
Two widely-used quantitative SNA indicators were collected. Participants rated the frequency of contact/information sharing (e.g. meetings, phone calls, emails) and the frequency of resource sharing (e.g. funding, building space, transport, printing, materials) between their organisation and others over the preceding year on a six-point scale (*none*, *yearly*, *quarterly*, *monthly*, *weekly*, *daily*). Resource sharing is assumed to indicate a stronger degree of relationship than information sharing \[[@CR5]\]. If there was overlap in categories (e.g. car sharing to transport patients involved both contact and resource sharing), participants rated contact and resource sharing separately.
Descriptive quantitative analysis of the two SNA indicators was conducted using UCINET software \[[@CR49]\]. SNA data resulted in one matrix for demand and a second matrix for supply of information/resource sharing \[[@CR50]\]. The rows in each matrix corresponded to the 27 organisations and were inputted with the frequency rating for information/resource sharing such that 0 indicated no relationship and 1--5 indicated an ascending order of connection. For each indicator, a network dataset was produced by combining these demand and supply matrices into a single matrix \[[@CR48]\]. UCINET mapped each network and generated network-level and organisation-level metrics \[[@CR49]\] (see Table [3](#Tab3){ref-type="table"} for a definition of each metric). Data cleaning was conducted in Microsoft Excel. Missing values for three organisations who were not interviewed were replaced with connection ratings reported by organisations who did respond \[[@CR51]\].Table 3Definition of key network and organisation metrics.Table contents adapted from \[[@CR47]\]MetricDefinition and mental health system interpretationNetwork metrics DensityRatio of the number of connections to the number of possible connections in the network. A dense network indicates that organisations are well-connected and information/resources flow rapidly between them Average degreeAverage number of relationships in the network. Like density, this assumes that more connections indicate greater information/resource flow between organisations Average distanceNumber of connections that separate two organisations, whereby an average distance of 1 indicates that all organisations are directly connected Degree centralisationRatio of the sum of the differences in centrality between the most central organisation and all other organisations in the network to the largest possible sum of these differences. Higher values indicate a more centralised networkOrganisation metrics In-degree centralityNumber of direct connections an organisation has with other organisations as reported by partnering organisations. A measure of the importance of each organisation. Identifies which organisations act as stewards organisations in the network BetweennessExtent that an organisation is located on the path between other organisations (indirect connections). The extent that an organisation is a bridge between other organisations
Ethics {#Sec9}
------
Verbal or written consent (depending on participant preference and literacy) was provided before interviews commenced and were audio recorded. Participants responding to SNA questions provided separate consent to include their organisation. Participant quotations and organisations in SNA were de-identified to fulfill the governing ethics agreements. Ethical approval was granted by University of Melbourne Human Ethics Sub-Committee (HESC: 1749926) and National Institute of Health in Timor-Leste (1070MS-INS/DE-DP/CDC-DEP/IX/2017).
Results {#Sec10}
=======
The results section presents a synthesis of qualitative findings from the document review and interviews, and separately reports social network analysis findings. The mixed-methods findings are integrated in "[Discussion](#Sec19){ref-type="sec"}". Table [4](#Tab4){ref-type="table"} presents the framework analysis themes and sub-themes for intersectoral collaboration from interviews and documents (research questions 1 and 2). See Additional file [2](#MOESM2){ref-type="media"} for a summary table of extant government strategy, policy and legal documents related to mental health and psychosocial disability in Timor-Leste (research question 1).Table 4Framework analysis themes and sub-themes for intersectoral collaborationThemeSub-themes1.1 Enabling factorsImportance of intersectoral collaborationResponsibility of allAddress broader determinants of mental healthDifferent roles for health and social sectors1.2 BarriersSocial importance of mental healthResource restrictionsCompeting demands on government1.3 Intersectoral collaboration for policy making and planningMinisterial working groupsSocial sector working groups1.4 Intersectoral collaboration for service deliveryCustomary healersGovernment health providersNGO service providersAuthoritiesSocial sectorDisabilityViolence support organisations
Interviews and documents: perspectives and experiences about and documented approaches to intersectoral collaboration {#Sec11}
---------------------------------------------------------------------------------------------------------------------
### Enabling factors for intersectoral collaboration {#Sec12}
The importance of intersectoral collaboration for mental health was a prominent theme across participant interviews and documents. Intersectoral collaboration between ministries, public institutions, development partners, civil society and communities was a key strategy in the National Mental Health Strategy of Timor-Leste 2018--2022 \[[@CR22]\], National Disability Policy 2012 \[[@CR29]\], and Disability Action Plan (unapproved) \[[@CR52]\]. One MoH representative advocated for: "*socialis\[ing\] all the other institutions and NGOs so they know that they can't only walk their part, \[mental health is\] not only \[the responsibility of\] Ministry of Health.*" (Decision maker \#5, 36--40 years, male). A Baucau service provider explained that intersectoral collaboration was important because of the broader drivers of mental health:"*Mental health is not only the responsibility of the health \[sector\]. For example, people have problems with food, with money, so we all need to work together to collaborate to provide treatment for people with mental health problems. The community, the families and the local authorities need to work together.* (Service provider \#4, 46--50 years, male)"
Similarly, a MSSI representative described complementary roles for MoH and MSSI in mental health, such that MSSI provided food and MoH provided medication for families affected by mental ill-health: "*because \[people with mental illness\] need to eat in order to take medication*" (Decision maker \#9, 46--50 years, male).
### Barriers to intersectoral collaboration {#Sec13}
Despite the emphasis on intersectoral collaboration, mental health had limited specific mention in key health, social sector and development strategies (e.g. National Health Sector Strategic Plan 2011--2030, and Strategic Development Plan 2011--2030) \[[@CR53], [@CR54]\]. One civil society representative said the lower priority of mental health reflected social norms: "\[*mental health\] is not socially talked about, or socially an important subject, so people are not really looking at it as something that they need to focus on*" (Civil society \#6, 26--30 years, male).
Government and civil society participants identified a lack of resources as a challenge to government services working with the NGO sector: "*So far only Pradet \[NGO\] have good knowledge and experience with these people \[with mental illness\] because the government have very limited resources"* (Civil society \#5, 36--40, male). A development partner explained that the mental health-relevant portfolios within MoH and MSSI received less political and fiscal priority:"*Mental health is so poorly funded under \[MoH\] and those people are not very powerful within the \[MoH\], and likewise people who work in disability within \[MSSI\] are not very powerful within the ministry and have very low funding as well* (Other \#1, 36--40 years, female)"
Government decision makers and community members stated that the demands on government to address Timor-Leste's other economic, political and social development challenges meant that ministries who were not directly responsible for mental health did not prioritise working intersectorally in this area:"*There are a lot of issues in Timor, not only mental health. \[The government\] also try to solve malnutrition, and improve access to clean water, education, a lot of things.* (Other \#4, 30--35 years, female)"
### Intersectoral collaboration for policy making and planning {#Sec14}
Participants and documents reported many links between health and other sectors in Timor-Leste. Decision makers and documents reported that there were national- and municipality-level ministerial working groups for health and disability programming between MoH, MSSI and Ministry of Education. Government and NGO service providers said they attended quarterly disability or social sector working group meetings at the national and municipality levels. One decision maker from Baucau explained:"*In Baucau, we have a working group to deal with cases of \[people requiring\] psychosocial recovery that is composed of the Ministry of Health, the Ministry of Social Solidarity, Pradet \[NGO\], Alfela \[NGO\], Ministry of Public Administration, and civil society like safe houses \[for female and child victims of violence\]. We have a quarterly meeting so we discuss all the things related to these cases. Every institution comes together and presents the issues they are facing and discusses their priorities and actions.* (Decision maker \#3, 46--50 years, male)"
There is no mental health service user or family organisation in Timor-Leste so participants did not report contact with service users and families as a key part of their collaborations with other organisations.
### Intersectoral collaboration for service delivery {#Sec15}
Figure [1](#Fig1){ref-type="fig"} displays the key stakeholders for mental health and social service delivery across multiple levels of the mental health system based on information reported in interviews and documents. Participants reported that families affected by mental health problems directly accessed support from customary healers, government health services, Pradet or private health clinics. Police, local authorities, private clinics, social sector providers and customary healers referred people with mental health problems to government health facilities and Pradet. Referrals were made to and from government health services and Pradet, and São João de Deus inpatient mental health facility if the person was deemed to be very unwell. Government health services and Pradet also referred to, and received referrals from, MSSI and disability, violence or women's support organisations. Klibur Domin, a disability NGO, provided a longer stay service for people with mental illness coming to/from: family, São João de Deus mental health facility, prison or from living in homelessness. This quotation from a service provider exemplifies the information provided by participants: "*We have a network with other organisations, they are our partners. These organisations are all over Timor*-*Leste, from Dili to Viqueque \[municipality\], to Lospalos \[municipality\], Suai \[municipality\], Maliana \[municipality\]. We have good communication and coordination with these partners so that we can give assistance to the clients from wherever they are from \[in Timor*-*Leste\].* (Service provider \#3, 36--40 years, female)" Fig. 1Mental health and social service referral and back referral pathways across multiple levels of the mental health system. *MSSI* Ministry of Social Solidarity, *VWCs* violence, women and children organisations, *DP0s* Disabled Persons Organisations, *SISCa* Integrated Health Services, Outreach Care
Descriptive social network analysis: the strength and structure of national-level intersectoral collaboration {#Sec16}
-------------------------------------------------------------------------------------------------------------
### Network metrics {#Sec17}
Network metrics are provided in Table [5](#Tab5){ref-type="table"}. The contact network had greater connectivity than the resource network, as indicated by higher density and average degree scores. Approximately 50% of organisations reported directly sharing information compared to 30% who directly shared resources (density = 0.55 and 0.30 for contact and resource sharing, respectively).Table 5Network metrics for the contact and resource sharing networks of the national mental health systemNetwork metricContact networkResource sharing networkDensity0.550.30Average degree14.227.70Average distance1.501.80Degree centralisation0.280.47See Table [3](#Tab3){ref-type="table"} for a definition of each metric
More organisations had direct contact for information sharing than resource sharing (average distance estimates = 14.22 and 7.70, respectively). As indicated by Figs. [2](#Fig2){ref-type="fig"} and [3](#Fig3){ref-type="fig"}, the networks for information and resource sharing were similarly distributed indicating that the same organisations (e.g. NGO1, MIN2, MIN3, CS1) played a central role in both types of collaboration. Three sub-networks emerged for both information and resource sharing within the national mental health system: (1) health, (2) disability, and (3) violence, women and children's support. As indicated in the key on Figs. [2](#Fig2){ref-type="fig"} and [3](#Fig3){ref-type="fig"}, these subnetworks constituted different types of organisations, including government ministries, NGO and government service providers, civil society, etc. These sub-networks are displayed as rings in Figs. [2](#Fig2){ref-type="fig"} and [3](#Fig3){ref-type="fig"} and corresponded to the governance structures described by participants and documents, which split mental health between the health and social sectors. These sub-networks indicated that the mental health network was relatively decentralised, as indicated by network degree centralisation estimates of 0.38 and 0.47 for information and resource sharing, respectively.Fig. 2Displays a map of the intersectoral connections between 27 organisations working in the national level of the mental health system based on frequency of contact (information sharing) over the preceding year. The lines connecting organisations in each map represent connections at least once a month (i.e. monthly, weekly, daily) Fig. 3Shows the intersectoral connections between these organisations based on the frequency of resource sharing at least monthly
### Organisation metrics {#Sec18}
Metrics were calculated to identify the relative importance of organisations in terms of their number of direct connections (in-degree centrality) and indirect connections (betweenness). Organisations with more direct or indirect relationships are assumed to have more opportunities to access relevant information or resources \[[@CR42]\]. One NGO service provider (NGO1) and three government organisations (GOV1, MIN1 and MIN2) had the most direct and indirect connections for information sharing, and direct connections for resource sharing. International development organisations and civil society stakeholders (OT1, DP5 and CS1) had the most indirect relationships for resource sharing.
Discussion {#Sec19}
==========
This study is the first to investigate intersectoral collaboration for both mental health service provision and mental health system governance in a LMIC using mixed-qualitative methods and social network analysis (SNA). The key findings were:Consensus among stakeholder groups that intersectoral collaboration for mental health is important in Timor-Leste;Information and resource sharing exist among organisations (e.g. government, NGO, civil society, international development) working within the health and social (disability and violence support) sectors, despite resource restrictions discussed by participants; andSNA proved useful for identifying subnetworks of intersectoral organisations to substantiate data from interviews and documents such that there was a split in stewardship for mental health between subnetworks in the health and social sectors.
The functional intersectoral connections within the Timor-Leste mental health system contrast with the challenges of health governance reported in other LMICs (e.g. weak government institutions, hierarchical structure of MoH) \[[@CR19]\]. Intersectoral collaboration for mental health in Timor-Leste may be facilitated for several reasons. First, the appreciation of the interconnections between mental health and other sectors displayed by Timorese participants reflected the holistic understandings of health found in Timor-Leste \[[@CR55]\] and indigenous peoples around the world \[[@CR56], [@CR57]\]. Second, connections across the mental health system may have been enabled because they were primarily forged to share information, which is assumed in social network science to indicate a less intensive type of collaboration than resource sharing \[[@CR5]\]. However, given that health knowledge is often among the most valuable of resources in LMICs \[[@CR58]\], this finding could also suggest a stronger degree of collaboration. Third, connections between organisations may be forged out of necessity given the low availability of human and financial resources for mental health in Timor-Leste. Fourth, the relatively small number of organisations working in mental health and social services in Timor-Leste (*n *= 27) created a bounded community of practice, which contrasted with the fragmentation of mental health and social service systems reported to challenge collaboration in HICs \[[@CR16]\]. The tightly-defined network combined with the reliance on informal and kinship networks for health previously reported in Timor-Leste \[[@CR59]\] may overcome barriers to trust reported in settings with more formalised systems of mental health governance \[[@CR17], [@CR18]\]. This is also in line with broader governance literature which reports that collaborations are most effective when they have clearly defined and agreed upon understandings of which problems they will address \[[@CR60]\]. Hence, it will be important to consider how to maintain these connections as the Timorese mental health system expands and formalises; a key concern for mental health system strengthening in other LMICs.
Despite these information and resource sharing collaborations, the document review highlighted that mental health had limited specific mention in other key government policies. The commitment to intersectoral collaboration expressed by our participants may not be shared by other stakeholders who are not currently engaged with the mental health system. Thus, the disadvantage of not integrating mental health into intersectoral policies is that resources and political will cannot be mobilized to translate intention into practice \[[@CR8]\]. Timor-Leste could benefit from explicitly incorporating mental health into intersectoral policies in line with efforts throughout the Asia and Pacific region to place 'Health in All Policies' (HiAP) \[[@CR8], [@CR61]\]. Increasing awareness and understanding of the importance of mental health among intersectoral stakeholders may be part of achieving this. Given the overlap in scope, people-centred mental health care as a concept would benefit from more explicitly aligning with existing global health movements for universal health coverage and HiAP to relish the learnings and progress already made in these areas over the past 40 years.
The shared stewardship for mental health in Timor-Leste is contrary to the assumption that the health sector is the primary steward for the people-centred health care model. This split stewardship is beneficial in Timor-Leste because it allows for more efficient use of existing resources and also opens up funding channels for mental health service providers through disability- and gender-inclusive development that are not available through traditional health financing \[[@CR30]\]. The central role of the social sector in the mental health system may promote people-centredness because social sector activities tackled the social exclusion of people with mental health problems and their families in Timor-Leste found in previous research (e.g. experiences of stigma, exclusion from employment and education) \[[@CR62]\], which are also key barriers to mental health care access \[[@CR63]\]. This governance structure acknowledges the social determinants of mental health and the co-existing health and social issues affecting families, which are typically under-addressed when there is a myopic focus on treating the mental illness. On the other hand, as one participant explained, government focus on mental health may be diluted without one central champion \[[@CR19]\]. Furthermore, if more resources flow into mental health in Timor-Leste, requiring a greater level of coordination than information sharing, parallel systems of care may emerge over time. Hence, a key consideration is how to ensure that there are no gaps in implementation of strategies to achieve people-centred mental health care in Timor-Leste and other LMICs in which mental health stewardship is shared. This finding also highlights that global mental health efforts should not presume that that Ministry of Health is always the primary steward of mental health.
The prevailing collaborative structures for mental health service delivery and governance in Timor-Leste have important implications for the implementation of Timor-Leste National Mental Health Strategy 2018--2022. Currently, the key role of the social sector in mental health governance is underestimated. Decisions need to be made as to whether the split stewardship for mental health continues or if MoH steps up to lead mental health initiatives in line with their mandate established in the National Strategy. The service delivery collaborations highlighted the importance of social sector NGO service providers (e.g. psychosocial rehabilitation, violence support services), which suggests that training and capacity building that is currently focused on government mental health service providers should also incorporate these NGO providers. Finally, the absence of a mental health service user and family organisation is a key consideration for people-centred mental health care in Timor-Leste because without such a mechanism, the involvement of mental health service users and families in future intersectoral collaborations will likely remain minimal \[[@CR64]\].
Our study had several limitations. SNA data may not have accurately captured the dynamic nature of relationships between organisations because it was cross sectional; assumed that information and resource sharing indicated relationship quality; and relied on participants accurately reporting connections with other organisations. However, we are confident that SNA accurately measured and mapped collaboration because SNA findings triangulated with data from interviews and documents. Our study is also limited because we did not incorporate the role of the customary sector, who we know from previous research by the authors plays a large role in the provision of mental health care in Timor-Leste and have emergent collaboration with the formal mental health sector \[[@CR27]\]. Future research could use SNA to examine collaborations between the formal mental health and customary sectors over time. Research could also investigate the informal processes that drive intersectoral collaboration in Timor-Leste (e.g. trust) so that these can be harnessed to develop the mental health system.
Conclusion {#Sec20}
==========
Overall, the findings suggest that there may be opportunities for intersectoral collaborations in mental health systems in LMICs. These may not exist in settings with more formalised mental health systems such as HICs in which systemic (e.g. service fragmentation) and interpersonal factors (e.g. poor communication) are barriers to working collaboratively. The holistic understanding of health and wellbeing, and the commitment to working together in the face of resource restrictions suggest that intersectoral collaboration can be employed to achieve people-centred mental health care in Timor-Leste. Intersectoral collaboration is not a new idea, and the people-centred mental health care model may have more uptake if it is tied to existing movements to reduce health inequities and ensure sustainable development.
Supplementary information
=========================
{#Sec21}
**Additional file 1.** Interview guides. **Additional file 2: Table S1.** Summary of key government strategies and law pertaining to mental health in Timor-Leste.
DPOs
: Disabled Persons Organisations
HiAP
: Health in All Policies
SISCa
: Integrated Health Services Program
HIC
: high-income country
LMIC
: low- and middle-income country
MoH
: Ministry of Health
MSSI
: Ministry of Social Solidarity and Inclusion
NGO
: Non-Government Organisation
SNA
: social network analysis
VWCs
: violence, women and children organisations
WHO
: World Health Organisation
WHO IPCHS
: WHO Framework on Integrated People-Centred Health Services
**Publisher\'s Note**
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
=========================
**Supplementary information** accompanies this paper at 10.1186/s13033-019-0328-1.
We are grateful for the participants of our study and the tireless work of Francisco Almeida, Valeriano Da Silva, Jhalia Ximenes, Neila Belo and Pelagio Doutel for research and language assistance in Timor-Leste. We would also like to thank Michelle Kermode for her incredible support as TH's primary PhD supervisor from 2017 to 2019.
TH designed the study, conducted field work, analysed the data, and drafted the manuscript as part of studies towards a PhD. GA was involved in concept development and manuscript preparation as TH's primary PhD supervisor. RK, LP, HM and JM were involved in study design and concept development and manuscript preparation as co-supervisors. All authors read and approved the final manuscript.
TH received an Australian Government Research Training Program Scholarship to cover PhD research enrolment costs and a living allowance.
Participants shared their opinions and experiences upon assurance that their confidentiality and anonymity would be protected. Hence, the research data is not available publicly because this would compromise individual privacy and our ethical approval conditions.
Ethical approval was obtained from University of Melbourne Human Ethics Sub-Committee (1749926) and National Institute of Health in Timor-Leste (1070MS-INS/DE-DP/CDC-DEP/IX/2017). All participants provided written or spoken consent for participation. In line with the governing ethics approvals, consent (verbal or written, depending on participant preference and literacy levels) was obtained by all participants before interviews commenced and were recorded. Verbal consent was recorded and saved as a separate audio file to the interview.
Not applicable.
The authors declare that they have no competing interests.
|
{
"pile_set_name": "PubMed Central"
}
|
Q:
Bluetooth framework for older iOS devices
My question is related with Bluetooth technology around iOS.
I've watched WWDC about Bluetooth Low Energy 101, what's new, the basics etc, and about using the CoreBluetooth framework available in iOS 5 and later.
I've looked through different sites and documentations trying to find more information about Bluetooth 2.1 and 4, but there is so few.
GameKIt is not an answer, I am developing an app to work with an non-iOS device.
Some of the topics I've went through:
Connecting to a Bluetooth device from iOS, no MFi
iOS - How to integrate bluetooth devices in my app
http://www.bluegiga.com/files/bluegiga/Presentations/BT4_0_for_Apple.pdf
Bluetooth 4.0 with older Bluetooth
IPhone Bluetooth Connectivity to Non IOS Devices
But the supported devices are just 4S and up and latests iPads...
1) will the latest CoreBluetooth framework just fail on older devices?
2) Did apple have any frameworks for BT 2.1 or something? What to do, there are still so many ipad2 and iphone4 users, I can't just ignore them.. So what framework actually to use?
Any help, advice, idea,link will be highly appreciated!
A:
Well...
You need to understand one thing: CoreBluetooth framework is used for Bluetooth Low Energy and ExternalAccessory framework for the "Classic" Bluetooth. It is really two kind of different approchs of what we usually call Bluetooth (as a simple user/consumer).
Only recent iDevices support Bluetooth Low Energy (iPhone from 4S, MacBook Air from 2011, etc.). That's why it doesn't support iOS4 for example.
For your information, Bluetooth Low Energy is kind of a fork of Bluetooth which appears only in Bluetooth 4.0. Even if having a Bluetooth 4.0 device does not ensure that it supports Bluetooth Low Energy (as I said, it's a fork which is not always include).
As a simplistic vision, Bluetooth Low Energy works like a NSDictionnary with a NSDictionnary in it. You get a Peripheral, which as one of more Services which has one or more Characteristics. Quite different from a common device, right ?
A:
Did apple have any frameworks for BT 2.1 or something? What to do, there are still so many ipad2 and iphone4 users, I can't just ignore them.. So what framework actually to use?
To talk to a Bluetooth 2.1 device, you need to be in the Made for iPhone accessory program. The details of it are under NDA, but you should expect things like Apple reviewing your manufacturing processes and auditing your accounts, and putting a custom chip into the accessory. If you don't make the accessory yourself, you're probably out of luck.
The only exceptions are the classes of device that iOS supports natively: keyboards, audio output, car stereos, other iPhones etc. However, you still can't send and receive arbitrary data, you're limited to using whatever APIs exist for the specific functions (e.g. for audio output, Core Audio lets you set a few properties for how Bluetooth devices behave).
|
{
"pile_set_name": "StackExchange"
}
|
Effects of a fractional picosecond 1,064 nm laser for the treatment of dermal and mixed type melasma.
Picosecond laser is a novel modality for pigmented skin disorders with extremely short pulse duration. Little is known about the effects of the picosecond laser in melasma. This study aimed to investigate the efficacy of fractional picosecond 1,064 nm laser in melasma treatment. A prospective, randomized, assessor-blinded, intra-individual split face comparative study. Female subjects with melasma were enrolled and received fractional picosecond 1,064 nm laser plus 4% hydroquinone cream on one randomly assigned side of the face; the results were compared to the use of hydroquinone cream only on the contralateral side. The modified melasma area severity index (mMASI) score, melanin index by Mexameter MX18®, participant satisfaction score by quartile rating scale, and the quality of life by the dermatology life quality index (DLQI) were evaluated over 12 weeks. Thirty female subjects completed the protocol. The mean (± standard deviation, SD) mMASI score at the 12-week visit was significantly reduced in the picosecond laser-treated areas compared to controls (3.52 ± 1.4 and 4.18 ± 2.03 respectively; p = 0.035). No differences were observed in the mean Mexameter melanin index, participant satisfaction score, and DLQI score. The observed adverse effects included transient mild erythema and mild skin desquamation. The addition of fractional picosecond 1,064 nm laser to 4% hydroquinone was effective and significantly better than 4% hydroquinone alone for the treatment of melasma.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
package:
name: satrap
version: 0.2
build:
number: 2
source:
url:
- https://depot.galaxyproject.org/software/satrap/satrap_0.2_src_all.tar.gz
- http://satrap.cribi.unipd.it/download/SATRAP_v0.2.tar.gz
sha256: da6df8262474074539275754872c20ef231d7f3cf810004f63ac4e7df4e3ab07
patches:
- 0001-Replace-hardcoded-g++.patch
requirements:
build:
- {{ compiler('cxx') }}
- llvm-openmp # [osx]
- libgomp # [linux]
host:
run:
test:
commands:
- satrap
about:
home: http://satrap.cribi.unipd.it/cgi-bin/satrap.pl
license: file
license_file: LICENSE
summary: A SOLiD assembly translation program.
extra:
identifiers:
- biotools:satrap
- doi:10.1371/journal.pone.0137436
|
{
"pile_set_name": "Github"
}
|
A comparison of intensive psychiatric services for children and adolescents: cost of day treatment versus hospitalization.
Proponents of day treatment for children and adolescents assert that this mode of intervention is a viable alternative to hospitalization based on both treatment and cost effectiveness. Preliminary studies on treatment effectiveness are beginning to appear in the literature. This paper focuses on the relative cost difference of treating children and adolescents in a day-treatment program vs three inpatient-hospital settings. The study finds that the populations in the two settings are similar with regard to demographic and diagnostic characteristics and that day treatment is significantly less costly on a daily basis. A conservative conclusion, based on the findings reported, is that over the course of treatment partial hospitalization is equal or less costly than hospitalization. The authors conclude that further research exploring both treatment efficacy and cost effectiveness is in order to define what role day treatment should serve in the continuum of mental health-care services for children and adults.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
#include "qtscriptshell_QDialog.h"
#include <QtScript/QScriptEngine>
#include <QVariant>
#include <qaction.h>
#include <qbitmap.h>
#include <qbytearray.h>
#include <qcoreevent.h>
#include <qcursor.h>
#include <qdialog.h>
#include <qevent.h>
#include <qfont.h>
#include <qgraphicseffect.h>
#include <qgraphicsproxywidget.h>
#include <qicon.h>
#include <qkeysequence.h>
#include <qlayout.h>
#include <qlist.h>
#include <qlocale.h>
#include <qmargins.h>
#include <qobject.h>
#include <qpaintdevice.h>
#include <qpaintengine.h>
#include <qpainter.h>
#include <qpalette.h>
#include <qpixmap.h>
#include <qpoint.h>
#include <qrect.h>
#include <qregion.h>
#include <qsize.h>
#include <qsizepolicy.h>
#include <qstyle.h>
#include <qwidget.h>
#define QTSCRIPT_IS_GENERATED_FUNCTION(fun) ((fun.data().toUInt32() & 0xFFFF0000) == 0xBABE0000)
Q_DECLARE_METATYPE(QActionEvent*)
Q_DECLARE_METATYPE(QEvent*)
Q_DECLARE_METATYPE(QChildEvent*)
Q_DECLARE_METATYPE(QCloseEvent*)
Q_DECLARE_METATYPE(QContextMenuEvent*)
Q_DECLARE_METATYPE(QDragEnterEvent*)
Q_DECLARE_METATYPE(QDragLeaveEvent*)
Q_DECLARE_METATYPE(QDragMoveEvent*)
Q_DECLARE_METATYPE(QDropEvent*)
Q_DECLARE_METATYPE(QFocusEvent*)
Q_DECLARE_METATYPE(QHideEvent*)
Q_DECLARE_METATYPE(QPainter*)
Q_DECLARE_METATYPE(QInputMethodEvent*)
Q_DECLARE_METATYPE(Qt::InputMethodQuery)
Q_DECLARE_METATYPE(QKeyEvent*)
Q_DECLARE_METATYPE(QPaintDevice::PaintDeviceMetric)
Q_DECLARE_METATYPE(QMouseEvent*)
Q_DECLARE_METATYPE(QMoveEvent*)
Q_DECLARE_METATYPE(long*)
Q_DECLARE_METATYPE(QPaintEngine*)
Q_DECLARE_METATYPE(QPaintEvent*)
Q_DECLARE_METATYPE(QPoint*)
Q_DECLARE_METATYPE(QPaintDevice*)
Q_DECLARE_METATYPE(QResizeEvent*)
Q_DECLARE_METATYPE(QShowEvent*)
Q_DECLARE_METATYPE(QTabletEvent*)
Q_DECLARE_METATYPE(QTimerEvent*)
Q_DECLARE_METATYPE(QWheelEvent*)
QtScriptShell_QDialog::QtScriptShell_QDialog(QWidget* parent, Qt::WindowFlags f)
: QDialog(parent, f) {}
QtScriptShell_QDialog::~QtScriptShell_QDialog() {}
void QtScriptShell_QDialog::accept()
{
QScriptValue _q_function = __qtscript_self.property("accept");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("accept") & QScriptValue::QObjectMember)) {
QDialog::accept();
} else {
_q_function.call(__qtscript_self);
}
}
void QtScriptShell_QDialog::actionEvent(QActionEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("actionEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("actionEvent") & QScriptValue::QObjectMember)) {
QDialog::actionEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::changeEvent(QEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("changeEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("changeEvent") & QScriptValue::QObjectMember)) {
QDialog::changeEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::childEvent(QChildEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("childEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("childEvent") & QScriptValue::QObjectMember)) {
QDialog::childEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::closeEvent(QCloseEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("closeEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("closeEvent") & QScriptValue::QObjectMember)) {
QDialog::closeEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::contextMenuEvent(QContextMenuEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("contextMenuEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("contextMenuEvent") & QScriptValue::QObjectMember)) {
QDialog::contextMenuEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::customEvent(QEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("customEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("customEvent") & QScriptValue::QObjectMember)) {
QDialog::customEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
int QtScriptShell_QDialog::devType() const
{
QScriptValue _q_function = __qtscript_self.property("devType");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("devType") & QScriptValue::QObjectMember)) {
return QDialog::devType();
} else {
return qscriptvalue_cast<int >(_q_function.call(__qtscript_self));
}
}
void QtScriptShell_QDialog::done(int arg__1)
{
QScriptValue _q_function = __qtscript_self.property("done");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("done") & QScriptValue::QObjectMember)) {
QDialog::done(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::dragEnterEvent(QDragEnterEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("dragEnterEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("dragEnterEvent") & QScriptValue::QObjectMember)) {
QDialog::dragEnterEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::dragLeaveEvent(QDragLeaveEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("dragLeaveEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("dragLeaveEvent") & QScriptValue::QObjectMember)) {
QDialog::dragLeaveEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::dragMoveEvent(QDragMoveEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("dragMoveEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("dragMoveEvent") & QScriptValue::QObjectMember)) {
QDialog::dragMoveEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::dropEvent(QDropEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("dropEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("dropEvent") & QScriptValue::QObjectMember)) {
QDialog::dropEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::enterEvent(QEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("enterEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("enterEvent") & QScriptValue::QObjectMember)) {
QDialog::enterEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
bool QtScriptShell_QDialog::event(QEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("event");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("event") & QScriptValue::QObjectMember)) {
return QDialog::event(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
return qscriptvalue_cast<bool >(_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1)));
}
}
bool QtScriptShell_QDialog::eventFilter(QObject* arg__1, QEvent* arg__2)
{
QScriptValue _q_function = __qtscript_self.property("eventFilter");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("eventFilter") & QScriptValue::QObjectMember)) {
return QDialog::eventFilter(arg__1, arg__2);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
return qscriptvalue_cast<bool >(_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1)
<< qScriptValueFromValue(_q_engine, arg__2)));
}
}
int QtScriptShell_QDialog::exec()
{
QScriptValue _q_function = __qtscript_self.property("exec");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("exec") & QScriptValue::QObjectMember)) {
return QDialog::exec();
} else {
return qscriptvalue_cast<int >(_q_function.call(__qtscript_self));
}
}
void QtScriptShell_QDialog::focusInEvent(QFocusEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("focusInEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("focusInEvent") & QScriptValue::QObjectMember)) {
QDialog::focusInEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
bool QtScriptShell_QDialog::focusNextPrevChild(bool next)
{
QScriptValue _q_function = __qtscript_self.property("focusNextPrevChild");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("focusNextPrevChild") & QScriptValue::QObjectMember)) {
return QDialog::focusNextPrevChild(next);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
return qscriptvalue_cast<bool >(_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, next)));
}
}
void QtScriptShell_QDialog::focusOutEvent(QFocusEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("focusOutEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("focusOutEvent") & QScriptValue::QObjectMember)) {
QDialog::focusOutEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
bool QtScriptShell_QDialog::hasHeightForWidth() const
{
QScriptValue _q_function = __qtscript_self.property("hasHeightForWidth");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("hasHeightForWidth") & QScriptValue::QObjectMember)) {
return QDialog::hasHeightForWidth();
} else {
return qscriptvalue_cast<bool >(_q_function.call(__qtscript_self));
}
}
int QtScriptShell_QDialog::heightForWidth(int arg__1) const
{
QScriptValue _q_function = __qtscript_self.property("heightForWidth");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("heightForWidth") & QScriptValue::QObjectMember)) {
return QDialog::heightForWidth(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
return qscriptvalue_cast<int >(_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1)));
}
}
void QtScriptShell_QDialog::hideEvent(QHideEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("hideEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("hideEvent") & QScriptValue::QObjectMember)) {
QDialog::hideEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::initPainter(QPainter* painter) const
{
QScriptValue _q_function = __qtscript_self.property("initPainter");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("initPainter") & QScriptValue::QObjectMember)) {
QDialog::initPainter(painter);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, painter));
}
}
void QtScriptShell_QDialog::inputMethodEvent(QInputMethodEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("inputMethodEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("inputMethodEvent") & QScriptValue::QObjectMember)) {
QDialog::inputMethodEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
QVariant QtScriptShell_QDialog::inputMethodQuery(Qt::InputMethodQuery arg__1) const
{
QScriptValue _q_function = __qtscript_self.property("inputMethodQuery");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("inputMethodQuery") & QScriptValue::QObjectMember)) {
return QDialog::inputMethodQuery(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
return qscriptvalue_cast<QVariant >(_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1)));
}
}
void QtScriptShell_QDialog::keyPressEvent(QKeyEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("keyPressEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("keyPressEvent") & QScriptValue::QObjectMember)) {
QDialog::keyPressEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::keyReleaseEvent(QKeyEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("keyReleaseEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("keyReleaseEvent") & QScriptValue::QObjectMember)) {
QDialog::keyReleaseEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::leaveEvent(QEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("leaveEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("leaveEvent") & QScriptValue::QObjectMember)) {
QDialog::leaveEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
int QtScriptShell_QDialog::metric(QPaintDevice::PaintDeviceMetric arg__1) const
{
QScriptValue _q_function = __qtscript_self.property("metric");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("metric") & QScriptValue::QObjectMember)) {
return QDialog::metric(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
return qscriptvalue_cast<int >(_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1)));
}
}
void QtScriptShell_QDialog::mouseDoubleClickEvent(QMouseEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("mouseDoubleClickEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("mouseDoubleClickEvent") & QScriptValue::QObjectMember)) {
QDialog::mouseDoubleClickEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::mouseMoveEvent(QMouseEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("mouseMoveEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("mouseMoveEvent") & QScriptValue::QObjectMember)) {
QDialog::mouseMoveEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::mousePressEvent(QMouseEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("mousePressEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("mousePressEvent") & QScriptValue::QObjectMember)) {
QDialog::mousePressEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::mouseReleaseEvent(QMouseEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("mouseReleaseEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("mouseReleaseEvent") & QScriptValue::QObjectMember)) {
QDialog::mouseReleaseEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::moveEvent(QMoveEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("moveEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("moveEvent") & QScriptValue::QObjectMember)) {
QDialog::moveEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
bool QtScriptShell_QDialog::nativeEvent(const QByteArray& eventType, void* message, long* result)
{
QScriptValue _q_function = __qtscript_self.property("nativeEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("nativeEvent") & QScriptValue::QObjectMember)) {
return QDialog::nativeEvent(eventType, message, result);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
return qscriptvalue_cast<bool >(_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, eventType)
<< qScriptValueFromValue(_q_engine, message)
<< qScriptValueFromValue(_q_engine, result)));
}
}
void QtScriptShell_QDialog::open()
{
QScriptValue _q_function = __qtscript_self.property("open");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("open") & QScriptValue::QObjectMember)) {
QDialog::open();
} else {
_q_function.call(__qtscript_self);
}
}
QPaintEngine* QtScriptShell_QDialog::paintEngine() const
{
QScriptValue _q_function = __qtscript_self.property("paintEngine");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("paintEngine") & QScriptValue::QObjectMember)) {
return QDialog::paintEngine();
} else {
return qscriptvalue_cast<QPaintEngine* >(_q_function.call(__qtscript_self));
}
}
void QtScriptShell_QDialog::paintEvent(QPaintEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("paintEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("paintEvent") & QScriptValue::QObjectMember)) {
QDialog::paintEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
QPaintDevice* QtScriptShell_QDialog::redirected(QPoint* offset) const
{
QScriptValue _q_function = __qtscript_self.property("redirected");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("redirected") & QScriptValue::QObjectMember)) {
return QDialog::redirected(offset);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
return qscriptvalue_cast<QPaintDevice* >(_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, offset)));
}
}
void QtScriptShell_QDialog::reject()
{
QScriptValue _q_function = __qtscript_self.property("reject");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("reject") & QScriptValue::QObjectMember)) {
QDialog::reject();
} else {
_q_function.call(__qtscript_self);
}
}
void QtScriptShell_QDialog::resizeEvent(QResizeEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("resizeEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("resizeEvent") & QScriptValue::QObjectMember)) {
QDialog::resizeEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
QPainter* QtScriptShell_QDialog::sharedPainter() const
{
QScriptValue _q_function = __qtscript_self.property("sharedPainter");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("sharedPainter") & QScriptValue::QObjectMember)) {
return QDialog::sharedPainter();
} else {
return qscriptvalue_cast<QPainter* >(_q_function.call(__qtscript_self));
}
}
void QtScriptShell_QDialog::showEvent(QShowEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("showEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("showEvent") & QScriptValue::QObjectMember)) {
QDialog::showEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::tabletEvent(QTabletEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("tabletEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("tabletEvent") & QScriptValue::QObjectMember)) {
QDialog::tabletEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::timerEvent(QTimerEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("timerEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("timerEvent") & QScriptValue::QObjectMember)) {
QDialog::timerEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
void QtScriptShell_QDialog::wheelEvent(QWheelEvent* arg__1)
{
QScriptValue _q_function = __qtscript_self.property("wheelEvent");
if (!_q_function.isFunction() || QTSCRIPT_IS_GENERATED_FUNCTION(_q_function)
|| (__qtscript_self.propertyFlags("wheelEvent") & QScriptValue::QObjectMember)) {
QDialog::wheelEvent(arg__1);
} else {
QScriptEngine *_q_engine = __qtscript_self.engine();
_q_function.call(__qtscript_self,
QScriptValueList()
<< qScriptValueFromValue(_q_engine, arg__1));
}
}
|
{
"pile_set_name": "Github"
}
|
Feigelson HS, Powers JD, Kumar M, Carroll NM, Pathy A, Ritzwoller DP. Melanoma incidence, recurrence, and mortality in an integrated healthcare system: A retrospective cohort study. Cancer Med. 2019;8:4508--4516. 10.1002/cam4.2252
All authors reviewed and approved the final version of this manuscript.
**Funding information**
The funding for this study was provided by institutional support from Kaiser Permanente Colorado, Institute for Health Research along with support from NCI (U2 C171524 to the Cancer Research Network).
**Data Availability Statement:** The data that support the findings of this study are available from the corresponding author upon reasonable request.
1. BACKGROUND {#cam42252-sec-0005}
=============
In the United States, cutaneous melanoma (hereafter referred to as melanoma) is the fifth most common cancer in men and the sixth most common cancer in women, with an estimated 91 270 new cases in 2018.[1](#cam42252-bib-0001){ref-type="ref"} Over the past 30 years, the incidence of melanoma has risen rapidly, although current trends suggest this pattern may be shifting in certain age groups.[2](#cam42252-bib-0002){ref-type="ref"}, [3](#cam42252-bib-0003){ref-type="ref"} A recent report using data from National Cancer Institute\'s Surveillance, Epidemiology, and End Results (SEER) Program through 2014 found increasing incidence rates in older men and women, but decreasing rates in younger age groups.[2](#cam42252-bib-0002){ref-type="ref"} Death from melanoma also appears to be on the decline, especially among younger patients.[1](#cam42252-bib-0001){ref-type="ref"} The 5‐year relative survival rate for melanoma is 91%, and even higher (98%) among those diagnosed with localized disease. Five‐year survival rates for individuals with regional and distant‐stage diseases are 62% and 16%, respectively.[4](#cam42252-bib-0004){ref-type="ref"}
Despite these encouraging trends related to improvements in screening and the introduction of novel therapeutics,[5](#cam42252-bib-0005){ref-type="ref"}, [6](#cam42252-bib-0006){ref-type="ref"} melanoma remains a significant public health problem, with an estimated 1.2 million melanoma survivors currently living in the United States.[4](#cam42252-bib-0004){ref-type="ref"} While many previous studies have examined factors associated with survival,[7](#cam42252-bib-0007){ref-type="ref"}, [8](#cam42252-bib-0008){ref-type="ref"}, [9](#cam42252-bib-0009){ref-type="ref"}, [10](#cam42252-bib-0010){ref-type="ref"}, [11](#cam42252-bib-0011){ref-type="ref"}, [12](#cam42252-bib-0012){ref-type="ref"}, [13](#cam42252-bib-0013){ref-type="ref"}, [14](#cam42252-bib-0014){ref-type="ref"}, [15](#cam42252-bib-0015){ref-type="ref"}, [16](#cam42252-bib-0016){ref-type="ref"}, [17](#cam42252-bib-0017){ref-type="ref"} little is known about factors associated with risk of recurrence.[18](#cam42252-bib-0018){ref-type="ref"}, [19](#cam42252-bib-0019){ref-type="ref"}, [20](#cam42252-bib-0020){ref-type="ref"}, [21](#cam42252-bib-0021){ref-type="ref"}, [22](#cam42252-bib-0022){ref-type="ref"}, [23](#cam42252-bib-0023){ref-type="ref"} Data on cancer recurrence can yield important information that can guide treatment and surveillance planning, but is often unavailable. The Kaiser Permanente Colorado (KPCO) tumor registry tracks cancer cases for recurrence,[24](#cam42252-bib-0024){ref-type="ref"}, [25](#cam42252-bib-0025){ref-type="ref"} as well as for mortality, providing a unique opportunity to examine characteristics associated with melanoma recurrence. Data derived from an insured, monitored patient population, such as KPCO, have the potential to enhance our understanding of emerging trends in melanoma, including differences by age and gender. In this retrospective cohort, we examine incidence, recurrence, and mortality among KPCO members diagnosed with melanoma from January 1, 2000 through December 31, 2015.
2. METHODS {#cam42252-sec-0006}
==========
Kaiser Permanente Colorado has provided comprehensive health care to the greater Denver metropolitan area since 1969 and currently provides care to approximately 12% of the state\'s entire population (over 600 000 people). The primary data source for this analysis was KPCO\'s tumor registry that is linked to a variety of electronic health record (EHR) records and other administrative data. These data have been extracted and loaded into relational tables, known as the "virtual data warehouse" and linked through a common, unique identifier.[26](#cam42252-bib-0026){ref-type="ref"}, [27](#cam42252-bib-0027){ref-type="ref"}, [28](#cam42252-bib-0028){ref-type="ref"}, [29](#cam42252-bib-0029){ref-type="ref"}
The tumor registry contains data consistent with the North American Association of Central Cancer Registries (NAACCR) standard. Tumor registry data, including first course treatment, were obtained from manual reviews of patients' medical charts by certified tumor registrars and include coded clinical data associated with inpatient and outpatient events. Stage at diagnosis was determined using the American Joint Committee on Cancer (AJCC) Cancer Staging Guidelines. A unique feature of the KPCO tumor registry is its tracking of cancer recurrence. Patient charts are manually reviewed specifically to access recurrence status. A recurrence of melanoma is defined as a detected melanoma that occurs after a patient is declared disease free after completion of definitive therapy (eg, excision, radiation, and/or chemotherapy). For this analysis, all melanoma cases had manual chart review by a certified tumor registrar at the start of the study to verify recurrence status.
Patient characteristics not available in the tumor registry were extracted from the virtual data warehouse. As an indicator of general health, we used the Quan adaptation of the Charlson Comorbidity Index[30](#cam42252-bib-0030){ref-type="ref"} derived from diagnosis codes captured from all inpatient claims and EHR and encounters that occurred 12 months prior to melanoma diagnosis. Data on cause and date of death were derived from the tumor registry, membership data, state‐level mortality files, and Social Security Administration data. We used the patient\'s home address mapped to census block level information on education to estimate socioeconomic status. Socioeconomic status was expressed as the percent of people in the census block with a college (or higher) education.
This project was approved by the KPCO Institutional Review Board. All analyses were performed by SAS 9.2 and 9.4 (SAS Institute, Cary, NC).
2.1. Statistical analysis {#cam42252-sec-0007}
-------------------------
Patients diagnosed with stage I‐IV melanoma between January 1, 2000 and December 31, 2015 were identified from the tumor registry and followed through December 31, 2017 for recurrence and mortality. We excluded in situ cases, those who were less than 21 years of age, who were missing information on race/ethnicity or stage, or who were not enrolled at KPCO at the time of diagnosis.
Patient characteristics were reported as means or medians with standard deviations for interval‐level variables, and percentages for categorical variables. Age‐adjusted incidence was calculated using the age distribution of the US 2000 population as the standard. We compared our incidence rates to SEER (version 18) data ([www.seer.cancer.gov](http://www.seer.cancer.gov)).
Cox proportional hazards models were used to estimate both risk of recurrence and death while adjusting for important covariates. Analysis of recurrence was limited to patients diagnosed with stage I‐III at diagnosis and declared disease free after first course therapy. Time to recurrence was defined as months from diagnosis until recurrence. Both all‐cause and melanoma‐specific mortality were examined; survival time was defined as months from diagnosis until death. Patients who disenrolled from the health plan, had a subsequent cancer, or who were alive at the end of the follow‐up period (December 31, 2017) were censored on those dates.
The following covariates were included in the survival analysis: stage at diagnosis, gender, age (as a continuous variable), race/ethnicity (white or non‐white), socioeconomic status as measured by percent of college educated individuals in census tract of residence, comorbidity scores (0, 1‐2, or ≥3 comorbid conditions), and treatment. Treatment was classified into four dichotomous variables based on data in the tumor registry: receipt of surgery (yes/no), radiation therapy (yes/no), chemotherapy (yes/no), or biologic response modulators (yes/no). Treatment categories were not mutually exclusive.
Variables included in the analysis of recurrence were the same as those in the survival analysis, with the exception of surgery because all but one case in the recurrence analysis had surgery as part of their treatment. To examine the effects of treatment on recurrence, we constructed multivariable models with and without treatment.
Finally, we conducted a sensitivity analysis for both recurrence and all‐cause mortality using tumor thickness rather than disease stage among the sub‐set of cases where thickness data was available.
3. RESULTS {#cam42252-sec-0008}
==========
A total of 1,931 KPCO members 21 years of age or older were diagnosed with invasive melanoma from 2000 to 2015. Table [1](#cam42252-tbl-0001){ref-type="table"} shows the demographic and tumor characteristics of the study population overall and by stage. Most cases were diagnosed at stage I (77.5%). More cases occurred in men (57.8%) than in women (42.2%), and our cohort was predominantly non‐Hispanic white (97.8%). Most patients (97.7%) were treated with surgery; use of other treatment modalities was uncommon. However, among cases diagnosed at stage IV (N = 54), radiation, chemotherapy, and biologic response modulators were each used in about a quarter of patients.
######
Characteristics of invasive melanoma cases, Kaiser Permanente Colorado, 2000‐2015
Characteristic Total cases Stage I Stage II Stage III Stage IV
------------------------------------------------------------------------------- --------------- --------------- --------------- --------------- ---------------
Number of people 1931 1497 (77.5%) 277 (14.3%) 103 (5.3%) 54 (2.8%)
Age group
\<30 57 (3.0%) 45 (3.0%) 7 (2.5%) 5 (4.9%) 0 (0.0%)
30‐39 148 (7.7%) 124 (8.3%) 13 (4.7%) 6 (5.8%) 5 (9.3%)
40‐49 250 (13.0%) 209 (14.0%) 25 (9.0%) 11 (10.7%) 5 (9.3%)
50‐59 398 (20.6%) 333 (22.2%) 33 (11.9%) 22 (21.4%) 10 (18.5%)
60‐69 483 (25.0%) 375 (25.1%) 59 (21.3%) 31 (30.1%) 18 (33.3%)
70‐79 387 (20.0%) 269 (18.0%) 85 (30.7%) 22 (21.4%) 11 (20.4%)
80+ 208 (10.8%) 142 (9.5%) 55 (19.9%) 6 (5.8%) 5 (9.3%)
Gender
Female 815 (42.2%) 665 (44.4%) 97 (35.0%) 34 (33.0%) 19 (35.2%)
Male 1116 (57.8%) 832 (55.6%) 180 (65.0%) 69 (67.0%) 35 (64.8%)
Race/Ethnicity
Non‐Hispanic White 1888 (97.8%) 1466 (97.9%) 269 (97.1%) 102 (99.0%) 51 (94.4%)
Hispanic 21 (1.1%) 13 (0.9%) 5 (1.8%) 1 (1.0%) 2 (3.7%)
Other 22 (1.1%) 18 (1.2%) 3 (1.1%) 0 (0.0%) 1 (1.9%)
Average (SD) percent of people with college or more education in census block 45.1% (19.9%) 45.8% (19.7%) 43.9% (19.5%) 40.9% (21.9%) 40.8% (21.7%)
Comorbidity score
0 1110 (57.5%) 910 (60.8%) 128 (46.2%) 54 (52.4%) 18 (33.3%)
1‐2 533 (27.6%) 400 (26.7%) 88 (31.8%) 33 (32.0%) 12 (22.2%)
3+ 288 (14.9%) 187 (12.5%) 61 (22.0%) 16 (15.5%) 24 (44.4%)
First course therapy
Surgery 1887 (97.7%) 1495 (99.9%) 277 (100.0%) 100 (97.1%) 15 (27.8%)
Radiation 24 (1.2%) 2 (0.1%) 3 (1.1%) 4 (3.9%) 15 (27.8%)
Chemotherapy 16 (0.8%) 0 (0.0%) 1 (0.4%) 2 (1.9%) 13 (24.1%)
Biologic response modulators 67 (3.5%) 7 (0.5%) 9 (3.3%) 35 (34.0%) 16 (29.6%)
John Wiley & Sons, Ltd
Figure [1](#cam42252-fig-0001){ref-type="fig"} shows the age‐adjusted incidence rates between 2000 and 2015 for our KPCO population by gender, and for comparison, the SEER incidence rates for the same period. The rates for KPCO are higher than SEER rates; like the SEER rates, they are increasing over time, but at a faster rate. Among men, the SEER rates increased from 22.8/100 000 in 2000 to 30.6/100 000 in 2015, compared to 29.1/100 000 to 49.0/100 000 for KPCO men in the same period. Similarly, for women, the SEER rates increased from 14.3/100 000 in 2000 to 18.8/100 000 in 2015 compared to 17.0/100 000 to 43.0/100 000 in KPCO. We observed a sharp increase in 2002 followed by a drop in 2003 that we cannot explain, otherwise, the upward trend is consistent across the entire period. Figure [2](#cam42252-fig-0002){ref-type="fig"} shows age‐adjusted incidence for melanoma stratified by stage at diagnosis, and illustrates that the increased incidence over time is limited to early stage disease. Age‐adjusted incidence for stage I increased from 7.6/100 000 to 19.7/100 000 from 2000 to 2015, while the rates for stage II‐IV remained relatively constant over time.
{#cam42252-fig-0001}
{#cam42252-fig-0002}
Table [2](#cam42252-tbl-0002){ref-type="table"} shows unadjusted and adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) for factors associated with recurrence. Analysis of factors associated with recurrence was limited to cases diagnosed at stages I‐III. The overall rate of recurrence was 8.8%. The most commonly reported recurrence types were distant recurrence in multiple sites (20%) and local recurrences (18%) (data not shown). In multivariable models, factors associated with recurrent disease were stage at initial diagnosis, gender, and age. Race/ethnicity, socioeconomic status, and comorbidities were not associated with recurrence. In the multivariable model that did not include treatment variables, the adjusted HR was 5.54 (95% CI: 3.83‐8.03, *P* \< 0.001) for stage II compared to stage I, and the adjusted HR was 18.58 (95% CI: 12.51‐27.61, *P* \< 0.001) for stage III compared to stage I. Men were more likely to have a recurrence than women (adjusted HR: 1.72, 95% CI: 1.21‐2.45, *P* = 0.003), and for each decade of increasing age, the adjusted HR was 1.18 (95% CI: 1.04‐1.33, *P* = 0.009). Adding treatment to the model had little impact on the effect estimates for the other covariates. Receipt of radiation therapy was the only treatment modality associated with risk of recurrence (HR = 3.03, 95% CI: 1.10‐8.36, *P* = 0.03), while stage, male gender, and age remained as important predictors of recurrence. This increased risk associated with radiation therapy likely reflects practice patterns, as radiation is generally reserved for patients with advanced disease and for patients with large tumors who are poor surgical candidates.
######
Hazard ratios (HR) and 95% confidence intervals (CI) for characteristics associated with recurrence for incident melanoma, stages I‐III, Kaiser Permanente Colorado, 2000‐2015
Observations Unadjusted HR 95% CI Adjusted[a](#cam42252-note-0004){ref-type="fn"} HR 95% CI *P*‐value Adjusted[b](#cam42252-note-0005){ref-type="fn"} HR 95% CI *P*‐value
--------------------------------- -------------- --------------- ------------- ---------------------------------------------------- ------------- ----------- ---------------------------------------------------- ------------- -----------
Stage I 1486 Ref Ref Ref
Stage II 272 6.37 4.43‐9.18 5.54 3.83‐8.03 \<0.0001 5.27 3.63‐7.67 \<0.0001
Stage III 94 19.54 13.24‐28.82 18.58 12.51‐27.61 \<0.0001 16.23 10.45‐25.19 \<0.0001
Female 788 Ref Ref Ref
Male 1064 2.20 1.55‐3.12 1.72 1.21‐2.45 0.003 1.70 1.19‐2.43 0.003
Age in decades 1.29 1.15‐1.43 1.18 1.04‐1.33 0.009 1.20 1.06‐1.37 0.005
White 1816 Ref Ref Ref
Non‐White 36 1.22 0.45‐3.29 1.81 0.66‐4.97 0.251 1.86 0.68‐5.12 0.228
Socioeconomic status 0.44 0.20‐0.96 0.74 0.33‐1.67 0.466 0.75 0.33‐1.69 0.486
Comorbid conditions: 0 1084 Ref Ref Ref
Comorbid conditions: 1‐2 511 1.48 1.04‐2.10 1.10 0.77‐1.58 0.602 1.14 0.79‐1.64 0.494
Comorbid conditions: 3+ 257 2.04 1.35‐3.08 1.12 0.72‐1.76 0.614 1.10 0.71‐1.73 0.665
No radiation 1845 Ref Ref
Radiation 7 10.69 3.96‐28.89 3.03 1.10‐8.36 0.0328
No chemotherapy 1849 Ref Ref
Chemotherapy 3 3.85 0.54‐27.62 0.80 0.11‐6.13 0.833
No biologic response modulators 1803 Ref Ref
Biologic response modulators 49 6.60 4.13‐10.54 1.51 0.86‐2.68 0.155
Adjusted for stage, gender, age as a continuous variable, race/ethnicity, socioeconomic status as measured by percent of college educated households in census tract of residence, and comorbidity index
This model adds dichotomous variables for the type of treatments received in addition to the covariates listed in the prior model. Surgery was not included as all but one case received surgery as part of treatment
John Wiley & Sons, Ltd
Table [3](#cam42252-tbl-0003){ref-type="table"} shows adjusted HRs and 95% CIs for factors associated with all‐cause and melanoma‐specific mortality. There were 342 deaths among the 1931 cohort members; the median follow‐up time was 4.1 years. In multivariable models, stage, gender, age at diagnosis, socioeconomic status as measured by education at the census tract level, and comorbidity index were all statistically significantly associated with all‐cause mortality. Risk of death increased at each stage of diagnosis; compared to stage I, the HR was 12.87 (95% CI: 6.63‐24.99, *P* \< 0.001) for stage IV disease. Men were 42% more likely to die than women (HR = 1.42, 95% CI: 1.12‐1.79, *P* = 0.003). For each decade of increasing age, the HR was 1.89 (95% CI: 1.70‐2.10, *P* \< 0.001), and higher socioeconomic status was protective (HR = 0.40, 95% CI: 0.23‐0.72, *P* = 0.002). The presence of three or more comorbid conditions doubled the risk of death from any cause (HR = 2.10, 95% CI: 1.57‐2.79, *P* \< 0.001). Surgery and radiation both reduced the risk of death, but we did not observe an effect from chemotherapy or biologic response modulators. Factors associated with melanoma‐specific mortality were similar to those associated with all‐cause mortality. The association with disease stage was very strong for melanoma‐specific death and the presence of comorbid conditions was less important. Interestingly, age was not a significant predictor of melanoma‐specific mortality in multivariate models (HR = 1.09, 95% CI: 0.94‐1.25, *P* = 0.253).
######
Hazard ratios (HR) and 95% confidence intervals (CI) for characteristics associated with all‐cause mortality and melanoma‐specific mortality, Kaiser Permanente Colorado, 2000‐2015
Observations All‐Cause HR[a](#cam42252-note-0006){ref-type="fn"} 95% CI *P*‐value Melanoma HR[a](#cam42252-note-0006){ref-type="fn"} ^,^ [b](#cam42252-note-0007){ref-type="fn"} 95% CI *P*‐value
--------------------------------- -------------- ----------------------------------------------------- ------------ ----------- ------------------------------------------------------------------------------------------------ -------------- -----------
Stage I 1497 Ref Ref
Stage II 277 1.79 1.37‐2.35 \<0.0001 7.61 4.55‐12.74 \<0.0001
Stage III 103 3.83 2.72‐5.40 \<0.0001 22.50 12.89‐39.26 \<0.0001
Stage IV 54 12.87 6.63‐24.99 \<0.0001 98.05 40.83‐235.43 \<0.0001
Female 815 Ref Ref
Male 1116 1.42 1.12‐1.79 0.003 1.95 1.29‐2.95 0.002
Age in decades 1.89 1.70‐2.10 \<0.0001 1.09 0.94‐1.25 0.253
White 1888 Ref Ref
Non‐White 43 1.79 0.90‐3.55 0.095 2.66 1.01‐7.00 0.048
Socioeconomic status 0.40 0.23‐0.72 0.002 0.73 0.28‐1.90 0.516
Comorbid conditions: 0 1110 Ref Ref
Comorbid conditions: 1‐2 533 1.68 1.29‐2.18 0.0001 1.21 0.79‐1.86 0.370
Comorbid conditions: 3+ 288 2.10 1.57‐2.79 \<0.0001 1.64 1.05‐2.56 0.030
No surgery 44 Ref Ref
surgery 1887 0.20 0.10‐0.40 \<0.0001 0.31 0.13‐0.74 0.008
No radiation 1907 Ref Ref
radiation 24 0.42 0.21‐0.83 0.013 0.37 0.17‐0.79 0.010
No chemotherapy 1915 Ref Ref
chemotherapy 16 1.03 0.50‐2.10 0.947 0.89 0.41‐1.93 0.765
No biologic response modulators 1864 Ref Ref
Biologic response modulators 67 1.35 0.82‐2.23 0.241 0.76 0.42‐1.38 0.369
Adjusted for stage, gender, age as a continuous variable, race/ethnicity, socioeconomic status as measured by percent of college educated households in census tract of residence, comorbidity index, and dichotomous variables for the type of treatments received.
Melanoma‐specific mortality model excludes 18 cases where cause of death was unknown.
John Wiley & Sons, Ltd
In sensitivity analyses replacing tumor thickness for stage in our models of total mortality and recurrence, our results were similar (Tables [S1](#cam42252-sup-0001){ref-type="supplementary-material"} and [S2](#cam42252-sup-0001){ref-type="supplementary-material"}); risk increased with thicker lesions, and the characteristics associated with recurrences and mortality remained the same with two exceptions: radiation therapy was no longer a significant predictor of recurrence, and biologic response modulators became a statistically significant predictor of recurrence.
4. DISCUSSION {#cam42252-sec-0009}
=============
This retrospective analysis of melanoma patients was designed to examine trends in incidence and predictors of both recurrence and survival in a large, insured population. Consistent with previous studies,[31](#cam42252-bib-0031){ref-type="ref"}, [32](#cam42252-bib-0032){ref-type="ref"}, [33](#cam42252-bib-0033){ref-type="ref"} we observed a significant increase in the incidence of melanoma between 2000 and 2015, which was largely due to increasing rates of early stage disease. KPCO incidence was higher than the reported SEER rates, and this likely reflects both our high elevation, which increases UV exposure, and better access to screening for our insured population. We did observe a sharp increase in incidence in 2002 that we do not fully understand. We did not uncover any changes in our population, clinical practices or tumor registry coding or procedures that could explain this observation; fortunately, it did not drive any conclusions. It is possible that this observation is due to chance, given the relatively small number of cases diagnosed each year.
Singh et al[34](#cam42252-bib-0034){ref-type="ref"} used data from CDC and SEER and county‐level summary socioeconomic measures from the US Census and reported that incidence was associated with lower poverty, higher education, and higher income. While we did not examine incidence rates by socioeconomic status, these observations are consistent with our data from a predominantly white, insured population. In multivariate models, we found higher socioeconomic status measured by education level in the census tract of residence was significantly associated reduced risk of death (HR: 0.40, 95% CI: 0.22‐0.72). Studies outside of the United States have had inconsistent associations with socioeconomic status and survival.[7](#cam42252-bib-0007){ref-type="ref"}, [14](#cam42252-bib-0014){ref-type="ref"}
In addition to socioeconomic status, stage at diagnosis, male gender, increasing age, and the presence of comorbidities were all independent predictors of all‐cause mortality. Increased risk of death associated with increasing age and male gender have been shown consistently in prior studies.[11](#cam42252-bib-0011){ref-type="ref"}, [13](#cam42252-bib-0013){ref-type="ref"} We also found surgery and radiation therapy were associated with a reduced risk of death from all causes, but chemotherapy and biologic response modulators were not. These associations may be explained by practice patterns: nearly all our patients (97.7%) received surgery; chemotherapy and biologic response modulators were used in only 4% of patients, all of whom presented with advanced disease. Predictors of melanoma‐specific death were similar to those of all‐cause mortality, with one notable exception: age was not associated with melanoma‐specific mortality after adjusting for other covariates (HR = 1.09, 95% CI: 0.94‐1.25, *P* = 0.253). Prior studies have also found men at higher risk of death from melanoma.[10](#cam42252-bib-0010){ref-type="ref"}, [13](#cam42252-bib-0013){ref-type="ref"}, [14](#cam42252-bib-0014){ref-type="ref"} While others have not reported a lack of association with age, in a study of young adults age 15‐39 years with melanoma, Gamba et al[10](#cam42252-bib-0010){ref-type="ref"} found that men were 55% more likely to die from melanoma than women. This higher risk of death was observed across all tumor thicknesses and age ranges even after multivariable adjustment.
Several characteristics that predict mortality also predict recurrence: higher stage, older age, and male gender were all statistically significantly associated with recurrence in multivariable models, even after accounting for treatment. Men had a higher risk of recurrence compared to women (HR: 1.70, 95% CI: 1.19‐2.43), suggesting that men may benefit from more intensive follow‐up with more frequent screening intervals than women. Socioeconomic status, comorbidity, and race/ethnicity were not predictors of recurrence. Studies of melanoma recurrence are limited, and no population‐based studies have been conducted in the United States.[18](#cam42252-bib-0018){ref-type="ref"}, [19](#cam42252-bib-0019){ref-type="ref"}, [20](#cam42252-bib-0020){ref-type="ref"}, [21](#cam42252-bib-0021){ref-type="ref"}, [22](#cam42252-bib-0022){ref-type="ref"}, [23](#cam42252-bib-0023){ref-type="ref"} Findings from previous studies were consistent with ours; increasing age, male gender, and tumor thickness were important predictors of recurrence. Prior studies did not address comorbidities, race/ethnicity, nor socioeconomic status.
We observed a lower rate of recurrence in our population compared to prior studies (8.8% vs 12%‐30%).[18](#cam42252-bib-0018){ref-type="ref"}, [19](#cam42252-bib-0019){ref-type="ref"}, [23](#cam42252-bib-0023){ref-type="ref"} There are several possible explanations for differences in rate of recurrence, including differences between the study populations in distribution of stage at diagnosis, age and gender, or clinical characteristics including treatment and surveillance. The difference could also be due to differences in the definition of recurrence. Our definition is used by certified tumor registrars to monitor disease, and requires a "disease free" interval to precede a documented recurrence. Berger et al[23](#cam42252-bib-0023){ref-type="ref"} limited their analysis to cases diagnosed at stage II, and Rockberg et al[19](#cam42252-bib-0019){ref-type="ref"} included instances of disease progression in their definition of recurrence as well as cases diagnosed at stage IV. Our study is most similar to that of Lyth et al[18](#cam42252-bib-0018){ref-type="ref"} who examined prognostic factors for cases diagnosed at stages I‐II, reported a recurrence rate of 12%, and found that tumor thickness was the predominant risk factor for recurrence. Similarly, we found that stage (and in sensitivity analysis, tumor thickness) was a strong predictor of recurrence, along with gender and age.
We acknowledge the limitations of our study. We relied upon NAACCR definitions in the tumor registry to distinguish between chemotherapy and biological response modifiers, and during the study period, classifications and registry capture of some agents may have changed. Because use of any chemotherapy or biological response modifiers was uncommon in our population (\~4% overall), we do not believe this biased our analysis. We also lack information on genetic susceptibility and did not examine specific mutations nor histopathologic features that have been previously associated with survival.[12](#cam42252-bib-0012){ref-type="ref"}, [15](#cam42252-bib-0015){ref-type="ref"} We did not have individual level data on education or income; thus, to asses socioeconomic status, we relied on census block level information based on home address, which may have introduced some misclassification. We have a limited number of non‐white patients in our population, and therefore, we were unable to examine differences that may exist in survival or recurrence by race/ethnicity. Finally, while our well‐defined population is a strength in many ways, it also may limit the generalizability of our findings.
Our study also has several strengths. Using our extensive data systems allows complete capture of cancer data and the ability to follow patients over an extended period of time. Our recurrence data, which are captured by tumor registrars using systematic medical record review, are a notable strength of this study. Few studies can examine recurrence rates, and thus, this adds a unique and valuable contribution to our knowledge of melanoma. Our findings reflect usual care in a community setting for an insured population. As such, access to care is unlikely to be an important confounder to our findings, and our observations may reflect a more accurate picture of disease characteristics, recurrence and survival than studies in a tertiary care setting.
To our knowledge, this is the first population‐based study in the United States to examine patient characteristics associated with risk of recurrence. We found that stage, male gender and age are associated with recurrence. These characteristics are also associated with overall survival, along with socioeconomic status and the presence of multiple comorbidities. Because men have an increased risk of both recurrence and death, they may benefit from more intensive follow‐up than women.
CONFLICT OF INTEREST {#cam42252-sec-0010}
====================
The authors have no conflicts of interest.
DATA AVAILABILITY STATEMENT {#cam42252-sec-0011}
===========================
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Supporting information
======================
######
######
Click here for additional data file.
The authors thank Ms. Valerie Paolino for her assistance in formatting the article.
|
{
"pile_set_name": "PubMed Central"
}
|
CINCINNATI — The first goal is under glass now, the National League Central championship for the Cincinnati Reds.
And with nine games left in the season, the questions are, “Where to and what now?” What is there to maintain the interest for two weeks before the playoffs commence?
It would be easy for the team to relax, lick its wounds from the long, long season and to spend the time to recuperate and regenerate.
There is, though, one problem. After Sunday night’s 5-3 loss to the Los Angeles Dodgers, the Reds have nine games remaining, three each with Milwaukee, Pittsburgh and St. Louis. All three teams are still within grasps of a wild card invitation.
So wouldn’t it behoove the Reds to put their best feet on the field, do their best to win each game as a matter of maintaining baseball’s integrity?
And then there is the matter of the seeding for the postseason. The Reds and the Washington Nationals are chest-to-chest for the best record in the National League, so shouldn’t the Reds try to beat out the Nationals?
The team with the best record, the No. 1 seed, gets to play the survivor of the one-game wild card play-in game. The No. 2 seed would play the ever-dangerous, pitcher-loaded San Francisco Giants.
It’s a slippery slope, one that for now is being climbed by acting manager Chris Speier, running the clubhouse and the dugout while manager Dusty Baker recovers from a few days of testing and medications from a irregular heart beat.
When Baker will return to the manager’s chair is unknow and his doctors and the Reds are being cautious to the extreme because health trumps baseball triumphs.
Baker left Northwestern Hospital in Chicago Sunday morning and met with the team after batting practice Sunday night, but did not put on his uniform. He watched the game from a private box in civilian clothes.
After Sunday’s game, Speier is 7-2 for the nine games he has managed during Baker’s absences.
As he readied himself to meet the media before Sunday’s game, Speier had to put down a full plate of broccoli-and-cheese, a strange pre-game snack.
Speier, ever the good soldier and tight friends with Baker, was hoping Baker could return to duty Tuesday night against Milwaukee after the Reds have a day off Monday.
“We’ll see that hopefully with the day off tomorrow and him getting a real relaxing day at home tonight he’ll be able to come back Tuesday. We’ll see how he feels,” said Speier.
Now that the Reds have clinched the National League Central, thoughts turn to resting some of the regulars to revive them for the playoffs. But the Reds finish their schedule against the Dodgers, Milwaukee, Pittsburgh and St. Louis, all still in the wild card playoff snapshot. So it behooves the Reds to do their best to win every game.
“That won’t be too difficult with our personnel because we put a competitive team out there every day,” said Speier. “But we are going to take care of ourselves and make sure the people that need a day or two to get their injuries under wrap, we’ll have to take that opportunity. But it won’t be hard.”
Joey Votty, Drew Stubbs, Ryan Hanigan and Homer Bailey were the regulars in Sunday’s lineup, along with semi-regular Todd Frazier. Ryan Ludwick remained out of play while nursing his tender groin.
“We need to give Ludwick time to get that groin healed and Scott Rolen needs a couple of extra days,” said Speier. “He was in tonight’s lineup, but now we have a chance to give him two days off.”
The other questeion involves the starting pitchers, none of which has missed a start all year. Will anybody be rested or will anybody’s starts be shortened to preserve innings?
“I haven’t spoken with pitching coach Bryan Price or Dusty Baker beause that is their domain,” said Speier. “I’ll sit down with Bryan later today and see what he wants to do with Homer Bailey (Sunday’s starter), but I don’t see any reason to hold back.”
And he wasn’t held back. He pitched 6 2/3 innings before his night unraveled in the seventh inning.
He gave up a second-inning home run to Adrian Gonzalez and had a 1-1 tie entering the seventh. Then he gave up a second home run to Gonzalez to make it 2-1 and was eventually charred with three more during a four-run inning.
After missing 12 days, closer Aroldis Chapman pitched the ninth inning Saturday, coming in with the team ahead, 6-0.
“I thought Chapman looked good, I did,” said Speier “I was happy that when he was off the strike zone, he was done. That’s a good sign that he is getting out in front and staying on top of the ball.
“It was a good position for him to have a soft landing, just to get that inning in,” Speier said. Mat Latos began the game and threw eight shutout innings and led, 3-0. Speier said he planned to send Latos back out for the ninth, but when the Reds scored three in the eighth for a six-run lead, it was perfect to slip Chapman into the game for his comeback.
“We’re going to ease him back into the closer’s role, maybe another time or two (without save situations),” said Speier. “Latos was pitching so well I was going to let him finish, but when we added the runs I felt that was a great opportunity to get back in with a soft landing. He needs to pitch.
“I didn’t see any upside for Mat to go back. I mean, if he got hurt or did something crazy I would have had to live with that and didn’t want to take that chance,” Speier added. “It was big for us to get Chapman back out there as soon as we could to see what we have.”
Article continues below ...
|
{
"pile_set_name": "OpenWebText2"
}
|
Role of biochemical markers of bone turnover as prognostic indicator of successful osteoporosis therapy.
Most of the currently available anti-osteoporosis medications promptly and significantly influence the rate of bone turnover. Biochemical markers of bone turnover now provide a high sensitivity to change, allowing the detection of these bone turnover changes within a couple of weeks. Since the anti-fracture efficacy of inhibitors of bone resorption or stimulators of bone formation appears to be largely independent of baseline bone turnover, biochemical markers do not appear to play a significant role in the selection of one particular drug, for an individual patient. However, there are consistent data showing that short-term changes in biochemical markers of bone turnover may be significant predictors of future changes in bone mineral density or fracture reduction, hence suggesting that bone turnover markers play a significant role in the monitoring of anti-osteoporosis therapy.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
[Direct radiographic magnification of the hands. A critical analysis].
The value of direct magnification radiography of the hand and the wrist has been studied in 128 patients affected by rheumatic diseases. Only in a small group (3.17%) magnification determined a higher percentage of correct diagnosis; in the 17.06% of cases direct magnification radiography provided useful increase in information but did not change the diagnosis correctly reached by conventional techniques. In most cases (79.76%) magnification provided only a better image quality but no more information helpful for the diagnosis, because of the high level achieved by conventional techniques. Therefore direct magnification radiography must be used only in selected cases and not as routine radiographic technique.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Attributes and views of families with food allergic children recruited from allergy clinics and from a consumer organization.
A significant body of food allergy research has been conducted with families recruited from consumer organizations (COs). However, there has been no systematic comparison of the characteristics of such families with those attending specialist allergy clinics (ACs), nor have parental views on food allergy COs been examined in detail. To address these questions, 44 families with food allergic children recruited from hospital clinics and 25 families recruited from a CO: (i) completed a survey with items concerning demographic details, allergy features, and sources of allergy information, and (ii) participated in qualitative interviews and focus groups concerning their experiences. Significant differences were found in reported number of food allergies and nut allergy, seeking of second opinions, adrenaline autoinjector possession (but not use) and sources of food allergy information. Parents valued COs as sources of practical information and emotional support, but viewed advice which did not acknowledge their individual circumstances and heightened anxiety from contact with other anxious parents as being unhelpful. Research conducted with CO members is valuable, but may have limited generalizability to other populations. To supplement the information and support provided by ACs, all parents should be given the opportunity to join a CO, with guidance from their clinician towards those aspects of membership which are most likely to be helpful.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Determination of dissociation constants for protein-ligand complexes by electrospray ionization mass spectrometry.
A fully automated biophysical assay based on electrospray ionization mass spectrometry (ESI-MS) for the determination of the dissociation constants (KD) between soluble proteins and low molecular mass ligands is presented. The method can be applied to systems where the relative MS response of the protein and the protein-ligand complexes do not reflect relative concentrations. Thus, the employed approach enables the use of both electrostatically and nonpolar bound complexes. The dynamic range is wider than for most biological assays, which facilitates the process of establishing a structure-activity relationship. This fully automated ESI-MS assay is now routinely used for ligand screening. The entire procedure is described in detail using hGHbp, a 25-kDa extracellular soluble domain of the human growth hormone receptor, as a model protein.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Contact Us
Name *
Phone *
Email *
Are you selling a house?I have a property to sell
Comment
Contact me with property related news and offers from the Countrywide Group and our carefully selected partners.Countrywide can pass my details to carefully selected third parties so they can offer me financial or property related services. Please tick this box if you are happy for us to share your personal data in this way.
SPRING INTO YOUR NEW PROPERTY- OPEN HOUSE 3RD MARCH AT 10.00AM-11.00AM- CALL FOR MORE INFO The attached property is the real selling point of this house making it ideal for anybody required additional space/living accommodation. The main house offers 3 bedrooms...
Bluebell Walk is a superb new development offering a fantastic collection of luxury 4 & 5 bedroom FREEHOLD homes available in a variety of styles and constructed by quality craftsmen to the highest specification and incorporating high energy efficiency standards...
Deceptively spacious and well presented three bedroom detached true bungalow convenient for both Cuerden Valley Country Park and the motorway network, from an entrance porch leads to the kitchen diner with integrated appliances, extended lounge, inner hallway, newly...
PROPERTY OF THE WEEKL- REDUCED FROM 204,950 AND OFFERED WITH NO CHAIN AND READY TO MOVE INTO- A modern 4 bed detached family home set in a popular cul-de-sac in Rishton with double driveway, utility, downstairs W.C and study. The accommodation comprises of a entrance...
Offered to the market with no chain is this 3 bedroom semi detached in a sought after Ribble Valley location just off Somerset Drive in Wilpshire. The accommodation comprises of an entrance vestibule, hallway, lounge, dining modern high gloss kitchen, downstairs...
Fantastic size house comprising two lounges, dining kitchen, orangery, ground floor WC, four good sized bedrooms, family bathroom and Jack and Jill en-suite to master, outside to the rear is a garden with decked patio and separate garage. Viewing essential..
Red House Gardens is a small development of 3 & 4 bedroom family homes situated in a convenient residential area just three miles from the centre of Blackburn. Ideally placed for schools, shops and local amenities.The development is well placed for easy commuting to...
|
{
"pile_set_name": "Pile-CC"
}
|
Construction of plasmid vectors for gene cloning in Escherichia coli and Bacillus subtilis.
The construction and some properties of new hybrid plasmids which are able to replicate in both Escherichia coli and Bacillus subtilis are presented. A 5.5 Md hybrid plasmid pJP9 was constructed from pBR322 (Tc, Ap) and pUB110 (Nm) plasmids. pIM1 (7.0 Md) and pIM3 (7.7 Md) plasmids are its different erythromycin resistant derivatives. Tetracycline, ampicillin, neomycin and possibly erythromycin resistance genes are expressed in E. coli while neomycin and erythromycin resistance genes are expressed in B. subtilis. Insertional inactivation of only one gene is possible using the pJP9 plasmid as a vector in B. subtilis. However, insertional inactivation of at least two different genes can be achieved and monitored in E. coli and B. subtilis transformants in cloning experiments with PIM1 and pIM3 plasmids. Insertional inactivation of antibiotic resistance genes present in pJP9 plasmid was achieved by cloning of Streptococcus sanguis DNA fragments generated by appropriate restriction endonucleases. The pJP9 plasmid and its derivatives were found to be stable in both hosts cells.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Coagulation abnormalities associated with localized hemorrhage in the neonate.
Serial coagulation studies were completed on six neonates with apparent or inapparent localized hemorrhage. The sites of the hemorrhages were intracranial (2), gastrointestinal (2), subperiosteal (1), and pulmonary (1). The studies revealed an increased factor VIII level, decreased platelet count, and a short PTT. Since similar findings occur in disseminated intravascular coagulation, it is possible that coagulation abnormalities associated with localized hemorrhage result from similar mechanisms. These observations suggest that occult and clinically unrecognized hemorrhage can be suspected by serial coagulation studies of sick infants.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
A Complex body of religious practices that spread throughout the Hindu, Buddhist, and Jain traditions; a form of spirituality that seemingly combines sexuality, sensual pleasure, and the full range ofMore
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
Finding duplicate rows in SQL table issue
I have a table that has duplicate data. A field in the table has some typo's so I am trying to find all the rows in this table where the other column does not have the same information.
For example
partnumber
warehouse int_pn ext_pn
========= ====== ======
1 ABC100 XYZ001
2 ABC100 XYZ001
1 ABC200 XYZ021
2 ABC200 XYY021
3 ABC999 XYZ999
In the table above, int_pn ABC200 exists in two warehouses (1 and 2) but in ext_pn for warehouse 2 there is a typo
I am trying to list all the rows where int_pn appears more than once but have a different ext_pn
http://sqlfiddle.com/#!6/96248d/1
The result of a query should return
result
warehouse int_pn ext_pn
========= ====== ======
1 ABC200 XYZ021
2 ABC200 XYY021
I am having a hard time building a SQL query to do this
Thanks
A:
You can achieve this by self-join
SELECT p1.warehouse
,p1.int_pn
,p1.ext_pn
FROM partnumber AS p1
INNER JOIN partnumber AS p2 ON p1.int_pn = p2.int_pn
AND p1.ext_pn != p2.ext_pn
Demo: http://sqlfiddle.com/#!6/96248d/28
|
{
"pile_set_name": "StackExchange"
}
|
Dietary fatty acid modulation of mucosally-induced tolerogenic immune responses.
Immunological unresponsiveness or hyporesponsiveness (tolerance) can be induced by feeding protein antigens to naive animals. Using a classical oral ovalbumin gut-induced tolerance protocol in BALB/c mice we investigated the effects of dietary n-6 and n-3 polyunsaturated fatty acids (PUFA) on high-and low-dose oral tolerance (and in non-tolerised animals, i.e. effects of antigen challenge alone) in relation to lymphoproliferative, cytokine and antibody responses. Fish oil rich in long-chain n-3 fatty acids decreased both T-helper (Th) 1- and Th2-like responses. In contrast, borage (Borago officinalis) oil rich in n-6 PUFA, of which gamma-linolenic acid is rapidly metabolised to longer-chain n-6 PUFA, increased Thl-like responses and decreased Th2-like responses, and possibly enhanced suppressor cell or Th3-like activity. These findings are in general agreement with other studies on the effects of long chain n-3 PUFA on immune system functions, and characterise important differences between long-chain n-3 and n-6 PUFA, defining more precisely and broadly the immunological regulatory mechanisms involved. They are also discussed in relation to autoimmune disease.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
1. Field of the Invention
The present invention relates to inflatable flexible molds. More particularly, the present invention relates to inflatable flexible molds and mold components, and the method of making the same, which can combined to build various structures, for example, a space lab.
2. Description of the Prior Art
In the prior art, flexible molds having multiple layers of flexible material defining at least one inner cavity therebetween are known. The molds can have fluid mold material injected between two of the layers to fill one of the cavities. If a plurality of cavities exist within the mold then those cavities could also be filled with fluid mold materials or gasses. Once the fluid mold material is injected into at least one of the cavities, the flexible mold assumes its final shape. When the mold material hardens, a solid structure in the final shape of the mold is obtained. Various methods of forming the final solid structure using a multiple of flexible molds is shown in the prior art.
U.S. Pat. No. 4,825,599 issued May 2, 1989 to Swann, Jr. discloses a flexible mold which forms a structure to be used in space when inflated after being placed into space. Several types of foam material are used as mold materials, each one being injected into one of the cavities of the mold. The inner portion of the mold is filled with gas or liquid to hold the final shape of the structure, in this case spherical, as the foam hardens.
U.S. Pat. No. 3,110,552 issued Nov. 12, 1963 to Voelker discloses a flexible mold which forms a structure to be used in space when inflated after being placed into space. Voelker discloses that the final shape of the structure may be a toroid. The mold has an inner and outer layer in which foam is injected therebetween via an entry port. If desired, a vent may be provided at the far end of the cavity between the inner and outer layers to permit the escape of gases. Between the inner and outer layers, fibers or webs may be attached so that the structure maintains the desired final shape before the foam hardens. Also, the inner layer may be inflated to form an inner cavity and to offset the pressure generated by the foam between the inner and outer layers.
U.S. Pat. No. 3,329,750 issued Jul. 4, 1967 to Growald discloses a flexible mold for constructing a building structure. Growald uses a multi-layered flexible mold in which one cavity is inflated to extend the structure to its desired shaped. Afterwards, foam is injected into the other cavity. Once the injected foam has hardened, the inflated cavity may be deflated and then filled with foam as well.
None of the above inventions and patents, taken either singly or in combination, is seen to describe the instant invention as claimed.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
D-Aspartate: An endogenous NMDA receptor agonist enriched in the developing brain with potential involvement in schizophrenia.
Free D-aspartate and D-serine occur at substantial levels in the mammalian brain. D-Serine is a physiological endogenous co-agonist for synaptic N-Methyl D-Aspartate (NMDA) receptors (NMDARs), and is involved in the pathophysiology of schizophrenia. Much less is known about the biological meaning of D-aspartate. D-Aspartate is present at high levels in the embryo brain and strongly decreases at post-natal phases. Temporal reduction of D-aspartate levels depends on the post-natal onset of D-aspartate oxidase (DDO), an enzyme able to selectively catabolize this D-amino acid. Pharmacological evidence indicates that D-aspartate binds to and activates NMDARs. Characterization of genetic and pharmacological mouse models with abnormally higher levels of D-aspartate has evidenced that increased D-aspartate enhances hippocampal NMDAR-dependent synaptic plasticity, dendritic morphology and spatial memory. In line with the hypothesis of a hypofunction of NMDARs in the pathogenesis of schizophrenia, it has been shown that increased D-aspartate levels also improve brain connectivity, produce corticostriatal adaptations resembling those observed after chronic haloperidol treatment, and protects against prepulse inhibition deficits and abnormal circuits activation induced by psychotomimetic drugs. In healthy humans, genetic variation predicting reduced expression of DDO in post-mortem prefrontal cortex is associated with greater prefrontal gray matter and activity during working memory. On the other side, evaluation of D-aspartate content in post-mortem patients with schizophrenia has shown a significant reduction of this D-amino acid in the prefrontal cortex and striatum. Generation of mouse models with reduced embryonic levels of D-aspartate may disclose unprecedented role for D-aspartate in developmental brain processes associated with vulnerability to psychotic-like symptoms.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
package com.zdcf.leetcode;
import java.util.Stack;
//20. Valid Parentheses
//Given a string containing just the characters '(', ')', '{', '}', '[' and ']', determine if the input string is valid.
//
// An input string is valid if:
//
// Open brackets must be closed by the same type of brackets.
// Open brackets must be closed in the correct order.
// Note that an empty string is also considered valid.
//
// Example 1:
//
// Input: "()"
// Output: true
// Example 2:
//
// Input: "()[]{}"
// Output: true
// Example 3:
//
// Input: "(]"
// Output: false
// Example 4:
//
// Input: "([)]"
// Output: false
// Example 5:
//
// Input: "{[]}"
// Output: true
public class ValidParentheses {
public boolean isValid(String s) {
Stack<Character> stack= new Stack<Character>();
for(int i=0;i<s.length();i++){
Character c = s.charAt(i);
if(!stack.isEmpty()){
if(stack.peek()=='('&&c==')'){
stack.pop();
continue;
}
if(stack.peek()=='['&&c==']'){
stack.pop();
continue;
}
if(stack.peek()=='{'&&c=='}'){
stack.pop();
continue;
}
}
stack.push(c);
}
return stack.isEmpty();
}
}
|
{
"pile_set_name": "Github"
}
|
> From: [eLinux.org](http://eLinux.org/ITJ2005Detail_2-2 "http://eLinux.org/ITJ2005Detail_2-2")
# ITJ2005Detail 2-2
## UHAPI4Linux, an Open Source implementation of the Universal Home API
- By John Vugts (Philips) and Arjen Klomp (Logica CMG)
### Description
This session will cover the following topics:
- Universal Home API (UHAPI) overview
([Media:UHAPI\_Forum\_Introduction\_CELF\_20050614.pdf](http://eLinux.org/images/d/d5/UHAPI_Forum_Introduction_CELF_20050614.pdf "UHAPI Forum Introduction CELF 20050614.pdf"))
& status
([Media:Overview\_UHAPI.pdf](http://eLinux.org/images/8/85/Overview_UHAPI.pdf "Overview UHAPI.pdf"))
- Combination of UHAPI interfaces and [DirectFB](http://eLinux.org/DirectFB "DirectFB")
([Media:DirectFB\_UHAPI.pdf](http://eLinux.org/images/5/5c/DirectFB_UHAPI.pdf "DirectFB UHAPI.pdf"))
- Explanation of UHAPI 4Linux design & implementation
([Media:LogicaCMG-UHAPI4Linux20050614.pdf](http://eLinux.org/images/a/a9/LogicaCMG-UHAPI4Linux20050614.pdf "LogicaCMG-UHAPI4Linux20050614.pdf"))
- Launch of "UHAPI 4Linux" open source project : open source project
that develops an open source implementation of UHAPI interfaces on
standard (multimedia) PC hardware
[[1]](http://sourceforge.net/projects/uhapi4linux/) CVS
archive:[[2]](http://cvs.sourceforge.net/viewcvs.py/uhapi4linux/#dirlist)
[Category](http://eLinux.org/Special:Categories "Special:Categories"):
- [Events](http://eLinux.org/Category:Events "Category:Events")
|
{
"pile_set_name": "Github"
}
|
Large amino acid transporter 1 mediated glutamate modified docetaxel-loaded liposomes for glioma targeting.
The therapeutic outcome of glioma treatment is rigorously limited by blood-brain barrier (BBB) and infiltrating growth of glioma. To tackle the dilemma, more and more attentions were focused on developing nutrient transporters-mediated dual-targeted drug delivery system, in one side for BBB penetration, another for intracranial glioma targeting. Herein, Large amino acid transporter 1 (LAT1), overexpressed both on BBB and glioma cells, was selected as a target. Docetaxel-loaded glutamate-d-α-tocopherol polyethylene glycol 1000 succinate copolymer (Glu-TPGS) functionalized LAT1-targeting liposomes (DTX-TGL) were applied to enhance the BBB penetration and glioma therapy. The in vivo results of the fluorescent image indicated that TGL possessed an effective BBB penetration than that of unmodified ones in mice. The LAT1 targeting effcicacy and cell cytotoxicity of TGL were investigated in C6 glioma cells. Compared with unmodified liposomes, a significant higher cellular uptake and cell cytotoxicity was found in TGL treated group. Our results indicated that LAT1-targeting docetaxel-loaded liposome paves up a new direction using LAT1 transporter as a good target in designing brain glioma-targeting nanosystems.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Donate
BECOME A SUSTAINING MEMBER!
For as little as $12 a month you can help the most! Becoming a sustaining member provides reliable support for the diapers our families count on and reduces budget fluctuations for us. When you become a sustaining member, your donation will secure a happy, healthy diapered baby!
The average baby uses 8 to 12 diapers a day, equivalent to approximately 4,300 diapers a year. At an average per diaper cost from a retail store of approximately $0.36 per diaper (depending on size of diaper, size of pack and retail location), the typical family can spend up to $100 a month on diapers for one baby.
|
{
"pile_set_name": "Pile-CC"
}
|
Spread the love
Stow, OH — The Stow police department is currently investigating a complaint made against their department this week after one of their officers shot a family’s cat and then tossed it in the dumpster.
“It’s not a complaint on the officer, but a complaint on procedure,” Police Chief Jeff Film told the Stow Sentry on Thursday.
“I want to say I am totally responsible,” Lynn Maganja, the owner of the 12-year-old cat named Marley, said. Maganja was out of town when her cat was shot. She admits that her son accidentally let the cat out of their home but questions the officer’s response.
While Maganja takes responsibility for her cat getting out and then getting injured when it was struck by a car, she questions the officer’s decision to shoot the cat and then throw it in the dumpster.
“This is all sort of heartbreaking to my family and I just think people need to be aware of it,” said Maganja.
Pam Busch, an area animal-rights advocate, said Thursday that she submitted the handwritten complaint the day before. She said she believes that Marley should have been taken to a veterinarian to determine his condition and whether he should be euthanized, according to the Sentry.
“Even if it’s bleeding, we don’t know the severity of the injuries. We’re not professionals,” said Busch.
What’s more, both Busch and Maganja also question the cop’s decision to toss the family cat in the trash after he shot it.
“I don’t think an animal should be thrown into trash like a piece of trash because they are not, no more than you would do it to a human,” said Busch.
As the Sentry reports, Maganja said after her son told her Marley had gotten out, she went on Facebook and happened to see a photo someone posted of Marley, still alive at the time, in a cardboard box. Realizing the cat looked like Marley, she called the police and spoke to the officer, who confirmed that he had shot the cat.
“I was in shock,” said Maganja.
The cat was apparently alive enough for someone to pick him up, put him in a box and post a picture to Facebook to see if anyone had lost a cat. However, the officer disagreed.
According to the police department’s policy and procedures manual, in a chapter concerning “use of force,” officers can use lethal force on an animal for several reasons, including “To relieve the animal of undue suffering after determining means of care would be ineffective or are unavailable,” the Sentry noted.
“It’s not that often that we have dispatched domestic animals, but it does happen and it’s unfortunate,” said Film. “The animal was suffering gravely, the determination was to end its suffering.”
Surprisingly, TFTP has reported on officers shooting cats before. In December 2014, we reported on Officer Barry Accorti, who shot five kittens in front of terrified children.
Shortly after Accorti killed the kittens, another officer, with the Gorham Police Department in Maine shot a cat because he thought it was rabid. It was not. Lt. Christopher Sanborn opened fire on a cat after he called in back up mistakenly thinking that the feline was a danger. Luckily, the cat survived the bullet holes and made a recovery.
Spread the love
Sponsored Content:
|
{
"pile_set_name": "OpenWebText2"
}
|
Acute myocardial infarction and left bundle branch block.
Complete left bundle branch block often masks old as well as acute myocardial infarctions. However, a diagnosis of acute myocardial infarction in the presence of complete left bundle branch block can be made when the acute injury current is large enough to modify the secondary repolarization abnormalities of left bundle branch block. Under these circumstances the classical ST-T changes of an acute infarction may evolve in serial electrocardiograms.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
What are the key policy factors influencing courts and tribunals attempting to balance open justice against other rights and interests in newsworthy cases involving forensic mental health patients?
Associate Professor Tom Morton from UTS, ABC lawyer Hugh Bennett and I examined this question – and the related issue of whether the media could report upon such cases and identify the patients involved – in our recent article in the leading journal in the field, the Journal of Media Law.
Open justice in mental health proceedings need not be viewed in a vacuum. There are strong parallels with numerous other situations where the legislature and the courts find and apply exceptions to the open justice principle. There is much scope for consistency across Australian jurisdictions and across the many situations where the restrictions are in place because of different vulnerabilities faced by key participants in the court process – mental health patients, children, sexual crime victims, family law parties, protected witnesses and, in two Australian states, even those accused of sexual offences until after the committal stage of proceedings.
There is a strong argument that the courts should be most transparent when the public gaze is so sharply focussed upon them, and that public education about the workings of the justice system in the important area of mental health will be most effective when citizens are intrigued by a particular story and know its background. The courts might acknowledge that in some circumstances a story can be both “interesting to the public” and “in the public interest” – and that perhaps the two notions might not have to be mutually exclusive as Lord Wilberforce so famously suggested.
Full contents of the edition and subscription details can be seen here.
We compared four forensic mental health cases in Australia and the UK and highlighted some of the key competing rights and interests at stake when the news media or other parties seek to have mental health proceedings opened and to identify the patients involved. The approaches of the tribunals and courts we studied showed the competing policy considerations in such applications were by no means clear-cut. They varied markedly from case to case with regards to the potential impact on the patient and other stakeholders and in their respective public interest value in the stories being told to broader communities. Policies around publicity are complicated when expert psychiatric opinion varies on the potential impact on the mental health and treatment regime for the patient.
The weighing of such important rights and interests is not a precise science where a pre-set formula will apply. Of course, important differences between Australian and UK jurisdictions inform such decisions, including different statutory frameworks for the particular tribunals, together with the lack of a formal human rights framework in Australia, comparable with the European Convention on Human Rights, which affords privacy and free expression rights. In Australia, these considerations draw upon the common law, because there is as yet no actionable tort of privacy invasion and free expression is limited to a High Court-designed implied constitutional freedom of communication with respect to “discussion of government and political matters”. Further, the various mental health tribunals dealing with applications from or regarding forensic patients operate within their own statutory frameworks, rules and practice directions which sometimes bind, and in other circumstances guide, their decisions on whether hearings can be held in public and, if so, whether parties and other participants might be identified.
In Australia alone, the nine jurisdictions have taken a variety of approaches to whether such hearings are held in public and whether parties must be anonymised in any reporting permitted. Open justice can be viewed as a policy continuum, ranging from closed hearings and a total ban on reporting at one end through to open hearings with full identification of parties allowed as part of a fair and accurate report of proceedings at the other. Somewhere in between are attempts to strike a balance between open justice and competing rights and interests with partial permissions; where the public or the media might be admitted to proceedings with a range of conditions placed upon the extent of identification of parties or witnesses allowed.
We developed this list of key policy factors elicited from the cases reviewed, influencing whether a forensic patient or former patient might be given a public hearing or be identified in proceedings:
Specific legislation, regulations, rules and practice directions relating to privacy and anonymity in hearings involving forensic patients or former patients;
Whether there is informed consent from the patient to identification and publicity of his or her case;
The extent to which a public trial and/or identification impacts upon on the life (ECHR Article 2), ill-treatment (ECHR Article 3), liberty (ECHR Article 5), and other rights, dignity and self respect of patients; including the impact of publicity and identification on their mental health and well being, ongoing treatment, safety and ease of re-entry to the community after treatment/rehabilitation;
The impact of a public hearing or identification upon the right to privacy (ECHR Article 8) of the patient and other participants, and the confidentiality of personal medical details;
The historic principle of open justice (ECHR Article 6): fundamental principles of transparency and justice ‘being seen to be done’, as espoused in Scott v. Scott; the public interest in transparency of mental health processes and proceedings;
Freedom of expression and communication (ECHR Article 10); including the freedom of expression of the media, patients and other participants like hospital and prison personnel;
The public’s right to know: public understanding of the mental health system and its treatment of patients; the public interest in knowing the outcome of highly publicised or emblematic cases; the public interest in knowing of wrongdoing in the mental health system; and the public interest in the safety and security of their communities;
Impact of identification and publicity upon other parties, including hospital staff, other patients, victims and their families;
Public administration costs (economic and organisational) associated with implementing effective systems of publicity and identification. (For example, hospitals’ and courts’ management of media inquiries, extra costs of security for patient, special accommodation for public hearings, expense of installing video links etc);
Stage of the process – for example, publicity and identification might be allowed on early applications related to conditions while institutionalised, but perhaps refused when re-entry to society is imminent or has already passed;
The track record of the applicant media organisation/s in prior coverage and ethical management of privacy and consent issues, in this and perhaps in other comparable cases; the nature of the proposed program or publication and whether it is likely to be of a professional standard, balanced, accurate, reflective of a range of stakeholder views and sensitive to the patient’s experiences; and the context and focus of the identification of the patient in the media output;
Full contents of the edition and subscription details can be seen here.
Disclaimer: While I write about media law and ethics, nothing here should be construed as legal advice. I am an academic, not a lawyer. My only advice is that you consult a lawyer before taking any legal risks.
|
{
"pile_set_name": "Pile-CC"
}
|
Lindane-induced cytotoxicity and the role of vitamin E in Chinese Hamster Ovary (CHO-K1) cells.
Lindane, a toxic insecticide from the persistent organic pollutants (POP's) group, may act as an endocrine disrupter affecting crucial tissues of reproductive system. In this study a Chinese Hamster Ovary cell line (CHO-K1) was applied to assess the potential of lindane cytotoxicity at the cellular level. The methods of Trypan blue exclusion, MTT and Kenacid blue assays were used to assess cytotoxicity and confirmed a decrease in the number of viable CHO-K1 cells at 34.4-344 microM lindane during 24, 48 and 72 hours of exposure. The cell proliferation tests showed significant inhibition (p < 0.025-0.001 vs control) and a progressive increase in toxicity with increasing lindane concentrations. Corresponding IC(50) values were determined with each applied method. After 72 h of lindane exposure, IC(50) values were 184 microM according to the Trypan blue method and 272 and 256 microM with the Kenacid blue and MTT assays, respectively. Morphological changes induced by the cytotoxicity of lindane were followed by the fluorescence microscopy and only necrotic cells were detected. Vitamin E (25 and 50 microg/mL) was used for protection of ovarian cells against lidane-induced oxidative stress damage, and lipid peroxidation was postulated as a possible mechanism of lindane toxicity. The viability of cells pre-incubated with vitamin E was significantly enhanced (up to p < 0.025) compared to the results observed in cells exposed to lindane only, but vitamin E treatment could not prevent complete lindane-induced cytotoxicity. Results suggest that vitamin E may exert a slightly protective role in cell defense against lipophilic pro-oxidant xenobiotics such as lindane.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Eduardo Pérez (swimmer)
Eduardo Pérez (born 6 March 1957) is a Mexican former swimmer and amateur wrestler who competed in the 1976 Summer Olympics.In 1960's he got stabbed by fan, then he successfully returned to ring. He now has a nightclub in Florida.
References
Category:1957 births
Category:Living people
Category:Mexican male swimmers
Category:Male freestyle swimmers
Category:Olympic swimmers of Mexico
Category:Swimmers at the 1975 Pan American Games
Category:Swimmers at the 1976 Summer Olympics
Category:Pan American Games bronze medalists for Mexico
Category:Pan American Games medalists in swimming
Category:Competitors at the 1978 Central American and Caribbean Games
Category:Central American and Caribbean Games gold medalists for Mexico
|
{
"pile_set_name": "Wikipedia (en)"
}
|
NMR techniques have grown extensively over the past forty years, most notably in the medical instrumentation areas where in vivo examination of various parts of the human body can be seen and in clinical research laboratory uses. In addition there has been some use and interest in the application of these techniques to industrial instrumentation and control tasks. The present invention enables effective utilization (technically and economically) of pulsed NMR techniques in industrial areas to replace or complement existing optical and radiant energy-based instrumentation.
The following is a brief review of NMR theory and concepts pertinent to understanding the present invention. The term, magnetic resonance imaging, or MRI, used below, is an alternative name for NMR. NMR signal is more easily understood by the human eye and brain. Approximately one third of the elemental isotopes and certain compounds with non-zero spin quantum numbers are magnetically active and suitable for MRI detection.
In a simplified model, a spinning isolated nucleus will align itself either with or against a static magnetic field. There will be a nearly equal number of nuclei aligned in each direction since there is only a small energy difference between these two states so a thermal equilibrium exists between these two states. However, statistically there will be a small number of excess nuclei in the lower energy state. It is these excess nuclei which give rise to MRI signals. The term "nuclei" will subsequently refer only to magnetically active nuclei.
Nuclei in a magnetic field will precess similar to a spinning top, because there is an angular acceleration produced by the interaction of the magnetic field and the magnetic moment of the nuclei. This precession occurs in the direction of the magnetic field. Quantum mechanics shows that only a selected number of possible alignments is possible. The precessional frequency is determined by which alignment occurs and the magnetic properties of the nucleus being studied. The fundamental MRI signal is derived from inducing transitions between these different alignments. This is often done by using the magnetic component of an applied RF (radio frequency) signal. When this component is applied perpendicularly to the magnetic field a resonance occurs at a particular RF frequency where transitions between the different alignments happen. This resonance typically occurs in the Megahertz frequency ranges when a strong magnetic field is used. This field is in the 1 Tesla (10,000 Gauss) order of magnitude (i.e. 0.1-2 T).
The effect of a nucleus bonded in a lattice to other nuclei has a great effect on the resonance frequencies. The effect of the other components and the bonding create secondary effects and shielding which cause the resonance frequencies to be different. In effect the chemical environment affects the resonance frequencies and the signal strength.
Pulsed MRI spectroscopy is one specific technique to which the present invention is drawn. This technique uses a radio frequency burst or pulse which is designed to excite all the nuclei of a particular nuclear species. After the application of the pulse there occurs a free induction decay (FID) curve associated with the excited nuclei. Traditional Fourier Transform analysis generates a frequency spectrum which can be used to advantage in studying the nuclei of interest. The duration of the pulses, the time between the pulses, the pulse phase angle and the use of chemical reactants in the sample are parameters which affect the sensitivity of this technique. These frequency techniques are not easily useable in industrial applications, especially on-line applications.
An object of this invention is an improved measurement system which leads to accurate, fast determination of the types and quantity of the nuclear species of interest.
A further object of the invention is to utilize time domain analysis in achieving such system.
Another object of this invention is its application to the industrial, on-line problems of measuring the controlling processes.
The principal variable of interest is moisture, with distinction between free and bound water in an organic or inorganic substance based on hydrogen nuclei precession analysis. But other parameters can be measured based on hydrogen or other sensitive species including e.g., sodium. It is an object of this invention to accommodate a variety of such measuring tasks.
Another object is to accommodate the dynamics of industrial on-line applications including variations of density, temperature, packing and size factors, friction and static electricity, vibration and frequent, repetitive, cyclic and non-cyclic measurements.
Another object of the invention is to use magnetic resonance techniques in polymer analysis, including density and/or melt index measurement.
Another object is to enhance accuracy and reliability of data obtained.
It is an object of the invention to achieve the necessary practical economies consistent with the foregoing objects.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
# Contributor: Jakub Jirutka <[email protected]>
# Maintainer: Jakub Jirutka <[email protected]>
pkgname=binaryen
pkgver=40
pkgrel=1
pkgdesc="Compiler infrastructure and toolchain library for WebAssembly, in C++"
options="!check" # Tests require python2
url="https://github.com/WebAssembly/binaryen"
arch="all !s390x !mips !mips64"
license="Apache-2.0"
makedepends="cmake"
checkdepends="nodejs" # python2"
subpackages="$pkgname-dev"
source="binaryen-$pkgver.tar.gz::https://github.com/WebAssembly/binaryen/archive/version_$pkgver.tar.gz
link-dynamically.patch
ignore-type-limits-error.patch
fix-gcc8-wcatch-value.patch
gcc-march-aarch64.patch"
builddir="$srcdir/$pkgname-version_$pkgver"
case "$CARCH" in
x86) options="!check";; # XXX: two tests fail
esac
build() {
cmake \
-DCMAKE_BUILD_TYPE=None \
-DCMAKE_INSTALL_PREFIX=/usr \
-DCMAKE_INSTALL_LIBDIR=lib \
-DCMAKE_CXX_FLAGS="$CXXFLAGS" \
-DCMAKE_C_FLAGS="$CFLAGS" \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DBUILD_SHARED_LIBS=ON
make
}
check() {
# waterfall requires additional dependency
# gcc-tests fail, dunno why
python2 check.py --no-test-waterfall --no-run-gcc-tests
}
package() {
make install DESTDIR="$pkgdir"
rm "$pkgdir"/usr/share/binaryen/binaryen.js
}
sha512sums="89aa73c1686fb6d54c91990acbc7cd6c1bc7e6da57731bf009fe507c955c98c91582b5b9924c3c3f6a2d36d55ae73f1be79911cdce93dfd74954bca46861c8ad binaryen-40.tar.gz
9729655be0d952385de959bf7dd84a075b192fe4b221bb5c23e562a83a2bf9202a07536ad393157b23e0447f28bdd16283f64a63588ca42597bb59e9551219c8 link-dynamically.patch
3b95a197cd8805dfa714cf9f31adc1437b6d29bd4082f752c16d98c40cd024b110c02a412285c78251cb5d0b3080c0f9e4d45d8dd0166c71b9462b1610191ab8 ignore-type-limits-error.patch
f937a9f9f7f2cab97aa15ade3a800b8924755b27ae4e9e8951ee97dc79d54c95db28d3c71b32c2ed0b5711d6e1884c33cc307564abe759f6ea3c23db60d433a1 fix-gcc8-wcatch-value.patch
43db7456ce3b9a6239b24caa9fc9cb1a59975e742fa74ff6bfcdadbbb453afee62842aaa973596d78294704c7d76cb73eb7703f1d66e40b1bb164e86e5da5914 gcc-march-aarch64.patch"
|
{
"pile_set_name": "Github"
}
|
They will also be required to sign a confidential, proprietary information form. This form is in the Supplement Section where you got the attachments in your original message below. This confidentiality form requiring signature is attached as follows. EF
-----Original Message-----
From: Matthews, Ron
Sent: Monday, August 13, 2001 4:09 PM
To: '[email protected]'
Cc: Watson, Kimberly; Faucheaux, Eric; Gottsponer, Morgan
Subject: Caithness - Big Sandy Project
Attached are the engineering standards you requested on our latest telephone discussion with Kim Watson. Also attached is a cost estimate summary that the numbers were manually into an Excel spreadshhet. As stated, the full cost estimate will follow through regular mail along with a hard copy of the engineering standards. If you have any questions, please call Kim or myself.
<< File: 4701.doc >> << File: 5405-w7.doc >> << File: 5420-w5.doc >> << File: 5421-w9.doc >> << File: 5423-w3.doc >> << File: 5425-w4.doc >> << File: 5430-w4.doc >> << File: 5440.doc >> << File: Timberline Summary 010813.xls >>
|
{
"pile_set_name": "Enron Emails"
}
|
How to Use Fast Fill in Excel 2016
Excel 2016’s Flash Fill feature gives you the ability to take a part of the data entered into one column of a worksheet table and enter just that data in a new table column using only a few keystrokes. The series of entries appears in the new column, literally in a flash (thus, its name, Flash Fill).
The second Excel 2016 detects a pattern in your initial data entries, the rest of the entries in that series immediately appear in blank cells in rows below that you can then enter with a single keystroke. And the beauty is that all this happens without the need for you to construct or copy any kind of formula.
The best way to understand Flash Fill is to see it in action. In the following figure, you see a new data table consisting of four columns. The cells in the first column of this table contain the full names of clients (first, middle, and last), all together in one entry. The second, third, and fourth columns need to have just the first, middle, and surnames, respectively, entered into them (so that particular parts of the clients’ names can be used in the greetings of form e-mails and letters as in, “Hello Keith,” or “Dear Mr. Harper,”).
Data Table containing full names that need to be split up in separate columns using Flash Fill.
Rather than manually enter the first, middle, or last names in the respective columns (or attempt to copy the entire client name from column A and then edit out the parts not needed in First Name, Middle Name, and Last Name columns), you can use Flash Fill to quickly and effectively do the job. And here’s how you do it:
Type Keith in cell B2 and complete the entry with the down arrow or Enter key.
When you complete this entry with the down-arrow key or Enter key on your keyboard, Excel moves the cell pointer to cell B3 where you only have to type the first letter of the next name for Flash Fill to get the picture.
In Cell B3, only type J, the first letter of the second client’s first name.
Flash Fill immediately does an AutoFill-type maneuver by suggesting the rest of the second client’s first name, Jonas, as the text to enter in this cell. At the same time, Flash Fill suggests entering all the remaining first names from the full names in column A in column B.
Complete the entry of Jonas in cell B3 by pressing the Enter key or an arrow key.
The moment you complete the data entry in cell B3, the First Name column’s done: Excel enters all the other first names in column B at the same time (take that, Barry Allen)!
To complete this example name table by entering the middle and last names in columns C and D, respectively, you simply repeat these steps in those columns. You enter the first middle name, Austen, from cell A2 in cell C2 and then type W in cell C3. Complete the entry in cell C3 and the middle name entries in that column are done. Likewise, you enter the first last name, Harper, from cell A2 in cell D2 and then type S in cell D3. Complete the entry in cell D3, and the last name entries for column D are done, completing the entire data table.
Keep in mind that Flash Fill works perfectly at extracting parts of longer data entries in a column provided that all the entries follow the same pattern and use the same type of separators (spaces, commas, dashes, and the like). For example, in the figure, there’s an anomaly in the full name entries in cell A9 where only the middle initial with a period is entered instead of the full middle name. In this case, Flash Fill simply enters M in cell C9 and you have to manually edit its entry to add the necessary period.
Dummies Insider
Sign up for insider news on books, authors, discounts and more content created just for you.
|
{
"pile_set_name": "Pile-CC"
}
|
A-Rod*: Unapologetic
If you believe an ounce of the fabricated answers New York Yankee's Alex "A-Rod" Rodriguez fed to the press, then you are extremely gullible. Every answer he gave had transparency and left more unanswered questions.
I half expected him to come out and be responsible for his previous actions, but that wouldn't satisfy his ego. Instead, he ended up beating around the bush, leaving the public with more questions than answers.
Why can't he just solve his problems by admitting that he took steroids and that he was wrong to take advantage of illegal substances for his own gain?
Why couldn't he say he was sorry for misrepresenting Major League Baseball?
But Rodriguez can't show modesty or humility. So instead he claims he was 24 or 25, and that naivety and stupidity was the reason he took these illegal substances.
According to him, he was not given a chance to "grow up" properly. A-Rod blamed his missed opportunity to go to college on why he did not mature properly.
What kind of pathetic excuse is that?
If you look at his stats, you can easily see he was not 24 or 25 for that matter. He was 27, 28, and 29, respectively.
If you haven't matured enough to realize you're taking steroids illegally to better yourself at that age, then you surely have problems beyond the aforementioned.
He is a liar, a cheater, and uses the generic, typical excuse we as fans have come to expect from professional athletes. The "it wasn't my fault, I was unaware, blame someone else" method of covering up your guilt by excusing yourself from any knowing involvement in it.
Whoever votes for him ranging from the All-Star Game to the Hall of Fame is as gullible and pathetic as A-Rod himself.
|
{
"pile_set_name": "Pile-CC"
}
|
CETA will make Romania’s exports to Canada more competitive
Romania may increase exports to Canada after the free trade agreement between the European Union and Canada, also known as CETA, comes into force.
Romanian producers could be competitive on market segments such as footwear, agricultural products, shipbuilding, automotive, ceramics and textiles, according to a press release of the Canadian Embassy in Romania.
The European Parliament approved the Comprehensive Economic and Trade Agreement (CETA) between Canada and the EU on February 15. One of the agreement’s consequences is that all Romanians and Bulgarians will be able to travel to Canada without visas starting December 1, 2017, the same as the citizens from other EU member states. For those who have traveled to Canada in the last 10 years, the visas will be lifter starting May 1.
However, besides the visa elimination, CETA also brings economic benefits to the parts involved, including Romania, according to the Canadian Embassy. For example, the cars produced in Romania could become more competitive on the Canadian market.
“The custom duties in the car sector, which are currently 9.5%, will be eliminated after implementing CETA,” the Embassy says.
Canadian investments in EU countries, including Romania, will also be stimulated, according to the same document.
Romanians travel visa-free to Canada after EU Parliament approves CETA deal
[email protected]
|
{
"pile_set_name": "OpenWebText2"
}
|
<div class="content-container">
<br />
<h2 class="text-center">{$LANG.twofactorauth}</h2>
<form method="post" action="{$issuerurl}dologin.php" role="form">
<div id="loginWithBackupCode"{if !$backupcode} class="hidden"{/if}>
<div class="content-padded">
{include file="$template/includes/alert.tpl" type="warning" msg=$LANG.twofabackupcodelogin textcenter=true}
<input type="text" name="code" class="form-control">
<br />
<button type="submit" name="backupcode" value="1" class="btn btn-primary btn-block" id="btnLogin">
{lang key='login'} »
</button>
</div>
<div class="action-buttons">
<button type="button" class="btn btn-default" id="btnCancel" onclick="jQuery('#frmCancelLogin').submit()">
{lang key='cancel'}
</button>
</div>
</div>
<div id="loginWithSecondFactor"{if $backupcode} class="hidden"{/if}>
<div class="content-padded">
{if $incorrect}
{include file="$template/includes/alert.tpl" type="error" msg=$LANG.twofa2ndfactorincorrect textcenter=true}
{elseif $error}
{include file="$template/includes/alert.tpl" type="error" msg=$error textcenter=true}
{else}
{include file="$template/includes/alert.tpl" type="warning" msg=$LANG.twofa2ndfactorreq textcenter=true}
{/if}
{$challenge}
</div>
<div class="action-buttons">
<div class="pull-left text-left small">
{$LANG.twofacantaccess2ndfactor}<br />
<a href="#" onclick="jQuery('#loginWithSecondFactor').hide();jQuery('#loginWithBackupCode').removeClass('hidden').show();">{$LANG.twofaloginusingbackupcode}</a>
</div>
<button type="button" class="btn btn-default" id="btnCancel" onclick="jQuery('#frmCancelLogin').submit()">
{lang key='cancel'}
</button>
</div>
</div>
</form>
</div>
<form method="post" action="{$issuerurl}oauth/authorize.php" id="frmCancelLogin">
<input type="hidden" name="login_declined" value="yes"/>
<input type="hidden" name="request_hash" value="{$request_hash}"/>
</form>
|
{
"pile_set_name": "Github"
}
|
I recently got an unsolicited e-mail from the Goldwater Institute touting the part the group played in the final destruction of teachers' unions in Arizona. The wording of the press release was absolutely giddy, explaining how the cabal of white guys in expensive suits has stripped away all legal protections from people who have selflessly devoted themselves to the education of Arizona's young people for decades and suddenly find themselves at the mercy of petty administrators, dysfunctional school boards, and lawmakers who refuse to obey the law (301 monies, anyone?).
Before the guys at the Goldwater Institute move on to their ultimate goal—the complete elimination of the public school system (which helped make America the great country that it is)—they will pause for a photo op in which they will gather in front of a retirement home and kick the canes out from under old folks.
If it weren't for the fact that it would have a massive negative impact on an entire generation of our state's schoolchildren, I would almost enjoy watching the Goldwaterites and their Republican lackeys in the Legislature get their way in driving the final nail in the coffin that holds what used to be respect for teachers. Under their plan, Arizona would hand out, to parents, checks worth thousands of dollars per each school-aged kid in the house. Then, the state would say, "Now, go buy some education."
I was recently e-mailing back and forth with a good friend of mine who, due to some unknown trauma during his formative years, has turned out to be a mega-Republican. When I asked, quite sarcastically, what could possibly go wrong with such a system, he wrote "Will there be waste or some abuses? Probably..." That instantly becomes the frontrunner for the "Houston, We've Had a Problem" Memorial Understatement of the Year Award.
This thing will be the alt-fuels fiasco on steroids.
There are so many really awful things that could go wrong here, but let's start with the absolutely false notion that this plan will allow every parent to send his/her child to a good private school. We'll say that the state gives every parent $7,000. If I'm running a top-level private school (where the current tuition is already twice that amount), all I'm going to do is raise my tuition by the amount of the check. The parents who are already sending their kids to my school will gladly hand over the checks, knowing that it's not costing them a penny to do so and that the bump in tuition will help keep the Clampetts away.
For some reason, when you throw lots and lots of information at people, their eyes glaze over. So, I'll just focus on one small part of their plan and when I am done explaining it, if you don't think that it has disaster written all over it, there will officially be no hope for you.
One of the lesser-publicized parts of the plan is to give thousands of dollars per kid—no strings attached—to parents who claim to home-school their children. I've never been a fan of home schooling; I think it's, at the very least, an overreaction to the "evils" of society. But it's not against the law, so people can do it if they want to. All we can do is hope that their children don't turn out like that Spelling Bee kid from a few years back.
As it stands now, home-schooled kids are twilight, neither here nor there. It doesn't cost the state anything to educate them publicly and doesn't cost the parents anything to educate them privately. But now, the Republicans in the Legislature want to give your tax dollars to people who claim to be home-schooling their kids, whether they are doing so or not. This appears to be the Perfectly Bad Idea, one that everyone, regardless of political affiliation, can hate.
Let's start with those on the Left. They probably see a birther/gun nut/fanatic who wants to use that free money to build up an arsenal just in case a family of Mud People even thinks about moving into his trailer park. From the other fringe perspective, they see a crack ho/welfare mother who takes her brood of four kids out of public school and has them sit around and watch TV all day while she spends the 30 grand on personalized recreational activities.
But what about this scenario? You've got a family of five, good, decent people who have been unsuccessfully trying to swim upstream ever since the economy went in the toilet. Mom's job got eliminated in the downsizing and Dad can't pick up enough hours at work to make ends meet. They're near the breaking point when they hit on a plan born of desperation. They'll take their three kids out of school for a year, "school" them at home and use the $20,000 to pay down their credit cards and maybe buy that second car that the family needs. The state's not going to check where the money goes. Such a bureaucracy would be prohibitively expensive; plus, that would be an intrusion.
After a year, things are better, so they put their (uneducated) kids back in public school. But they're all a year behind and their test scores stink. Guess who gets the blame?
|
{
"pile_set_name": "OpenWebText2"
}
|
Q:
WCF OData patching an entity set
Is it possible in update multiple entites based on a filter query in a batch request?
As an example of what I'm trying to achieve, say I want to update all product categories from foo to bar in one request to an OData endpoint, is there something like this that would work:
--batch_36522ad7-fc75-4b56-8c71-56071383e77b
Content-Type: multipart/mixed; boundary=changeset_fa7b-4aa9-a01f
GET /api/products?$filter=category eq 'foo' HTTP/1.1
Accept:application/json
Content-ID: 1
--changeset_fa7b-4aa9-a01f
Content-Type: application/http
Content-Transfer-Encoding: binary
PATCH $1 HTTP/1.1
Accept: application/json
Content-Type: application/json;odata=verbose
{"category":"bar"}
--changeset_fa7b-4aa9-a01f--
--batch_36522ad7-fc75-4b56-8c71-56071383e77b--
A:
I'm afraid the answer is no. There's no support for that in the protocol. And even if you remove the filtering from the question, trying to update all entities in the entity set so that they will have a same new value, the answer is still no.
You could probably do this yourself. Like,
Get /service.svc/MyEntitySet
and for every entity you get back, send a PATCH to update it individually.
In addition, if this kind of operation is going to be done frequently, the service author could write a specific service operation or action to do this on the server side. For example, write something called "ClearAllNames", and a user could invoke that, and the server would go through every entity and clear its name field.
|
{
"pile_set_name": "StackExchange"
}
|
1. Field of the Invention
The present invention relates to an electrical connector having a positioning device.
2. Description of Related Art
High density electrical connectors are widely used in more and more electrical products due to rapid development of electronic industry. The connector generally comprises a plurality of terminals mounted on a housing and a positioning device to ensure that the plurality of terminals are accurately soldered to appointed pads or traces of a printed circuit board (PCB).
U.S. Pat. Nos. 5,692,912, 5,947,769, 5,957,705 respectively discloses an electrical connector with such a positioning device. The positioning device is a plate like and defines four rows of through holes arranged at predetermined intervals. A plurality of terminals has tails extending out of the housing and bending downwardly. The positioning device is assembled onto the insulating housing along a bottom-to-top direction and the terminal tails are inserted into corresponding through-holes of the positioning device. However, the through-holes of the positioning device are not easy to aim at the tails especially the tails are curving or slant. Hence, a new design which can solve the problem is required.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
// This is a manifest file that'll be compiled into application.js, which will include all the files
// listed below.
//
// Any JavaScript/Coffee file within this directory, lib/assets/javascripts, vendor/assets/javascripts,
// or vendor/assets/javascripts of plugins, if any, can be referenced here using a relative path.
//
// It's not advisable to add code directly here, but if you do, it'll appear at the bottom of the
// the compiled file.
//
// WARNING: THE FIRST BLANK LINE MARKS THE END OF WHAT'S TO BE PROCESSED, ANY BLANK LINE SHOULD
// GO AFTER THE REQUIRES BELOW.
//
//= require jquery
//= require jquery_ujs
//=
//= require jquery.ui.widget
//= require jquery.iframe-transport
//= require jquery.fileupload
//= require cloudinary/jquery.cloudinary
//= require attachinary
//=
//= require_tree .
jQuery(function() {
$('.attachinary-input').attachinary();
})
|
{
"pile_set_name": "Github"
}
|
Butterfield Overland Despatch
The Butterfield Overland Despatch was a mail and freight service operating across the Great Plains of America in the 1860s.
Due to increased travel to Colorado after the discovery of gold in 1858. David A. Butterfield, backed by New York capital, organized a joint-stock express and passenger carrying service between the Missouri River and Denver. In July 1865, the route via the Smoky Hill River was surveyed and soon thereafter coaches were in operation. Ben Holladay, acting for a competing organization, bought the Butterfield Overland Despatch in March 1866, when Eastern express companies threatened to take it over and establish a service between the Missouri River and Sacramento, California.
See also
Pond Creek Station, a preserved station of the company, built in 1865, near Wallace, Kansas
Butterfield Overland Mail (an unaffiliated company with similar name)
References
Root, Frank. (1901) The Overland Stage to California. Topeka, Kansas: W.Y. Morgan.
Category:American frontier
Category:History of Kansas
|
{
"pile_set_name": "Wikipedia (en)"
}
|
FRANKFURT/MUNICH (Reuters) - Infineon IFXGn.DE will supply parts to Tesla's upcoming mass-market Model 3 electric car, the German chipmaker said on Tuesday, after being beaten out as initial supplier by Franco-Italian rival STMicroelectronics STM.PA for the coveted deal.
FILE PHOTO: First production model of Tesla Model 3 out the assembly line in Fremont, California , U.S. is seen in this undated handout photo from Tesla Motors obtained by Reuters July 10, 2017. Tesla Motors/Handout via REUTERS
Infineon, which supplies chips to control the batteries and motors in eight out of 10 of the world’s top selling electric vehicles, confirmed its parts would go into the Model 3, which is now ramping up for volume production in 2018.
“We do not comment on the individual models, but we will also will be present in Model 3,” Chief Executive Reinhard Ploss told reporters after Infineon reported third-quarter operating profit just ahead of analysts’ consensus forecast.
The German company’s products are already in Tesla’s Model S sports car, the world’s top-selling electric vehicle, but the Model 3 - costing $35,000 in the United States - has the potential to significantly outsell the $60,000 Model S.
STMicroelectronics was first to market with a new class of highly energy efficient 1,200-volt silicon-carbide chips, which brokerage Liberum said helped it win the initial deal to supply power chips for the Model 3. But it predicted Infineon would become a second source of such chips as volumes ramp next year.
Last Friday, Tesla TSLA.O delivered the first 30 Model 3s to employee buyers, part of more than half a million advance reservations. (reut.rs/2tVmxlp)
While its presence in the Model 3 could be a boost, Ploss said most of the demand for Infineon’s electric vehicle parts would continue to come from Asia. “Our success is very much dependent on Asia,” he said.
The company reaffirmed its projection for 8-11 percent revenue growth for the year ending in September, and said its annual operating profit margin, excluding certain items, would be unchanged around 17 percent.
Infineon shares, which had slipped in early trading, picked up after Ploss signaled it had a renewed appetite for mergers. The stock was up 1.7 percent at 18.70 euros at 1315 GMT.
Ploss said the company "continues to explore various acquisition options" after its agreed $850 million deal to buy the Wolfspeed power unit of Cree CREE.O collapsed under U.S. government scrutiny earlier this year.
Ploss told reporters on a conference call that Infineon remained interested in deals in the American market, but would be sure to steer clear of technology the U.S. government considers vital to national security.
Revenue for its fiscal third quarter ended in June was roughly in line with expectations, up 12.2 percent on the same period a year ago to 1.83 billion euros.
Quarterly operating income, excluding certain items, rose 33 percent to 338 million euros ($400 million), compared with analysts’ average forecast of 323 million in a Reuters poll.
|
{
"pile_set_name": "OpenWebText2"
}
|
Products featured here are selected by our partners at StackCommerce.If you buy something through links on our site, Mashable may earn an affiliate commission.
They say good things come in small packages, but the concept doesn't seem to always apply in tech. Computers with impressive processing power tend to be bigger machines, big-screen TVs are a dime-a-dozen, and even bigger smartphones are making a comeback.
One of the only technological marvels that defies this trend is the VoCore2 Mini Linux Computer, an open-source Linux computer that packs a punch despite being minuscule in size. Right now, you can get your hands on one for $42.99.
Functioning as both a computer and tinkerer's toy, this tiny machine has endless use cases. You can opt to use it as a VPN gateway to secure your network, a private cloud to store your essential data, a personal assistant (just link it to Siri or Echo), or an airplay music streaming station. You can even attach a USB webcam to transform it into a home security camera. If you decide to enhance its functions, you simply throw in some C, Java, Python, Ruby, or JavaScript to get going. Usually $50, the computer is only $42.99 here in the Mashable Shop.
Now if you want to easily visualize your programming, you can grab the VoCore2 Mini Linux computer bundle, which includes the coin-sized computer and a screen that you can set up anywhere. The bundle normally retails for $79.80, but you can get it for only $69 here.
|
{
"pile_set_name": "OpenWebText2"
}
|
function Point(x, y) {
this.x = x;
this.y = y;
}
["x", "y"].forEach(function(p) {
Point.prototype["get_" + p] = function() {
return this[p];
};
Point.prototype["set_" + p] = function(v) {
if (typeof v !== 'number')
throw Error('number expected');
this[p] = v;
};
});
|
{
"pile_set_name": "Github"
}
|
The governor of Kazakhstan’s main financial hub has stated that while cryptocurrencies must be regulated, crypto and blockchain innovation will be supported, local news outlet Kazinform reports today, June 14.
Speaking at the “Blockchain Conference Astana,” the Astana International Financial Center’s (AIFC) governor Kairat Kaliyev said specifically that the AIFC is paying special attention blockchain and cryptocurrencies:
“The problem of regulation of cryptocurrencies is being discussed very intensively. The AIFC has a solid position on this issue – it is crucial to regulate the circulation of cryptocurrencies.”
According to Kazinform, the AIFC’s Regulatory Authority plans to approve rules for crypto regulations this summer. Kaliyev notes that the center is working with international companies to develop appropriate regulation, also adding that the center provides financial assistance to support fintech innovation and arranges courses for blockchain and programming.
The AIFC is a free economic zone created with the intent to attract international trade and increase Kazakhstan’s role in global markets. The AIFC website refers to fintech development as “one of the main strategic directions activities [sic]” of the financial hub.
At the end of May, Kazakh president Nursultan Nazarbayev called for international cooperation for cryptocurrency regulation. The Kazakhstani public has also shown increased interest in cryptocurrency. A study released by search engine Yandex in March showed that Kazakhstanis were searching for crypto-related terms more this year than in 2017.
|
{
"pile_set_name": "OpenWebText2"
}
|
// Copyright Aleksey Gurtovoy 2000-2004
//
// Distributed under the Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
//
// Preprocessed version of "boost/mpl/aux_/reverse_fold_impl.hpp" header
// -- DO NOT modify by hand!
namespace boost { namespace mpl { namespace aux {
/// forward declaration
template<
long N
, typename First
, typename Last
, typename State
, typename BackwardOp
, typename ForwardOp
>
struct reverse_fold_impl;
template< long N >
struct reverse_fold_chunk;
template<> struct reverse_fold_chunk<0>
{
template<
typename First
, typename Last
, typename State
, typename BackwardOp
, typename ForwardOp
>
struct result_
{
typedef First iter0;
typedef State fwd_state0;
typedef fwd_state0 bkwd_state0;
typedef bkwd_state0 state;
typedef iter0 iterator;
};
};
template<> struct reverse_fold_chunk<1>
{
template<
typename First
, typename Last
, typename State
, typename BackwardOp
, typename ForwardOp
>
struct result_
{
typedef First iter0;
typedef State fwd_state0;
typedef typename apply2< ForwardOp, fwd_state0, typename deref<iter0>::type >::type fwd_state1;
typedef typename mpl::next<iter0>::type iter1;
typedef fwd_state1 bkwd_state1;
typedef typename apply2< BackwardOp, bkwd_state1, typename deref<iter0>::type >::type bkwd_state0;
typedef bkwd_state0 state;
typedef iter1 iterator;
};
};
template<> struct reverse_fold_chunk<2>
{
template<
typename First
, typename Last
, typename State
, typename BackwardOp
, typename ForwardOp
>
struct result_
{
typedef First iter0;
typedef State fwd_state0;
typedef typename apply2< ForwardOp, fwd_state0, typename deref<iter0>::type >::type fwd_state1;
typedef typename mpl::next<iter0>::type iter1;
typedef typename apply2< ForwardOp, fwd_state1, typename deref<iter1>::type >::type fwd_state2;
typedef typename mpl::next<iter1>::type iter2;
typedef fwd_state2 bkwd_state2;
typedef typename apply2< BackwardOp, bkwd_state2, typename deref<iter1>::type >::type bkwd_state1;
typedef typename apply2< BackwardOp, bkwd_state1, typename deref<iter0>::type >::type bkwd_state0;
typedef bkwd_state0 state;
typedef iter2 iterator;
};
};
template<> struct reverse_fold_chunk<3>
{
template<
typename First
, typename Last
, typename State
, typename BackwardOp
, typename ForwardOp
>
struct result_
{
typedef First iter0;
typedef State fwd_state0;
typedef typename apply2< ForwardOp, fwd_state0, typename deref<iter0>::type >::type fwd_state1;
typedef typename mpl::next<iter0>::type iter1;
typedef typename apply2< ForwardOp, fwd_state1, typename deref<iter1>::type >::type fwd_state2;
typedef typename mpl::next<iter1>::type iter2;
typedef typename apply2< ForwardOp, fwd_state2, typename deref<iter2>::type >::type fwd_state3;
typedef typename mpl::next<iter2>::type iter3;
typedef fwd_state3 bkwd_state3;
typedef typename apply2< BackwardOp, bkwd_state3, typename deref<iter2>::type >::type bkwd_state2;
typedef typename apply2< BackwardOp, bkwd_state2, typename deref<iter1>::type >::type bkwd_state1;
typedef typename apply2< BackwardOp, bkwd_state1, typename deref<iter0>::type >::type bkwd_state0;
typedef bkwd_state0 state;
typedef iter3 iterator;
};
};
template<> struct reverse_fold_chunk<4>
{
template<
typename First
, typename Last
, typename State
, typename BackwardOp
, typename ForwardOp
>
struct result_
{
typedef First iter0;
typedef State fwd_state0;
typedef typename apply2< ForwardOp, fwd_state0, typename deref<iter0>::type >::type fwd_state1;
typedef typename mpl::next<iter0>::type iter1;
typedef typename apply2< ForwardOp, fwd_state1, typename deref<iter1>::type >::type fwd_state2;
typedef typename mpl::next<iter1>::type iter2;
typedef typename apply2< ForwardOp, fwd_state2, typename deref<iter2>::type >::type fwd_state3;
typedef typename mpl::next<iter2>::type iter3;
typedef typename apply2< ForwardOp, fwd_state3, typename deref<iter3>::type >::type fwd_state4;
typedef typename mpl::next<iter3>::type iter4;
typedef fwd_state4 bkwd_state4;
typedef typename apply2< BackwardOp, bkwd_state4, typename deref<iter3>::type >::type bkwd_state3;
typedef typename apply2< BackwardOp, bkwd_state3, typename deref<iter2>::type >::type bkwd_state2;
typedef typename apply2< BackwardOp, bkwd_state2, typename deref<iter1>::type >::type bkwd_state1;
typedef typename apply2< BackwardOp, bkwd_state1, typename deref<iter0>::type >::type bkwd_state0;
typedef bkwd_state0 state;
typedef iter4 iterator;
};
};
template< long N >
struct reverse_fold_chunk
{
template<
typename First
, typename Last
, typename State
, typename BackwardOp
, typename ForwardOp
>
struct result_
{
typedef First iter0;
typedef State fwd_state0;
typedef typename apply2< ForwardOp, fwd_state0, typename deref<iter0>::type >::type fwd_state1;
typedef typename mpl::next<iter0>::type iter1;
typedef typename apply2< ForwardOp, fwd_state1, typename deref<iter1>::type >::type fwd_state2;
typedef typename mpl::next<iter1>::type iter2;
typedef typename apply2< ForwardOp, fwd_state2, typename deref<iter2>::type >::type fwd_state3;
typedef typename mpl::next<iter2>::type iter3;
typedef typename apply2< ForwardOp, fwd_state3, typename deref<iter3>::type >::type fwd_state4;
typedef typename mpl::next<iter3>::type iter4;
typedef reverse_fold_impl<
( (N - 4) < 0 ? 0 : N - 4 )
, iter4
, Last
, fwd_state4
, BackwardOp
, ForwardOp
> nested_chunk;
typedef typename nested_chunk::state bkwd_state4;
typedef typename apply2< BackwardOp, bkwd_state4, typename deref<iter3>::type >::type bkwd_state3;
typedef typename apply2< BackwardOp, bkwd_state3, typename deref<iter2>::type >::type bkwd_state2;
typedef typename apply2< BackwardOp, bkwd_state2, typename deref<iter1>::type >::type bkwd_state1;
typedef typename apply2< BackwardOp, bkwd_state1, typename deref<iter0>::type >::type bkwd_state0;
typedef bkwd_state0 state;
typedef typename nested_chunk::iterator iterator;
};
};
template<
typename First
, typename Last
, typename State
, typename BackwardOp
, typename ForwardOp
>
struct reverse_fold_step;
template<
typename Last
, typename State
>
struct reverse_fold_null_step
{
typedef Last iterator;
typedef State state;
};
template<>
struct reverse_fold_chunk< -1 >
{
template<
typename First
, typename Last
, typename State
, typename BackwardOp
, typename ForwardOp
>
struct result_
{
typedef typename if_<
typename is_same< First,Last >::type
, reverse_fold_null_step< Last,State >
, reverse_fold_step< First,Last,State,BackwardOp,ForwardOp >
>::type res_;
typedef typename res_::state state;
typedef typename res_::iterator iterator;
};
};
template<
typename First
, typename Last
, typename State
, typename BackwardOp
, typename ForwardOp
>
struct reverse_fold_step
{
typedef reverse_fold_chunk< -1 >::template result_<
typename mpl::next<First>::type
, Last
, typename apply2<ForwardOp,State, typename deref<First>::type>::type
, BackwardOp
, ForwardOp
> nested_step;
typedef typename apply2<
BackwardOp
, typename nested_step::state
, typename deref<First>::type
>::type state;
typedef typename nested_step::iterator iterator;
};
template<
long N
, typename First
, typename Last
, typename State
, typename BackwardOp
, typename ForwardOp
>
struct reverse_fold_impl
: reverse_fold_chunk<N>
::template result_< First,Last,State,BackwardOp,ForwardOp >
{
};
}}}
|
{
"pile_set_name": "Github"
}
|
This invention relates generally to packaging and conveying systems, and more particularly, to a system which is capable of reliably and efficiently separating handles and of subsequently inserting individual handles within the packages in order to produce a package having a handle fixedly secured thereto.
In the field of packaging articles and more specifically in the class of packages encompassing beverage carriers, it is of utmost importance to provide handles as part of the package. For example, beverage carriers which contain a plurality of bottles or cans therein and are provided with handles can be easily carried or transported from one location to the other in a safe and efficient manner.
A conventional procedure for producing such packages generally involves a three-step operation; (1) the manufacture of the package itself, (2) the filling of the package, with, for example, beverage bottles or cans, and (3) the attachment to the package of a handle.
Generally, the handles are either manually attached to the package or automatically secured to the package in an automated fashion by specifically designed machines. Unfortunately, manual attachment of the handle to the package is not only time consuming but as a consequence of the costs involved in maintaining a work staff, also extremely uneconomical. Mechanization of the procedure also leaves much to be desired since complex machinery is generally involved in not only separating the individual handles to be utilized with the completed packages but also in successfully attaching the handles to the packages. Thus, the added convenience of being able to easily carry a package such as a beverage carrier by means of handle may increase the overall cost of the item packaged to a point where providing handled packages, in most instances, is considered to be cost ineffective.
A need therefore arises in the packaging industry to produce a reliable and economical mechanized system for not only separating handles from a stack of handles but also to provide a capability within the system which permits the rapid securing of those handles to packages.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
A high-impedance attenuator for measurement of high-voltage nanosecond-range pulses.
A novel kind of high-impedance cable attenuator for measurement of high-voltage ns-range pulses is investigated in this paper. The input and output ports of the proposed attenuator were both high-impedance ports, and good pulse response characteristics of the proposed attenuator were obtained with pulse response time less than 1 ns. According to the requirement of measurement, two attenuators with lengths at 14 m and 0.7 m were developed with response time of 1 ns and 20 ns, and the attenuation coefficient of 96 and 33.5, respectively. The attenuator with the length of 14 m was used as a secondary-stage attenuator of a capacitive divider to measure the high-voltage pulses at several hundred ns range. The waveform was improved by the proposed attenuator in contrast to the result only measured by the same capacitive divider and a long cable line directly. The 0.7 m attenuator was also used as a secondary-stage attenuator of a standard resistant divider for an accurate measurement of high-voltage pulses at 100 ns range. The proposed cable attenuator can be used to substitute the traditional secondary-stage attenuators for the measurement of high-voltage pulses.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
---
abstract: 'The discovery of high redshift quasars represents a challenge to the origin of supermassive black holes. Here, two evolutionary scenarios are considered. The first one concerns massive black holes in the local universe, which in a large majority have been formed by the growth of seeds as their host galaxies are assembled in accordance with the hierarchical picture. In the second scenario, seeds with masses around 100-150 $M_\odot$ grow by accretion of gas forming a non-steady massive disk, whose existence is supported by the detection of huge amounts of gas and dust in high-z quasars. These models of non-steady self-gravitating disks explain quite well the observed ”Luminosity-Mass" relation of quasars at high-z, indicating also that these objects do not radiate at the so-called Eddington limit.'
title: |
[**Supermassive Black Holes in the Early Universe**]{}\
J.A. de Freitas Pacheco\
Université de la Côte d’Azur\
Observatoire de la Côte d’Azur - Laboratoire Lagrange\
06304 Nice Cedex - France\
---
Introduction
============
The cosmological nature of quasars (QSOs) was established in the early sixties (Schmidt 1963). An immediate consequence of the implied large distances for these objects was the realization that QSOs were among the most powerful energy sources in the universe. Their luminosities are typically around $10^{46} erg.s^{-1}$ but the emission of some QSO’s may exceed by one or two orders of magnitude that value. Edwin Salpeter was one of the first to propose that a supermassive black hole (SMBH) in a state of accretion could provide the necessary energy to explain the luminosities of QSOs (Salpeter 1964). If presently a large majority of the scientific community accepts that accreting SMBHs are the engines powering QSOs, a series of questions still remain to be answered. For instance, how these SMBHs are formed? If they grow by accretion what are the seeds and from where they come from? What is the gas accretion geometry: spherically or that of an inspiraling disk? In last case, what are the viscous mechanisms responsible for the transfer of angular momentum?
In the local universe the presence of SMBHs in the center either of elliptical galaxies or bulges of spiral galaxies seems to be a well-established fact (Kormendy & Richstone 1995; Richstone et al. 1998; Kormendy & Gebhardt 2001). The black hole mass $M$ is well correlated either with the stellar mass or the luminosity of the host bulge (Kormendy & Richstone 1995; Magorrian et al. 1998; Marconi & Hunt 2003; Haring & Rix 2004; Graham 2007) but, in particular, a tight correlation exists between the SMBH mass and the central projected stellar velocity dispersion $\sigma$ (Ferrarese & Merrit 2000; Gebhardt et al. 2000; Merrit & Ferrarese 2001; Tremaine et al. 2002). The mechanism (or mechanisms) responsible for establishing the M-$\sigma$ relation is (are) not yet well determined but several scenarios have been put forward in the past years to explain the origin of such a relation. Self-regulated growth of black holes by feedback effects produced either by outflows or UV-radiation from QSO’s, which affect also the star formation activity is a possible mechanism able to reproduce the M-$\sigma$ relation (Silk & Rees 1998; Sazonov et al. 2005). A relation between these physical quantities can also be obtained from the picture developed by Burkert & Silk (2001) in which black holes grow at the expense of a viscous accretion disk and whose gas reservoir beyond the BH influence radius feeds also the formation of stars.
These investigations seem to point to a well-defined road leading to the formation of SMBHs: the growth of ”seeds” by accretion inside the host galaxy. This picture is consistent with the fact that the present BH mass density agrees with the accreted (baryonic) mass density derived from the bolometric luminosity function of quasars (Soltan 1982; Small & Blandford 1992; Hopkins, Richards & Hernquist 2007) and with a negligible amount of accreted dark matter (Peirani & de Freitas Pacheco 2008). Seeds could be intermediate mass ($10^3-10^4)~M_\odot$ black holes formed during the collapse of primordial gas clouds (Haehnelt & Rees 1993; Eisenstein & Loeb 1995; Koushiappas, Bullock & Dekel 2004) or during the core collapse of relativistic star clusters formed in star-bursts, which may have occurred in the early evolution of galaxies (Shapiro 2004). Here, as it will be discussed later, seeds are assumed to be black holes with masses around 100-500 $M_\odot$ originated from the first generation of stars, supposed to be quite massive due to the absence of metals, which are the main contributors to the cooling of the gas.
The different correlations between the black hole mass and the dynamic or the photometric properties of the host galaxy suggest a gradual growth of the seed as the host galaxy itself is assembled. However this scenario seems to be inconsistent with the fact that up to now more than 40 bright QSO’s have been discovered at high redshift (Wu et al. 2015). The three QSOs having the highest redshift are J1061+3922 at z = 6.61, J1120+0641 at z = 7.08 and J1342+0928 at z = 7.54. The later corresponds to an age of the universe of only 0.69 Gyr. Since most of these high redshift QSOs are associated with SMBHs having masses around $10^8-10^9 ~ M_\odot$, their growth was probably not gradual but rather fast in other they could shine so early in the history of the universe. Thus, a possible issue is to admit the existence of two evolutionary paths leading to the formation of SMBHs: one in which seeds grow intermittently as their host galaxies are assembled and another in which seeds grow very fast in a timescale of less than 1 Gyr. These two possibilities will be discussed in the next sections of this article.
Spherical Accretion and the Eddington limit
===========================================
Many authors still consider in their investigations spherical accretion processes in which the mass inflow rate is controlled by the Eddington luminosity. In this case, it seems judicious to recall the physical assumptions that permit the derivation of the Eddington limit. The Euler equation describing a spherically symmetric inflow under the influence of gravitation and radiation pressure is $$\label{euler}
V\frac{dV}{dr}+\frac{1}{\rho}\left[\frac{d(P+P_r)}{dr}\right]+\frac{GM}{r^2}=0$$ In the above equation $V$ is the radial flow velocity, $P$ and $P_r$ are respectively the gas and the radiation pressure, $\rho$ is the gas density, $G$ is the gravitational constant and $M$ is the mass of the central object. The radial gradient due to the radiation pressure is given by $$\frac{dP_r}{dr}=-\frac{1}{c}\int^{\infty}_{0}\kappa_{\nu}\phi_{\nu}d\nu = -\frac{\kappa}{c}\phi$$ where $\kappa$ is a suitable frequency average of the total absorption coefficient (including scattering) and $\phi$ is the total radiative flux. If the accreting gas envelope is highly ionized, the absorption of photons is essentially due to the Thomson scattering and, in this case $$\kappa = \frac{\sigma_T}{\mu m_H}\rho$$ where $\sigma_T = 6.65\times 10^{-25}~cm^2$ is the Thomson cross-section, $\mu$ is the mean molecular weight and $m_H$ is the proton mass. If the medium is optically thin and the radiation comes essentially from the deep inside region of the envelope, then the radiative flux in the outer regions is simply $$\phi = \frac{L}{4\pi r^2}$$ Combine eqs. (2), (3), (4) and replace into the Euler equation to obtain $$V\frac{dV}{dr}+\frac{1}{\rho}\frac{dP}{dr}= -\frac{GM}{r^2}+\frac{\sigma_T L}{4\pi\mu m_H c r^2}$$ Define now the Eddington luminosity as $$\label{eddington}
L_E = \frac{4\pi GM\mu m_H c}{\sigma_T} = 1.76 \times 10^{38}\left(\frac{M}{M_\odot}\right)~erg.s^{-1}$$ In this case, eq. (5) can be rewritten as $$\label{euler2}
V\frac{dV}{dr}+\frac{1}{\rho}\frac{dP}{dr}= -\frac{GM(1-\Gamma)}{r^2}$$ where $\Gamma = L/L_E$.
Assume that the gas equation of state is given by $P = K\rho^{\gamma}$ and define also the adiabatic sound velocity as $a^2 = \gamma(P/\rho)$. Under these conditions, using the mass conservation equation to express the mass density gradient, after some algebra, eq.\[euler2\] can be recast as $$\label{euler3}
V\left(1-\frac{a^2}{V^2}\right)\frac{dV}{dr} = \frac{2a^2}{r}-\frac{GM(1-\Gamma)}{r^2}$$
The ”critical” point of the flow in the usual mathematical sense corresponds to the point where both sides of eq. \[euler3\] vanish. Hence, in order to have the continuity of the flow through the critical point, two conditions must be simultaneously satisfied, namely $$\label{criticalvelocity}
V_* = a_*$$ and $$\label{criticalradius}
r_* = \frac{GM(1-\Gamma)}{2a_*^2}$$ for the critical velocity and the critical radius respectively. These relations imply that the critical and the sonic points of the flow coincide (this is not always the case) and that the luminosity radiated from inside must be [**[smaller]{}**]{} than the Eddington value in order that the critical radius be real. Note that once the critical point is surpassed, the left side of eq. \[euler3\] is negative, requiring imperatively that the right side be also negative or, equivalently, that $\Gamma < 1$. In other words, the spherical accretion of an optically thin envelope requires sub-Eddington conditions otherwise the inflow cannot be established. As we will see later, this requirement is weakened when the inflow geometry is modified as, for instance, in the case of an accretion disk.
As it was mentioned previously, many authors assume that the central black hole accretes mass with the envelope radiating near the Eddington limit. Since the Eddington luminosity is proportional to the mass of the black hole (see eq. 6), it results an exponential growth with a timescale $$\tau_E = \frac{\eta}{(1-\eta)}\frac{c\sigma_T}{4\pi\mu m_H G} = 3.22\times 10^8\frac{\eta}{(1-\eta)}~yr$$ where $\eta$ is the accretion efficiency. Such a short timescale in often considered as an argument to explain the presence of SMBHs at high-z. However, as we have seen above, the Eddington limit is derived under conditions in which the envelope is optically thin and the opacity is due only to the Thomson scattering. The optical depth of the envelope is given by $$\tau(r,\infty) = \int^{\infty}_r \frac{\sigma_T}{\mu m_H}\rho(r')dr' = 7.8\times 10^{-6}\left(\frac{M}{M_\odot}\right)n_{\infty}$$ and the gravitational radius was taken as a lower limit in order to obtain the numerical value on the left side of the equation. For an optically thin envelope the condition $\tau < 1$ must be satisfied, imposing an upper bound to the black hole mass, namely $$\frac{M}{M_\odot} < \frac{1.3\times 10^5}{n_{\infty}}$$ where $n_{\infty}$ is the gas particle density far from the influence radius of the black hole. Hence, if accreting spherically, SMBHs at high-z with masses around $10^8-10^9 ~ M_\odot$ will necessarily have an optically thick envelope and a different inflow regime. An optically thick envelope reduces the distance to the critical point and reduces also the accretion rate with respect to the optically thin case. In particular, when the radiation field is quite important, a second critical point may exist in the flow besides the hydrodynamical one, according to Nobili et al. (1991).
Accretion flows affected by radiation effects have been investigated by many authors in the past years (Maraschi et al. 1974; Flammang 1982; Milosavljevic et al. 2009). The radiation from the accreting envelope is essentially due to the free-free emission. For the optically thin case, the resulting luminosity is proportional to the square of the accretion rate and inversely proportional to the BH mass, i.e., $L \sim {\dot M^2}/M$. The flow becomes nearly self-regulated when the optical depth of the infalling matter is greater than unity and under these conditions, the luminosity approaches the Eddington limit (Milosavljevic et al. 2009). However, some authors claim that super-Eddington luminosities are possible if the black-hole is embedded in a very dense gas that decreases the importance of radiation pressure effects (Pacucci et al. 2015). Super-Eddington accretion rates were also found in some radiation-hydrodynamics simulations but based on one-dimensional geometry and particular conditions of the ambient gas (Inayoshi et al. 2016). However, it is not certain whether such extreme conditions are sustainable considering the violent environments of the first galaxies where the medium is affected by the star formation activity and supernovae.
If the BH is moving with respect to the gas, the situation is rather different. After passing the BH, a conically shaped shock is produced in the flow in which the gas loses the momentum component perpendicular to the shock front. After compression in the shock, gas particles within a certain impact parameter will fall into the BH. One determinant factor describing the subsequent motion of the gas is the angular momentum. If the infalling gas has an specific angular momentum $J$ that exceeds $2r_gc$, where $r_g = 2GM/c^2$ is the gravitational radius, the centrifugal forces will become important before the gas reaches the horizon. In this case, the gas will be thrown into near circular orbits and only after viscous stresses have transported away the excess of angular momentum will the gas cross the BH horizon (Shvartsman 1971). In fact, the formation or not of a disk requires two conditions: the disk radius must be larger than the last stable circular orbit (equivalent to the condition $J > 2r_gc$) and must be smaller than the typical dimension of the shock cone, e.g., $l_s \approx 2r_g(c/u_{\infty})^2$, where $u_{\infty}$ is the BH velocity with respect to the gas. If the gas is highly turbulent, the velocity of eddies having a scale $k$ is given roughly by $$V_t \sim V_0\left(\frac{k}{k_0}\right)^q$$ In the case of a Kolmogorov spectrum, $q = 1/3$. However, it is more probable that the turbulent energy be dissipated mainly through shock waves and, in this case, the spectrum is steeper with $q \sim 1$ (Kaplan 1954), a situation which will be assumed here. For our rough estimates, we adopt typical values for the turbulence observed in our Galaxy, e.g., $V_0 \approx 10~ km.s^{-1}$, $k_0 \approx 10~ pc$ (Kaplan & Pikel’ner 1970). The specific angular momentum associated with eddies is $J \sim V_tk$ and the specific angular momentum of the accreted gas corresponds to eddies of the order of twice the scale of the capture impact parameter. Thus, the first condition for disk formation requires $$M > 720\left(\frac{u_{\infty}}{50~km.s^{-1}}\right)^4 ~M_\odot$$ whereas the second requires $$M < 3.5\times 10^6\left(\frac{u_{\infty}}{50~km.s^{-1}}\right)^3~ M_\odot$$ The conclusion of this brief analysis is that a disk can be formed after the shock front only if the BH mass is in the range $10^3$ up to $10^6$ $M_\odot$. Notice that these values depend strongly on the black hole velocity. Taking into account the restricted range of BH masses that allows the presence of a disk, the necessity of an adequate balance between the mass flow across the shock front and the flow throughout the disk that is controlled by viscous forces, as well as the variety of instabilities present in the flow after the shock. Under these conditions, the formation of a disk under these circumstances is rather uncertain. In this case, if a disk is not formed inside the accretion cone, the radiated luminosity is only a small fraction of the accretion power.
Intermittent Growth of Black Holes
==================================
As we have seen previously, the correlations between the black hole mass and the photometric or dynamical properties of the host galaxy suggest that the former grows as the latter is assembled. The different physical processes involved on the growth of black holes inside galaxies require a numerical treatment or, in other words, an appeal to cosmological simulations. In fact, there are different reasons justifying such an approach: first, because a significant volume of the universe can be probed; second, because the dynamics of dark matter and the hydrodynamics of the gas, including physical processes like heating, cooling, ionization of different elements can be taken into account self consistently. Moreover, it is possible to study environmental effects on the galaxies themselves as well as the chemical evolution of the interstellar and of the intergalactic gas, including the effects of supernovae and turbulent diffusion of heavy metals. The simulations permit to test star formation recipes, models for the growth of seeds and to investigate the influence of black holes on the environment when they are in an active phase.
Cosmological Simulations
------------------------
The results described in this section were all derived from simulations performed at the Observatoire de la Côte d’Azur (Nice) in these past years. Details of the code and some results can be found, for instance, in the papers by Filloux et al. (2010, 2011) or Durier & de Freitas Pacheco (2011). For the sake of completeness, a short summary of the main features of the code will be presented here.
All the simulations were performed in the context of the $\Lambda$CDM cosmology, using the parallel TreePM-SPH code GADGET-2 in a formulation, despite the use of fully adaptive smoothed particle hydrodynamics (SPH), that conserves energy and entropy (Springel 2005). Initial conditions are established according to the algorithm COSMICS (Bertschinger 1995) and the evolution of the structures are followed in the redshift range $60 \geq z \geq 0$. Ionization equilibrium taking into account collisional and radiative processes were included following Katz, Weinberg & Hernquist (1996), as well as the contribution of the ionizing radiation background. The contribution of cooling processes such as collisional excitation of HI, HeI and HeII levels, radiative recombination, free-free emission and inverse Compton were also included, using the results by Sutherland & Dopita (1993). An interpolation procedure was adopted to take into account the enhancement of the cooling as the medium is enriched by metals. The cooling functions computed by those authors are adequate for highly ionized gases and for $T \geq 10^4 K$. At high redshifts ($100 > z > 20$) and inside neutral gas clouds a residual electron fraction of about $n_e/n_H \approx 0.005$ is present (Peebles 1993) which is enough to act as a catalyst in chemical reactions producing molecular hydrogen. $H_2$ cooling due to excitation of molecular rotational levels was introduced by using the results of Galli & Palla (1998). After the appearance of the first stars, the gas is enriched by trace elements like O, C, Si, Fe, responsible for a supplementary cooling mechanism. The UV background with $h\nu < 13.6~eV$ is unable to ionize hydrogen (and oxygen) in neutral gas clouds but it can ionize carbon, silicon and iron, which are mostly singly ionized under these conditions. These ions have fine structure levels that can be excited by collisions either with electrons or atomic hydrogen, constituting an important cooling mechanism at low temperatures, which was included in the code. The UV radiation from young massive stars able to ionize the nearby gas was computed for different ionization species of hydrogen and helium, representing not only an additional (local) source of ionization but also of heating.
Feedback processes like the return of mass to the interstellar medium, supernova heating and chemical enrichment were all taken into account. The return of mass to the interstellar medium was computed by assuming that the initial mass function (IMF) of stars with metallicities $[Z/H] < −2.0$ is of the form $\zeta(m) \propto m^{-2}$ while stars more metal-rich are formed with a Salpeter IMF, e.g., $\zeta(m) \propto m^{-2.35}$. Stars in the mass range $40-80~ M_\odot$ leave a $10~ M_\odot$ black hole as a remnant, whereas a $1.4~ M_\odot$ neutron star is left if progenitors are in the mass range $9-40 M_\odot$ or a white dwarf remnant otherwise. The mass lost by the ”stellar particle" is redistributed according to the SPH kernel among the gas particle neighbors and velocities are adjusted in order to conserve the total momentum in the cell. Moreover, the removed gas (except that ejected by supernovae, as discussed below) keeps its original chemical composition, contributing to the chemical budget of the medium. In fact, AGB stars, planetary nebulae and WR stars enrich the medium in He, C, N, but these contributions are not taken into account in the present version of the code.
Supernova explosions are supposed to inject both thermal and mechanical energy in the interstellar medium. Past investigations have shown that, when heated, the nearby gas cools quite rapidly and the injected thermal energy is simply radiated away. However, when energy is injected in the form of kinetic energy, the star formation process is affected (Navarro & White 1993). In the present simulations, supernova explosions were supposed to inject essentially mechanical energy into the interstellar medium through a ”piston” mechanism, represented by the momentum carried by the ejected stellar envelope. The distance $D_p$ covered by such a “piston”, ejected with a typical velocity $V_{ej} \sim 3000 km.s^{-1}$ in a time interval $\Delta t$ is $V_{ej}\Delta t$. Under this assumption a ”stellar cell” is defined including all gas particles inside a spherical volume of radius $D_p$ that will be affected by the ”piston”. The released energy is redistributed non-uniformly among these particles. In this process, it is expected that the closest gas particles receive more energy than the farthest ones. This was achieved by assigning to each gas particle $j$ a distance dependent weight $w_j(r) = A/r^n_{ij}$ , where $r_{ij}$ is the distance between the ”i-stellar" particle (host of the SN explosion) to the $j-gas$ particle inside the cell. The normalization constant is defined by $A = 1/\sum_j r^n_{ij}$ . Different values of the exponent (n = 2, 4) were tested. Supernovae do not only contribute to the energy budget of the interstellar medium but also inject heavy metals, leading to a progressive chemical enrichment of galaxies as well as of their nearby environment. Such a progressive enrichment was treated by an adequate algorithm able to simulate the turbulent diffusion of metals trough the medium.
In the code, BHs are represented by collisionless particles that can grow in mass, according to specific rules that mimic accretion or merging with other BHs. Possible recoils due to a merging event and to the consequent emission of gravitational waves were neglected. BHs are assumed to merge if they come within a distance comparable to or less than the mean inter-particle separation. Seeds are supposed to have been formed from the first (very massive) stars and are supposed to have a mass of about 100 $M_\odot$. An auxiliary algorithm finds potential minima where seeds are inserted in the redshift interval $15-20$. A fraction of the energy released during the accretion process is re-injected into the medium along two opposite “jets” aligned along the rotation axis of the disk, modeled by cones with an aperture angle of $20^o$ and extending up to distances of about 300 kpc. The adopted expression for the power of the jets is essentially that given by the simulations of Koide et al. (2002).
Properties of simulated SMBHs
-----------------------------
One of the main aspects of the growth of seeds by gas accretion concerns the fact that masses do not increase continuously. In the hierarchical model, galaxies are assembled in the filaments of the cosmic web or in the junction of filaments where clusters are formed. In filaments galaxies may capture fresh gas that will feed their central black holes. Gas may also come from merging events. However, from time to time the gas in the central region is exhausted and the process of growth stops until a new episode of gas capture is able to replenish the vicinity of the black hole. In fact, the amount of gas in the central regions of the host galaxy is controlled by the capture processes and internal processes like star formation and feedback from supernovae and the black hole itself. In figure 1 the individual growth of some simulated black holes is shown. It is possible to verify that there are periods during which the black hole mass remains constant (case of a ”dormant" black hole) and periods of gas accretion during which the black hole mass increases. In such a phase, the galaxy has an active nucleus, being associated to an AGN or to a QSO. Notice that only at $z \leq 4$ some black holes having masses greater than $10^7~M_\odot$ have appeared.
During the activity phase, the associated accretion disk is quite luminous and the luminosity depends essentially on the accretion rate. For a given redshift it is possible to compute the total luminosity due to all active black holes and, consequently, to estimate the comoving luminosity density. Such a luminosity density can be compared with observational data, permitting to test the robustness of the simulations. Figure 2 compares the luminosity density evolution derived from simulations with data by Hopkins et al. (2007). The agreement is quite satisfactory suggesting that the main physical aspects of the growth process are reasonably taken into account in the simulations.
Another successful comparison concerns the relation between the present SMBH mass with the central velocity dispersion of galaxies projected in the line of sight. This is done in figure 3. Black squares represent the masses of SMBHs at z = 0 derived from simulations while red squares represent data taken from the literature. There is a good agreement between simulated and observed data but some objects seem to have a higher black hole mass for the corresponding stellar velocity dispersion of their host galaxies. In particular, this is the case for NGC 5252 and Cygnus A as it can be seen in the considered figure. In our proposed scenario, these objects have not evolved intermittently but rather in a very short time scale in the early evolutionary phases of the universe, being presently the relics of such an active past.
If some SMBHs seem to have masses above that expected from the $M - \sigma$ relation as figure 3 suggests, there are other arguments indicating that these objects followed indeed a different evolutionary path. At a given redshift, the simulations permit to compute the mass distribution of SMBHs. This is shown in figure 4, which indicates that no SMBHs with masses above $10^7 ~ M_\odot$ is present at $z = 5$, in agreement with the evolution of individual black holes shown in figure 1. This means that the evolutionary path in which the BH mass grows as the host galaxy is assembled, which is probably the origin of the $M - \sigma$ relation, is unable to explain the existence of very massive BHs in the early universe or SMBHs present today in bright galaxies like NGC 5252 or Cygnus A. In the next sections an alternative evolutionary path will be examined.
The early formation of SMBHs
============================
As it was shown in the previous section, the coeval evolution of seeds and host galaxies is not able to explain the existence of bright QSOs at high redshift and the fact that in the local universe some objects have masses higher than that expected from the simulated $M - \sigma$ relation.
Could a unique accretion disk form a SMBH in a timescale of about 1 Gyr? The answer to this question implies the solution of different related problems. The disk must be quite massive in order to provide enough gas to form a $10^9 ~M_\odot$ black hole and the angular momentum transfer must be very efficient in order to maintain a high accretion rate necessary to provide the observed luminosities as well as a short timescale for the growth of the seed. In fact, numerical simulations suggest that after the merger of two galaxies, a considerable amount of gas is settled into the central region of the resulting object. The gas loses angular momentum in a timescale comparable to the dynamical timescale (Mihos & Hernquist 1996; Barnes 2002), forming circumnuclear self-gravitating disks having masses in the range $10^6 - 10^9 ~ M_\odot$ and dimensions of about 100 - 500 pc.
Massive accretion disks are, in general, self-gravitating in their early evolutionary phases, a situation that affects the usual dynamics of disks controlled only by the gravitational forces due to the central body. In fact, models of non steady self-gravitating accretion disks were computed by Montesinos & de Freitas Pacheco (2011, hereafter MP11) satisfying the aforementioned requirements or, in other words, they are luminous enough and they permit the growth of seeds in a short timescale, consistent with observations of bright QSOs at high $z$. Some aspects of the work by those authors will be shortly reviewed below, followed by the presentation of new results.
As mentioned before, the very early formation of a massive disk in the central region of a galaxy requires the presence of a large amount of gas. In fact, infrared sky surveys have discovered huge amounts of molecular gas (CO) at intermediate and high-z QSOs (Downes et al 1999; Bertoldi et al 2003; Weiss et al 2007). In particular, the detection of CO emission in the quasar J1148+5251 at $z = 6.42$ permitted an estimation of the molecular hydrogen mass present in the central region of the host galaxy that amounts to $M(H_2) \sim 10^{10}~M_\odot$ (Walter et al. 2009). Moreover, at least in the case of the quasar J1319+0950 ($z = 6.13$) there is a robust evidence that the gas is rotating (Shao et al. 2017), suggesting the presence of a gaseous disk. More recently, observations of J1342+0928 ($z = 7.54$) indicate important amounts of gas and dust revealed by the infrared continuum and by the \[CII\] line emission (Venemans et al. 2017). All these observations support the idea that some massive galaxies in their early evolutionary phases had large amounts of gas in their central regions, which could have formed the massive accretion disks required by our model.
If observations seem to support the scenario in which massive disks fed the seeds of SMBHs, the other question concerns the accretion timescale defining the growth of those seeds. The accretion rate is fixed by the mechanism of angular momentum transfer and depends on the gas viscosity mechanism. Presently there is no adequate physical theory able to describe the gas viscosity in the presence of turbulent flows or in the presence of magnetic fields. The angular momentum transfer is generally described by the formalism introduced almost forty five years ago by Shakura & Sunyaev (1973), in which the viscosity is due to subsonic turbulence and is parametrized by the relation $\eta = {\alpha}Hc_s$, where $\eta$ is the viscosity, $\alpha \leq 1$ is a free parameter of the theory, $c_s$ is the sound velocity and $H$ is the vertical scale of height of the disk. $H$ is supposed to be of the same order as the typical (isotropic) turbulence scale. However, disks based on such a formalism are, in general, thermally unstable as demonstrated long time ago by Piran (1978).
It is well known that self-gravitating disks may be also unstable but, in some cases, such an instability can be a source of turbulence in the flow (Duschl & Britsch 2006). Simulations of the gas inflow in the central regions of galaxies induced by the gravitational potential either of the stellar nucleus or the SMBH, reveal the appearance of highly supersonic turbulence with velocities of the order of the virial value (Regan & Haehenelt 2009; Levine et al. 2008; Wise, Turk & Abel 2008). No fragmentation is observed in such a gas despite of being isothermal and gravitationally unstable. This behavior can be explained if an efficient angular momentum transfer suppresses fragmentation. On the contrary, if the angular momentum transfer is inefficient, the turbulence decays and triggers global instabilities which regenerates a turbulent flow. Thus, one could expect that the flow will be self-regulated by such a mechanism. In this case, the flow must be characterized by a critical Reynolds number ${\cal R}$, determined by the viscosity below which the flow becomes unstable (de Freitas Pacheco & Steiner 1973). This critical viscosity is given by $$\label{visco}
\eta = \frac{2\pi r V_{\phi}}{{\cal R}}$$ where $r$ is the radial coordinate and $V_{\phi}=r\Omega$ is the azimuthal velocity of the flow.
Another difference between ”$\alpha$“-disks and ”critical viscosity” models is the local balance of energy that is fixed by the equilibration of the rate of the dissipated turbulent energy with the radiated and advected energy rates or, in other words $$\label{balance}
T_{r\phi}\frac{d\Omega}{dlg r} = \nabla\cdot F_{rad} + \varepsilon_{adv}$$ In the above equation the left side represents the rate of turbulent energy dissipated per unit volume, the first term on the right side gives the rate per unit volume of radiated energy and finally, the last term represents the rate per unit volume of advected energy. In eq. \[balance\], $T_{r\phi}$ is the $(r,\phi)$ component of the stress tensor, $\Omega$ is the angular flow velocity, $F_{rad}$ is the radiative flux and $\varepsilon_{adv}$ is the rate per unit volume of advected energy. In the ”$\alpha$“-disk model the considered stress component is given by $$T_{r\phi}=\alpha\rho Hc_s\left(\frac{d\Omega}{dlg r}\right)$$ while in the ”critical viscosity” model the stress is given by $$T_{r\phi}=\frac{2\pi}{{\cal R}}\rho r^2\Omega\left(\frac{d\Omega}{dlg r}\right)$$ The difference between the heating rates in both models implies that the temperature distribution along the disk is not the same and that the expected radiated spectrum of each disk model is also different as it will be discussed later.
In non-steady self-gravitating disk models, the dynamics of the disk evolves because the mass distribution changes with time as well as the mass of the central black hole. Near the BH the velocity is approximately Keplerian while beyond the transition region, where self-gravitation dominates, the rotational velocity decreases with distance more slowly than $1/\sqrt{r}$. Along the vertical axis, the disk is supposed to be in hydrostatic equilibrium. The vertical scale of height varies as the disk evolves. At early phases the disk is geometrically thick in the central regions due to radiation pressure effects. At late phases, the vertical scale of height increases with distance, approaching the behavior displayed by canonical non self-gravitating models. Additional details, including a description of the numerical code used to solve the hydrodynamic equations can be found in MF11 as mentioned previously.
Figure 5, adapted from MF11, shows some examples of models characterized by different values of the seed (100 or 1500 $M_\odot$) and of the critical Reynolds number (500, 1000 or 1500). Notice that an initial black hole of 100 $M_\odot$ can grow up to $3 \times 10^8 ~M_\odot$ in a timescale of only $10^8$ yr if the critical Reynolds number is 500 (black curve). Inspection of figure 5 shows that for the same initial mass, if the Reynolds number is increased (red and green curves), the rate of growth decreases. This can be explained by the fact that the accretion rate is inversely proportional to the viscous timescale, namely, $t_{vis}^{-1} \approx \eta/r^2 \approx \Omega/{\cal R}$. This is an immediate consequence of the fact that increasing the Reynolds number it is more difficult to generate the turbulence. Therefore the angular momentum transfer is less efficient, reducing the accretion rate.
It is important to emphasize that while the in the internal parts of the disk the gas flows inwards, the outer parts expand as a consequence of the transfer of angular momentum from inside to outside. Hence, only about 50% of the initial mass of the disk is in fact accreted by the black hole. In the region where the sign of the radial velocity changes (the ”stagnation“ point) a torus-like structure is formed, supporting the scenario of the so-called ”unified model” of AGNs. The models by MF11 indicate that a substantial fraction of the expanding gas remains neutral with a temperature in the range 100 - 2000 K most of the time. In the case of our own galaxy, such a behavior could be related to the molecular “ring” of 2 pc radius observed around Sgr $A^*$ (Gusten et al. 1987). The physical conditions prevailing in the outskirts of the disk are favorable to star formation and could be an explanation for the presence of massive early-type stars located in two rotating thin disks detected in the central region of the Milky Way (Genzel et al. 2003; Paumard et al. 2006).
Further tests of the model
--------------------------
In the very beginning of the disk evolution, the accretion rate (and the luminosity) increases very rapidly and then remains more or less constant during most of the growth process. At the final phases, the accretion rate decays very fast once half of the disk mass is captured by the central black hole. Such a behavior can be seen in figures 2 and 3 of the paper by MF11. Depending on the initial disk mass and on the critical Reynolds number, the activity phase corresponding to the luminosity maximum lasts for about $2\times 10^7$ up to $3\times 10^8$ years.
Despite the fact that the accretion rate (and the luminosity) varies very little during the active phase, the spectral distribution of the radiation emitted by the disk evolves. Such a spectral evolution is due to time variations of the optical depth radial profile as well as to time variations of the radial temperature distribution, as mentioned earlier. There is a continuous shift of the emission maximum toward longer wavelengths that is a consequence of the decreasing average disk temperature as a function of time. In general for wavelengths $\lambda \geq 0.15~ \mu m$ the spectral intensity can be well represented by a power-law, that is, $I_{\lambda} \propto
\lambda^{-\alpha}$, where the power index is in the range $0.9 < \alpha < 1.3$, in agreement with values derived from most of quasar spectra.
The modeling of the spectral emission of the disk permits an estimate of the bolometric correction. Usually, the luminosity in a given wavelength is derived from observations of monochromatic fluxes and luminosity distances, which depend on the redshift. The bolometric luminosity can be computed by adopting an adequate correction. Nemmen & Brotherton (2010) have estimated the bolometric correction for luminosities at $\lambda = 0.30 \mu m$ based on models by Hubeny et al. (2000). The grid of models by the latter authors assimilates to each annulus of the disk an effective temperature and gravity, which are used to compute the emergent spectrum of an equivalent stellar atmosphere defined by those parameters. The sum of the radiation from all annuli gives the resulting spectrum of the disk. However the effective temperature formula adopted by those authors is adequate for a steady disk whose dynamics is dominated by the central black hole. In the case of non-steady self-gravitating disks the situation is rather different because both the local gravity and the effective temperature vary with time. Fortunately, the bolometric corrections for the monochromatic luminosities at $\lambda = 0.30 \mu m$ or at $\lambda = 3.6 \mu m$ do not vary too much during the active phase and a suitable average correction can be defined.
The adopted procedure to estimate such a bolometric correction requires an adequate choice of the representative parameters of the disk, since the seeds must be able to grow in timescales less than 1 Gyr and form SMBHs with masses larger than $5\times 10^8 ~ M_\odot$. Figure 6 shows the surface ”M-age-${\cal R}$“ derived from a grid of models where the seed mass was fixed to 100 $M_\odot$. It worth mentioning that in such a plot the parameter ”age” means the timescale required for the seed to accrete 50% of the initial disk mass, the same definition that was adopted by MF11. Inspection of figure 6 indicates that Reynolds numbers in the range $1000 < {\cal R} < 2500$ are required in order to satisfy those constraints. Then, bolometric corrections at $\lambda$ = 0.30 $\mu m$ and at $\lambda$= 3.6 $\mu m$ were computed for a series of models characterized by ${\cal R}$ = 2200, mass of the seed equal to 100 $M_\odot$ and different initial disk masses, corresponding to about twice the final black hole masses. After averaging the results from different models, the corrections are simply given by
$$\log L_{bol} = \log \lambda L_{\lambda} + 0.83 \,\,\,\, for\,\, \lambda = 0.30~\mu$$
and $$\log L_{bol} = \log \lambda L_{\lambda} + 0.92 \,\,\,\, for\,\, \lambda = 3.6~\mu m$$ where the luminosities are given in $erg.s^{-1}$. In figure 7 the bolometric correction for $\lambda$ = 0.30 $\mu m$ derived from these models is compared with the correction adopted by Nemmen & Brotherton (2010) based on steady and non self-gravitating disk models. It should be emphasized that the present bolometric luminosities derived either from UV or infrared monochromatic luminosities are in very good agreement when the corrections above are applied.
The present disk model can also be tested by the comparison between theoretical predictions and data in the diagram bolometric luminosity versus black hole mass. Data on high redshift QSOs ($z \geq 6.0$), including masses, monochromatic luminosities at $\lambda$ = 0.30 or 3.6 $\mu m$ and redshift, compiled by Trakhtenbrot et al. (2017) were used in the calculations. Black hole masses were estimated from the width of MgII lines and monochromatic UV-luminosities were derived from the best-fit model of the Mg II emission line complex. Then, using the derived bolometric corrections, the bolometric luminosity of each object was estimated and plotted as a function of the black hole mass in figure 8. The ”Mass-Luminosity" relation derived from our models can be adequately represented by the fit $$\label{luminosity}
\frac{L_{bol}}{L_\odot} = 1.41\left(\frac{500}{{\cal R}}\right)\left(\frac{M_{seed}}{100M_\odot}\right)^{0.52}\left(\frac{M}{M_\odot}\right)^{1.5}$$ which depends essentially on the seed mass and on the critical Reynolds number. Such a relation for $M_{seed} = 100~M_\odot$ and ${\cal R}$ = 2200 is shown in figure 8 as a red line.
In figure 8 is also shown the expected relation for the Eddington limited luminosity (see eq. \[eddington\]), frequently used to estimate the mass of the black hole. Notice that the theoretical ”M-L“ relation approaches the Eddington limit for black holes masses greater than $10^{10}~M_\odot$. It is worth mentioning that identifying the bolometric luminosity with the Eddington limit leads to an underestimate of the black hole mass, as it can be seen in the considered plot, where the majority of the data points are below the expected Eddington limit line. On the other hand, the theoretical ”L-M” relation displayed in figure 8 shows clearly that our disk models radiate below the Eddington limit.
It should be emphasized that in the case of accretion disks, the balance between gravity and radiation pressure along the vertical axis must be considered locally. The disk is locally stable if the radiative flux along the vertical direction is not greater than a critical flux limit given by $$\label{critical}
F_{rad} = \frac{\sqrt{3}}{4\pi}\left(\frac{m_Hc}{\sigma_T}\right)\left(\frac{\tau_s}{\tau_{ff}}\right)^{1/2}g_z$$ In the above equation $\tau_s$ is the optical depth due to electron scattering, $\tau_{ff}$ is the optical depth due to free-free absorption and $g_z$ is the local vertical gravitational acceleration. The condition expressed by eq. \[critical\] is valid in the inner regions of the disk where the electron scattering dominates over free-free absorption and where radiation pressure effects are more important. When the vertical radiation flux is higher than the critical value, the hydrostatic equilibrium is destroyed and outflows can be generated. Three-dimensional radiation magneto-hydrodynamical simulations were performed by Jiang et al. (2017), who have studied the evolution of an accretion disk with torus centered on a $5\times 10^8~M_\odot$ black hole. The radiation pressure in the internal regions of the disk may reach values up to $10^6$ times the gas pressure under certain conditions, producing outflows. In these simulations, the angular momentum transfer is controlled by magnetohydrodynamic turbulence that is not the case of our models.
The present accretion disk models can be also tested in the diagram black hole mass versus age (figure 9). This plot is simply the projection of the surface displayed in figure 6 on the considered plane ”M-age". Since the age parameter defined above is not directly accessible from observations, the age of the universe derived from the observed redshift of the QSO was used to plot the objects listed by Trakhtenbrot et al. (2017). The age of the universe represents a robust upper limit to the age parameter of the model. Theoretical predictions shown in figure 9 (solid lines) were computed for the same value of the seed mass ($M_{seed}$ = 150 $M_\odot$) and for two different critical Reynolds number: ${\cal R}$ = 1800 and ${\cal R}$ = 2500. These two values enclose in the plot most of the observed high-z QSOs, strongly constraining this fundamental parameter of the model.
Conclusions
===========
Present astronomical data are not in contradiction with a scenario in which two different evolutionary paths exist for the formation of SMBHs from small mass seeds.
In the first evolutionary path, seeds having masses around 100 $M_\odot$ grow intermittently following the gradual assembly of the host galaxy, according to the hierarchical picture. In this case, the coeval evolution between the host galaxy and the seed must be investigated by cosmological simulations. This procedure is justified by the complexity of the physical mechanisms involved in the process of growth. As we have previously seen, these numerical experiments are able to reproduce the observed luminosity density of QSOs and the observed correlations between the black hole mass at $z =0$ and the properties of the host galaxy like the stellar luminosity or the central projected stellar velocity dispersion. Despite these successful results, these simulations are unable to form SMBHs with masses around $10^9~M_\odot$ at high redshift, unless the masses of the seeds are dramatically increased up $10^5 - 10^6 ~ M_\odot$. Although this could be a possibility and despite some studies in this sense, such an alternative seems to be unrealistic.
The existence of bright QSOs at $z \approx 6 - 7$ and the difficulty for cosmological simulations to form these objects points toward a new direction, that is, the possibility of a very fast growth of seeds fed by massive accretion disks. This picture is supported by observations of large amounts of gas and dust in QSOs at high-z as discussed before. Models of non-steady self-gravitating disks in which the angular momentum transfer is controlled by turbulent viscosity were developed by Montesinos & de Freitas Pacheco (2011). These models have demonstrated that seeds can grow in timescales of the order of 1 Gyr or even less, being able to explain the main features of QSOs observed at high-z.
Further investigations on these ”critical-viscosity“ disks permitted an estimation of the bolomentric correction that should be applied to monochromatic luminosities measured at 0.30 $\mu m$ and 3.6 $\mu m$. These corrections permitted a comparison of existing data with theoretical predictions in the diagram ”Luminosity-Mass”. Such a plot strongly suggests that accretion disks radiate below the so-called Eddington limit. This means that black hole masses derived from such a limit are underestimated. Another useful diagram permitting the comparison of model predictions with data is the plot ”Mass - age". Here it is necessary to recall the remarks done before, that is, the age derived from the redshift is the age of the universe at that moment, representing only a robust upper limit to the disk age. Nevertheless, despite such limitations, both diagrams permit to constrain the two important parameters of the model, namely, the mass of the seeds and the critical Reynolds number. The former is probably in the range 100-150 $M_\odot$ while the latter should be in the interval $1800 < {\cal R} < 2500$.
Finally, it worth mentioning that some SMBH in the local universe ($z = 0$) have masses above that expected from simulations. This is the case of NGC 5252 and Cynus A as already mentioned, but may also be the case of NGC 3115 and probably of NGC 4594. This last object is more uncertain since its estimated mass is only 4.4 times greater that that expected from the simulated ”M-$\sigma$" relation. These objects are probably the remnants of a fast growth occurred in the early evolutionary phases of the universe and not the consequence of a coeval evolution involving the seed and the host galaxy.
Barnes J.E., 2002, Month.Not.Roy.Ast.Soc., 333, 481 Bertoldi F., Cox P., Neri R. et al., 2003, Astron.&Astrophys., 409, L47 Bertschinger E., arXiv:astro-ph/9506070 Burkert A. and Silk J., 2001, Astrophys.J., 554, L151 de Freitas Pacheco J.A. and Steiner J.E., 1976, Astrophys.Sp.Sci., 39, 487 Durier F. and de Freitas Pacheco J.A., 2011, Int.J.Mod.Phys., E20, 44 Downes D., Neri R., Wiklind T., Wilner D.J. and Shaver P.A., 1999, Astrophys.J., 513, L1 Duschel W.J. and Britsch M., 2006, Astrophys.J., 653, L92 Eisenstein D.J. and Loeb A. 1995, Astrophys.J., 443, 11 Ferrarese L. and Merrit D. 2000, Astrophys.J., 539, L9 Filloux Ch., Durier F., de Freitas Pacheco J.A. and Silk J., 2010, Int.J.Mod.Phys., D19, 1233 Filloux Ch., de Freitas Pacheco J.A., Durier F. and de Araújo J.N.C., 2011, Int.J.Mod.Phys., D20, 2399 Flammang R.A., 1982, Month.Not.Roy.Ast.Soc., 199, 833 Galli D., Palla F., 1998, Astron.&Astrophys., 335, 403 Gebhardt K., Bender R., Bower G. et al., 2000, Astrophys.J., 539, L13 Genzel R., Schodel R., Ott T. et al., 2003, Astrophys.J., 594, 813 Graham A.W. 2007, Month.Not.Roy.Ast.Soc., 379, 711 Gusten R., Genzel R., Wright, M.C.H. et al., 1987, Astrophys.J., 318, 124 Haehnelt M.G. and Rees M.J., 1993, Month.Not.Roy.Ast.Soc., 263, 168 Haring N. and Rix H.-W., 2004, Astrophys.J., 604, L89 Hopkins P.F., Richards G.T. and Hernquist L., 2007, Astrophys.J., 654, 731 Hubeny I., Blaes O., Krolik J. H. and Agol E., 2001, Astrophys.J., 559, 680 Inayoshi K., Haiman Z. and Ostriker J.P., 2016, Month.Not.Roy.Ast.Soc., 459, 3738 Jiang Y.-F., Stone J. and Davis S.W., 2017, arXiv:1709.02845 Kaplan S.A., 1954, Dokl.Akad.Nauk.SSSR, 94, 33 Kaplan S.A. and Pikel’ner S.B., 1970 in The Interstellar Medium, Cambridge, Harvard University Press Katz N., Weinberg D.H.and Hernquist L., 1996, Astrophys.J.Supp., 105, 19 Koide S., Shibata K., Kudoh T.,and Meier D.L., 2002, Science, 295, 1688 Kormendy J. and Gebhardt K. 2001, in AIP Conf.Proc. 586, 20th Texas Symposium on Relativistic Astrophysics, ed. J.C. Wheeler & H. Martel, NY, 363 Kormendy J. and Richstone, 1995, Ann.Rev.Astron.&Astrophys., 33, 581 Koushiappas S.M., Bullock J.S. and Dekel A., 2004, Month.Not.Roy.Ast.Soc., 354, 292 Levine R., Gnedin N.Y., Hamilton A.J.S. and Kravtsov, A. V., 2008, Astrophys.J., 678, 154 Magorrian et al., 1998, Astron.J., 115, 2285 Maraschi L., Reina C. and Treves A., 1974, Astron.&Astrophys., 35, 389 Marconi A. and Hunt, L.K. 2003, Astrophys.J., 589, L21 Merrit D. and Ferrarese L. 2001, Month.Not.Roy.Ast.Soc., 320, L30 Mihos C. and Hernquist L., 1996, Astrophys.J., 464, 641 Milosavljevic M., Couch S.M. and Bromm V., 2009, Astrophys.J., 696, L146 Montesinos M. and de Freitas Pacheco J.A. (MF11), 2011, Astron.&Astrophys., 526, A146 Navarro J.F. and White S.D.M., 1993, Month.Not.Roy.Ast.Soc., 265, 271 Nemmen R.S. and Brotherton M.S., 2010, Month.Not.Roy.Ast.Soc., 408, 1598 Nobili L., Turolla R. and Zampieri, L., 1991, Astrophys.J., 383, 250 Pacucci F., Volonteri M. and Ferrara A., 2015, Month.Not.Roy.Ast.Soc., 452, 1922 Paumard,T., Genzel R., Martins F. et al., 2006, Jour.Phys. Conf. Ser., 54, 199 Peebles P.J.E., 1993, in Principles of Physical Cosmology, Princeton University Press, p. 165 Peirani S. and de Freitas Pacheco J.A., 2008, Phys.Rev.D, 77, 064023 Piran T., 1978, Astrophys.J. 221, 652 Regan J.A. and Haehnelt M.G., 2009, Month.Not.Roy.Ast.Soc., 396, 343 Richstone D., Ajhar E.A., Bender R. et al., 1998, Nature, 395, 14 Salpeter, E.E., 1964, Astrophys.J., 140, 79 Sazonov S.Yu., Ostriker J.P., Ciotti L. and Sunyaev R.A., 2005, Month.Not.Roy.Ast.Soc., 358, 168 Schmidt, M., 1963, Nature, 197, 1040 Shao Y., Wang R., Jones G.C. et al., 2017, Astrophys.J., 845, art. 138, 7pp Shakura N.I. and Sunyaev R.A., 1973, Astron.&Astrophys., 24, 337 Shapiro S.L. 2004, Astrophys.J., 613, 1213 Shvartsman, V.F., 1971, Sov. Astron. (AJ), 15, 377 Silk J. and Rees M.J., 1998, Astron.&Astrophys., 331, L1 Small T.A. and Blandford R.D., 1992, Month.Not.Roy.Ast.Soc., 259, 725 Springel V., 2005, Month.Not.Roy.Ast.Soc., 364, 1105 Soltan A. 1982, Month.Not.Roy.Ast.Soc., 200, 115 Sutherland R.S. and Dopita M.A., 1993, Astrophys.J.Supp., 88, 253 Trakhtenbrot B., Volonteri M. and Natarajan P., 2017, Astrophys.J., 836, L1 Tremaine S., Gebhardt K., Bender R. et al. 2002, Astrophys.J., 574, 740 Venemans B.P., Walter F., Decarli R. et al., 2017, arxiv:1712.01886 Walter F., Riechers D., Cox P. et al., 2009, Nature, 457, 699 Weiss A., Downes D., Walter F. and Henkel C., 2007, ASP Conference Series, 375, 25 Wise J.H., Turk M.J. and Abel T., 2008, Astrophys.J., 682, 745 Wu, X.-B., Wang, F., Fan, X., et al. 2015, Nature, 518, 512
|
{
"pile_set_name": "ArXiv"
}
|
Conventionally, there is a known vehicle brake hydraulic pressure control apparatus including driving force estimation section for estimating a drive torque transferred from an engine to the drive wheels of a vehicle, backward force estimation section for estimating a backward force acting on the vehicle based on a road surface gradient, and braking force addition section for adding a braking force balanced with the backward force to the wheels based on the deviation between the drive torque and the backward force estimated by the driving force estimation section and the backward force estimation section, as a vehicle brake hydraulic pressure control apparatus holding the brake hydraulic pressure so as to, for example, keep the halt state of the vehicle (see PATENT LITERATURE 1). Specifically, this technique sets the braking force so that the sum of the drive torque and the braking force equals the backward force.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
361 Mass. 156 (1972)
279 N.E.2d 901
FRANK B. DOUCETTE
vs.
DIANNE P. DOUCETTE & another.
Supreme Judicial Court of Massachusetts, Suffolk.
December 9, 1971.
February 11, 1972.
Present: TAURO, C.J., CUTTER, QUIRICO, BRAUCHER, & HENNESSEY, JJ.
Howard J. Alperin for the defendant Dianne P. Doucette.
Bernard R. Silva, Jr., for the plaintiff.
TAURO, C.J.
The defendant Dianne P. Doucette appeals from a final decree in equity fixing the ownership rights in certain property between herself and the plaintiff, her former husband Frank B. Doucette. The Superior Court in its final decree ordered the defendant Dianne to pay the plaintiff the sum of $3,296.51 as his share in two bank accounts, jointly held, from which Dianne had withdrawn all funds. The defendant was also ordered to transfer certain shares of stock and the dividends therefrom to the plaintiff. The court dismissed the defendant's counterclaim of an interest in certain real estate, shares of stock and insurance policies.
*157 The judge below adopted his "Findings, Rulings and Order for Decree" as a report of material facts. The judge found that the plaintiff did not intend to make a gift to the defendant Dianne of any portion of the Enterprise Co-operative bank account in dispute, that each party was the owner of one-half of the Hyde Park Co-operative bank account,[1] and that the plaintiff did not intend to make a gift of part of the stock in dispute to Dianne. The judge also found that there was no intention to give or transfer the insurance policies to Dianne but, due to the inadequacy of the record, the court made no determination as to the policies. There was no error.
No useful purpose will be served in an elaborate recitation of the facts. The judge's detailed subsidiary findings amply support his ultimate findings and conclusions. "The determination of the interest ... [the parties] had in the deposits in the joint accounts is dependent primarily on what their intention was, and this is a question of fact." Buckley v. Buckley, 301 Mass. 530, 531. Nowicki v. Nowicki, 335 Mass. 392. And in equity, "the findings of a judge made on oral testimony are not to be reversed unless they are plainly wrong." Russell v. Meyers, 316 Mass. 669, 672. Boston v. Santosuosso, 307 Mass. 302, 332.
While it is true there is a rebuttable presumption that money or other property delivered by a husband to his wife is intended as a gift, advancement, or settlement for her benefit (Powell v. Powell, 260 Mass. 505, 508; Thompson v. Thompson, 312 Mass. 245, 247), we cannot say that the judge, on the basis of conflicting oral evidence, was plainly wrong in finding that the plaintiff had rebutted the presumption in the present case. The defendant's reliance on G.L.c. 170, § 15, to support her contention that she was entitled to money withdrawn from the two bank accounts is misplaced. The statute governs only the rights between the bank and its depositors and not the *158 rights between the parties. "It is settled that, while the contract of deposit is conclusive as between the parties and the bank ... nevertheless ... it is still open ... to show by attendant facts and circumstances that the ... [plaintiff] did not intend to make a present completed gift of a joint interest in the account, and that the mere form of the deposits does not settle the matter." Ball v. Forbes, 314 Mass. 200, 203-204. Malone v. Walsh, 315 Mass. 484, 486. Drain v. Brookline Sav. Bank, 327 Mass. 435, 440-441.
Decree affirmed.
NOTES
[1] The other defendant in this case was Hyde Park Co-operative Bank. The bank filed no brief on appeal.
|
{
"pile_set_name": "FreeLaw"
}
|
Lest you forget, Fox was sure to make you aware that this is Derek Jeter's final season in the majors. The Captain's name was spoken no fewer than 100 times on tonight's All-Star Game broadcast, but at what cost? That of remembering people like Tony Gwynn, Don Zimmer, or Bob Welch—none of whom were mentioned during the broadcast.
Some have called for a mid-game "in memoriam" break, similar to those seen during Hollywood awards shows. That would probably make an already notoriously-long broadcast even longer, but what other opportunity does the game of baseball have to honor those who have passed? (When Ted Williams died prior to the 2002 All-Star Game, MLB responded by naming the game's MVP award after him; then, due to the game ending in a tie, didn't give the award out after all.)
[Fox]
|
{
"pile_set_name": "OpenWebText2"
}
|
Podcast-serie Det er svært at forestille sig, hvordan det er at være til stede i et lokale, hvor en mand med et jagtgevær løber rundt med det ene formål at dræbe så mange som muligt. Hvordan reagerer man i det, hvordan forfølger de rædsomme oplevelser én bagefter, og er man stadigvæk mærket af det den dag i dag, selvom det er 25 år siden?
Lyt til 2. afsnit af Skyderiet på Aarhus Universitet herunder
Det ved de personer, der var til stede 5. april 1994 i Aarhus Universitets bygning på Trøjborg, hvor en 35-årig mandlig studerende stod bag danmarkshistoriens første og eneste skoleskyderi.
Læs også Interaktivt kort : Sådan foregik skyderiet på Aarhus Universitet
I Lokalavisen Aarhus' andet afsnit af podcast-serien om skyderiet på Aarhus Universitet dykker journalist Magnus Bekholm ned i, hvordan de overlevende blev hjulpet i dagene efter skuddramaet, og så kigger han på, hvordan flere stadigvæk er forfulgt af skyderiets rædsler.
"Jeg går ikke bare ind i dag og sætter mig 'blindt'. Jeg orienterer mig i rummet og ved, at der er en vej mere ud, eller jeg ved, at der kun er den her vej ud. Det er ikke noget, jeg tænker over. Det er bare naturligt," lyder det blandt i podcast-afsnittet fra Jan Storm.
Måtte sige op
Han var Falck-redder og en af de første på stedet den april-dag i 1994. Det syn og den frygt for gerningsmanden, han oplevede den dag i universitetsbygningen, har forfulgt ham efterfølgende. Men han er ikke den eneste, der har det sådan.
Falck-redder Jan Storm tog imod psykologhjælpen, efter han havde været til sted ved skuddramaet på Aarhus Universitet. Foto: Magnus F. Bekholm
For Karen Gleit, der stod i kantinen den dag for 25 år siden, har skyderiet haft voldsomme konsekvenser, fortæller hun i podcast-afsnittet.
Halvandet år på efter hændelsen begyndte følelserne og minderne langsomt at snige sig ind på hende, når hun opholdt sig i kantinen.
Læs også Chefredaktørens klumme: Skoleskyderiet i Aarhus
"En dag jeg tog mig selv i at løbe, så meget jeg kunne, igennem gangene og ud, og så tænkte jeg, 'at det her er bare helt galt. Det går slet ikke'," fortæller Karen Gleit i andet afsnit af 'Skyderiet på Aarhus Universitet'.
Tre år efter skuddramaet sagde hun derfor op.
Første afsnit af Lokalavisen Aarhus' podcast-serie om skuddramaet for næsten 25 år siden er ude nu. Søg på 'Skyderiet på Aarhus Universitet' dér, hvor du normalt lytter til dine podcasts. Grafik: Klaus Møller
Også Claus Skade Sørensen, der som studerende befandt sig i kantinen under skyderiet, har mærket, hvordan oplevelserne har påvirket hverdagen.
"I tiden efter har jeg det sådan, at jeg er meget opmærksom i større forsamlinger. Jeg sætter mig gerne ned, så jeg kan se indgange med ryggen mod muren og kigger, hvor jeg kan flygte hen. Der går et stykke tid, hvor jeg er ekstra opmærksom i forsamlinger og i forhold til flugtveje," fortæller han i afsnittet.
Læs også 25 år siden i år: Ny podcast-serie om skyderiet på Aarhus Universitet er ude nu
Afsnit to er ude nu og kan høres i iTunes, på Spotify, Soundcloud og på diverse andre podcast-platforme. Søg efter 'Skyderiet på Aarhus Universitet' dér, hvor du normalt hører dine podcasts. Hvis du abonnerer på podcasten, er du sikker på at få en besked om næste afsnit, så snart det er lagt op.
|
{
"pile_set_name": "OpenWebText2"
}
|
Mechanics of cellular packing of nanorods with finite and non-uniform diameters.
To understand the mechanics of cellular/intracellular packing of one-dimensional nanomaterials, we performed theoretical analysis and molecular dynamics simulations to investigate how the morphology and mechanical behaviors of a lipid vesicle are regulated by encapsulated rigid nanorods of finite and non-uniform diameters, including a cylindrical rod, a rod with widened ends, a cone-shaped rod, and a screwdriver-shaped rod. As the rod length increases, the vesicle evolves from a sphere into different shapes, such as a lemon, a conga drum, a cherry, a bowling pin, or a tubular shape for long and thick rods. The contact between the vesicle protrusion and the rod plays an important role in regulating the vesicle tubulation, membrane tension, and axial contact force on the rod. Our analysis provides a theoretical basis to understand a wide range of experiments on morphological transitions that occur in cellular packing of actin or microtubule bundles, mitotic cell division, and intracellular packing of carbon nanotubes.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Man charged with DUI
Published 1:53 pm, Monday, September 24, 2012
Matthew Longcore, 40, of Norwalk, was charged early Thursday with driving under the influence.
Longcore was seen driving south on Saugatuck Avenue and almost struck the curbing as he veered sharply into the right lane, police said. As he turned onto the entrance ramp of Interstate 95, he failed to use a directional signal and was stopped, police said.
An officer said he smelled alcohol on Longcore's breath while he was speaking with him, according to the report. Longcore also failed field sobriety tests, police said.
Longcore was charged with operating under the influence of drugs or alcohol. He was released on $500 bond and is scheduled to appear Sept. 28 in Norwalk Superior Court.
|
{
"pile_set_name": "Pile-CC"
}
|
Internet Service Providers
An Internet Service Provider (ISP) is a company that offers internet and network services to its clients. Dial-up (phone line) and cable or DSL based broadband connections are the most common internet services offered by ISPs. Modern technology and digitization have made it necessary for everyone to be familiar with the internet. Whether it is for business or personal use, almost everyone needs access to a reliable and fast internet connection. ISPs usually charge you in exchange for the internet services they offer.
Here are some of the most popular Internet Service Providers in the United States.
Verizon Fios is a fast internet solution based on 100% fiber-optic network for fast and reliable connectivity. Verizon Fios offers home, business and Wireless internet solutions to suit the needs of different types of users. All plans include super-fast speed and easy wireless sharing. Verizon home internet plans are ideal for basic internet uses, such as video streaming, gaming, shopping, web surfing, social media, and more.
AT&T internet service is suitable for anyone, looking for a high speed internet service provider with great offers. Some of the internet services offered by AT&T include Wireless internet, U-Verse internet, DSL High Speed Internet, Dial up, etc. Whether you want an internet service for your home or a Wireless internet device, AT&T has a plan for you. U-Verse is the most popular product that offers a high-speed, reliable and well-connected internet.
NetZero is an internet service provider specializing in mobile broadband, WiFi, DSL, and Dial-up internet services. It offers multiple mobile data plans to suit the needs of different users. The home internet plans include dial-up and DSL broadband internet services for home based users and businesses. NetZero mobile broadband or WiFi devices allow users to access the internet anywhere, on-the-go, or at the home.
CenturyLink offers high speed internet services for homes and businesses. CenturyLink home internet services are ideal for all individual and basic internet needs with different plans available for different internet speed and features. Even the lowest speed plans allow live streaming movies and shows, home Wi-Fi, Skype, and much more. CenturyLink business plans are designed to suit the needs of small, medium and enterprise level businesses.
Cox internet service provider offers internet services to homes and businesses all over the United States. Cox residential internet plans include high speed internet with in-home Wi-Fi at affordable prices that allow fast download, live streaming, online gaming, and much more. Cox business class solutions are suitable for businesses of every industry and size. Business internet plans include WiFi, online backup and high level security, and other services to suit individual business needs.
EarthLink internet service provider (ISP) specializes in high speed and low cost broadband and dial-up internet services. They have internet plans for both residential and business purposes. EarthLink home internet plans include dial-up, DSL, wireless and high-speed broadband cable internet options with features like, high speed modem, online security, 24/7 support, and many more. Business internet solutions include MPLS networks, hosted voice, and more.
Time Warner Cable is a cable TV, phone and high speed internet service provider. They specialize in fast internet for live streaming and high definition cable TV services. Internet solutions are available for both residential and business areas with different plans for different needs. TWC home internet features include secure home WiFi, unlimited internet, HD streaming, TWC WiFi hotspots for wireless connection, free calls, and much more.
Xfinity Comcast offers high speed internet services at affordable prices to its users. Xfinity specializes in super-fast internet speed that allows users to stream multiple videos at a time. It is ideal for personal and households use with capacity to operate multiple devices. Users, who need more speed and features, can choose from higher speed plans that allow access to millions of hotspots nationwide with ultra-fast internet speed.
Charter Spectrum is a complete internet solution for home internet users. With some of the most affordable internet plans, it offers features and services like, fast internet speeds, no data caps, virus detection and online security, multiple devices connectivity, and more. Charter is committed to provide fast internet experience to its users, allowing them to access super fast video streaming, play online games, download data, upload photos, and access over multiple devices without compromising speed.
Frontier is a popular internet Service Provider (ISP) that offers various residential, small business and enterprise internet packages. High-Speed Internet is the front aspect of Frontier that involves high internet browsing and downloading speed to allow users to play online games, watch videos, video chat, and stream movies with Wi-Fi access on multiple devices. Other features include live TV, digital security, free voice calls, internet privacy, and many more.
Optimum internet services are available for individuals and home users with access to multiple Wi-Fi hotspots in public places. All users get a dedicated web-based email to manage their internet subscriptions and other activities. Some of Optimum internet features include millions of hotspots, internet protection, hotspot finder app, HD video streaming, online router management, always up web hosting, SpamScrub, and more.
Suddenlink offers fast internet services to users in their existing locations or makes it easier to move. It specializes in fast, reliable broadband internet services with internet TV, live streaming, and multiple other options. Suddenlink2GO app allows users to watch TV on the go on any device, including smartphone and tablet. WiFi@Home is a home internet service that allows internet connection in multiple devices wirelessly.
Juno is a dial-up internet service provider service that offers multiple internet plans to suit the needs of different customers. It is ideal for home and residential areas that are in the need of fast and reliable internet connection. Dial-up internet provides services like, unlimited internet, 5x faster internet speed, email with spam protection, antivirus, and more. Those, who need even more speed, can choose from DSL/ broadband plans.
Bright House Networks offers high speed home internet services to users in the United States. In addition to home based internet, they also offer on-the-go WiFi internet service to allow internet access using free WiFi hotspots. Lightning fast internet speed enables users to stream video, play online games, video chat, and much more. There are available multiple plans to suit the needs and budget of different users and households.
Google Fiber is an ultra-fast internet service from Google that offers network connections up to thousands megabytes per second for faster downloads and internet browsing. The ISP is suitable for any home business or domestic use that involves internet surfing, video chatting, online gaming, downloading & uploading, and much more. Other notable features include free cloud storage, wireless & wired connections, and many more.
Cable ONE is an internet service provider that offers domestic and business internet services to people all over the country. Some of its benefits include speed reliability for continuous surfing, fast internet, and best value for money. It offers multiple internet plans to suffice the needs and budget of different users. All plans include services like, internet access to multiple devices, video streaming, WiFi, free email accounts, and more.
FreedomPop offers free Wireless broadband internet services for mobile users to allow unlimited internet access from home or on-the-go. FreedomPop free WiFi network is accessible to all registered users in specific locations. Users can choose from available devices or can use their own device to access the fast, reliable internet service. All devices are featured with specific internet plans and free data with WiFi facility.
Wow! is an internet service provider company that offers high-speed internet services for homes and businesses. Their services are offered in the Midwest and Southeast areas, covering all major states of the country. Wow! ISP offers multiple internet plans based on the needs for internet speeds, purpose, location, and other features. Wi-Fi connection allows users to connect multiple devices to the internet at one time without compromising speed.
Mediacom offers high-speed, reliable internet service for all your personal and household needs. With WiFi and cable connection, it allows multiple users to access the internet simultaneously for searching the web, downloading, and streaming. It is suitable for a family of any number of members or a basic home based business or freelancers. Users can choose from multiple plans with different speeds and data usage for different needs.
Windstream is an ideal internet service provider for residential, small business and enterprises. Windstream residential internet package includes high speed internet plans with digital TV, phone service, WiFi, security packs, and much more. The plan is suitable for any household and allows internet connectivity for multiple members and devices. Small business and enterprise internet plans are available with multiple options and features to suit different business needs.
RCN provides fiber-based high-speed internet services to domestic and residential users. Their internet plans are suitable for online streaming, watching online tv, online gaming, home business or freelancing, WiFi, and more. Some important features include same day installation, 24/7 customer support, US based customer service, WiFi and cable internet, secure connection, and more. Wireless function allows internet connectivity to multiple devices simultaneously without compromising performance.
PeoplePC expertise in low-cost dial-up internet services with fast and reliable networks. Its main features and services include, accelerator technology for faster surfing, unlimited internet access, smart dialer for more reliability, anti-virus & anti-spam for online protection, and more. They also offer broadband internet services for users who need even faster net speed. The service offers affordable plans with lightning-fast network speed and no busy signals.
Armstrong Zoom Internet is suitable for all basic and household internet uses, including work from home, downloads, music sharing, social networking, photo uploads, online tv, gaming, or basic net surfing. All Zoom plans are featured with high-speed, reliable internet, free 24/7 support, and free premium virus protection. Other features include multiple internet plans with different speeds, Zoom share for wireless connectivity, Zoom WiFi locations, and much more.
Cincinnati Bell is an internet service provider for households and businesses in the Cincinnati area. They specialize in high-speed internet, Wireless, TV and phone services for all home, family and individual needs. Residential network packages include various high-speed internet plans to suit different needs of home workers, freelancers, basic net users, and WiFi users. Connect Cincinnati apps provides access to free WiFi hotspots to its users.
MegaPath offers fast and affordable DSL internet services to small and medium-sized businesses or home offices. With optimum speed and high performance, MegaPath DSL ensures high scalability and reliability to fit every business need. In addition to DSL broadband, it also offers Ethernet, managed WiFi, T3 lines, and other services. Different business internet plans and solutions are available based on different business sizes, industries and needs.
Wave is a local internet service provider (ISP) service in the northwest area that specializes in high-speed internet bundles with free TV and phone calls. Wave ISP is suitable for both home and business network requirements. Home business packages include multiple high-speed internet plans and home networking services powered with Wave G Fiber. Other key features include reliable connection, 24 hour technical support, package switching facility, and more.
DSL Extreme specializes in high-speed internet service that suits the needs and budget of everyone. They have multiple internet plans with different speeds and prices. Users can try internet plans and choose the most suitable and affordable ones. In addition to broadband connection for home based businesses, individuals and basic users, DSL Extreme provides internet connectivity to small and large businesses with dedicated DSL, T1 & Ethernet connections.
Midcontinent offers fast, reliable broadband internet connections for households and domestic users. Their internet service Midco Xstream is famous for flawless streaming, fast download and smoother gaming experience. It is available in different plans with different upload/download speeds for different user needs and budgets. Other features and services offered by Midcontinent include Email/ webspace, network device, security, technical assistance, speed test, cloud storage, and more.
Sonic internet service provider offers high speed gigabit fiber internet services with free nationwide phone calls with their fixed monthly plans. Users can choose a suitable network plan and enjoy super-fast internet speed of up to 1Gbps with Sonic. Their network services are feasible both for home and business purposes. Home internet plans are designed to be fast, reliable and affordable to suit the needs and budgets of common men.
FairPoint specializes in internet and internet phone & TV services for residential and business areas. They are suitable for individuals, home workers, home uses, small & medium size businesses, and enterprises. Users can choose from available plans for their area from variety of internet speeds and data limits. The seasonal suspend program allows users to put their phone or internet services on hold while they are away.
Rise Broadband is a fast, reliable wireless broadband internet solution for homes, business and residential areas. Their super fast networks make it easier for users to watch HD movies online, play HD games with no buffering, surf the internet, chat on Skype, or download anything in less time. Rise Broadband internet connections are available for all your devices and in multiple states in the country.
TOAST.net offers US based nationwide dial-up, DSL and wireless network services for residential and business internet needs. Users can choose from cable internet, high-speed DSL, mobile hotspot or dial-up services to suit their specific speed and accessibility needs. Some of these plans are available nationwide while others are location-based. All network plans guarantee high-speed, uninterrupted internet, great for online gaming, Netflix, video streaming, YouTube, chat, and more.
Grande is an ISP that offers high-speed internet services to communities, residential, and businesses in the Texas area. Users are given multiple bundled and internet specific plans to choose from to suit different needs of upload & download speeds. All plans are equipped with free email accounts that can be accessed through Grande’s web interface. Users can anytime call Grande customer service to ask any technical questions about their desired plans.
Atlantic Broadband network service is a suitable ISP for home and business internet needs. They offer affordable yet fast and powerful internet plans along with phone & cable TV services for residential areas in the US states. High speed internet allows users to watch live TV online, surf the internet, video chat, download or upload anything, and much more. Wireless home networking provides internet access to multiple devices simultaneously.
ISP.com is a broadband internet service provider that offers dial-up and high speed DSL network access to its users in multiple locations across the country. With a very affordable standard dial-up plan, users can enjoy high-speed internet with uninterrupted web surfing. For more speed, users can choose high speed dial-up or DSL extreme plans that specialize in ultra-fast, always on connection with no busy signals, suitable for all home and office needs.
|
{
"pile_set_name": "Pile-CC"
}
|
---
bibliography:
- 'References.bib'
---
\
Danijel Grahovac$^1$[^1], Nikolai N. Leonenko$^2$[^2]\
\
**Abstract:** Multifractal analysis of stochastic processes deals with the fine scale properties of the sample paths and seeks for some global scaling property that would enable extracting the so-called spectrum of singularities. In this paper we establish bounds on the support of the spectrum of singularities. To do this, we prove a theorem that complements the famous Kolmogorov’s continuity criterion. The nature of these bounds helps us identify the quantities truly responsible for the support of the spectrum. We then make several conclusions from this. First, specifying global scaling in terms of moments is incomplete due to possible infinite moments, both of positive and negative order. For the case of ergodic self-similar processes we show that negative order moments and their divergence do not affect the spectrum. On the other hand, infinite positive order moments make the spectrum nontrivial. In particular, we show that the self-similar stationary increments process with the nontrivial spectrum must be heavy-tailed. This shows that for determining the spectrum it is crucial to capture the divergence of moments. We show that the partition function is capable of doing this and also propose a robust variant of this method for negative order moments.
Introduction
============
The notion of multifractality first appeared in the setting of measures. The importance of scaling relations was first stressed in the work of Mandelbrot in the context of turbulence modeling ([@mandelbrot1972; @mandelbrot1974]). Later the notion has been extended to functions and studying fine scale properties of functions (see [@muzy1993multifractal; @jaffard1997multifractal1; @jaffard1996old]). In this setting, multifractal analysis deals with the local scaling properties of functions characterized by the Hausdorff dimension of sets of points having the same Hölder exponent. Hausdorff dimension of these sets for varying Hölder exponent yields the so-called spectrum of singularities (or multifractal spectrum). The function is called multifractal if its spectrum is nontrivial, in the sense that it is not a one point set.
However, from a practical point of view, it is impossible to numerically determine the spectrum directly from the definition. Frisch and Parisi ([@frisch1985fully]) were the first to propose the idea of determining the spectrum based on certain average quantities, as a numerically attainable way. In order to relate this global scaling property and the local one based on the Hölder exponents, one needs “multifractal formalism” to hold. This is not always the case and there has been an extensive research on this topic (see [@jaffard1997multifractal1; @riedi1995improved; @jaffard1997multifractal2; @jaffard2000frisch; @riedi1999multifractal]). In order to overcome the problem, one takes the other way around and seeks for different definitions of global and local scaling properties that would always be related by a certain type of multifractal formalism (see [@jaffard2006wavelet] for an overview in the context of measures and functions). Many authors claim that wavelets provide the best way to specify the multifractal formalism, both theoretically and numerically (see e.g. [@jaffard2006wavelet; @bacry1993singularity]).
For stochastic processes, the local scaling properties can be immediately generalized by simply applying the definition for a function on the sample paths. As a global property, the extension is not so straightforward. In [@MFC1997MMAR], the authors present a theory of multifractal stochastic processes and define the scaling property in terms of the process moments. The underlying idea is to define a scaling property more general than the well known self-similarity. However, this can lead to discrepancy. For example, $\alpha$-stable Lévy processes with $0<\alpha<2$ are known to be self-similar with index $1/\alpha$. On the other hand, it follows from [@jaffard1999] that the sample paths of these processes exhibit multifractal features in the sense of the nontrivial spectrum.
The goal of this paper is to make a contribution to the multifractal theory of stochastic processes by exhibiting limitations of the existing definitions and proposing methods to overcome these. The issue of infinite moments has so far been discussed mostly as a problem of the estimation methods for determining the spectrum and has been a major critic for the partition function method. To our best knowledge, our results are the first that link heavy-tails of self-similar processes with their path irregularities in this sense. It is an intriguing fact that in this case, ignorant estimation of infinite moments will yield the correct spectrum. The bounds on the support of the spectrum we derive can be used to easily detect trivial spectrum. We do this for the class of Hermite processes. Although these bounds are very general, we later restrict our attention to stationary increments processes. We consider only $\mathbb{R}$-valued stochastic processes and our treatment is intended to be probabilistic.
The paper is organized as follows. In the next section we formally state different definitions of multifractal stochastic processes and recall some implications between them. We also discuss the multifractal formalism and different estimation methods. In Section \[sec3\] we derive general bounds that determine the support of the multifractal spectrum and relate the bounds with the moment scaling properties. We show implications of these results for self-similar stationary increments processes. Section \[sec4\] provides examples of stochastic processes from the perspective of different definitions. We show how the results of Section \[sec3\] apply for each example. In Section \[sec5\] we propose a simple modification of the partition function method that overcomes divergencies of negative order moments. We illustrate on the simulated data the advantages of this modification. Appendix contains some general facts about processes considered in Section \[sec4\].
Definitions of the multifractal stochastic processes {#sec2}
====================================================
In this section we provide an overview of different scaling relations that are usually referred to as multifractality. Examples of processes that satisfy these properties are given in Section \[sec4\]. All the processes considered in this paper are assumed to be measurable, separable, nontrivial (in the sense that they are not a.s. constant) and stochastically continuous at zero, meaning that for every $\varepsilon>0$, $P(|X(h)|>\varepsilon)\to 0$ as $h\to 0$.
The best known scaling relation in the theory of stochastic processes is the self-similarity. A stochastic process $\{X(t), t \geq 0\}$ is said to be self-similar if for any $a>0$, there exists $b>0$ such that $$\{X(at)\} \overset{d}{=} \{b X(t)\},$$ where equality is in finite dimensional distributions. If $\{X(t)\}$ is self-similar, nontrivial and stochastically continuous at $0$, then $b$ must be of the form $a^H, a>0$, for some $H\geq 0$, i.e. $$\{X(at)\} \overset{d}{=} \{a^H X(t)\}.$$ A proof can be found in [@embrechts2002]. These weak assumptions are assumed to hold for every self-similar process considered in the paper. The exponent $H$ is usually called the Hurst parameter or index and we say $\{X(t)\}$ is $H$-ss and $H$-sssi if it also has stationary increments.\
Following [@MFC1997MMAR], the definition of a multifractal that we present first is motivated by generalizing the scaling rule of self-similar processes in the following manner:
\[defD\] A stochastic process $\{X(t)\}$ is multifractal if $$\label{mfdefgeneral}
\{X(ct)\} \overset{d}{=} \{M(c) X(t)\},$$ where for every $c>0$, $M(c)$ is a random variable independent of $\{X(t)\}$ whose distribution does not depend on $t$.
When $M(c)$ is non-random, then $M(c)=c^H$ and the definition reduces to $H$-self-similarity. The scaling factor $M(c)$ should satisfy the following property: $$\label{multiplicativeproperty}
M(ab) \overset{d}{=} M_1(a) M_2(b),$$ for every choice of $a$ and $b$, where $M_1$ and $M_2$ are independent copies of $M$. This is sometimes called log-infinite divisibility and a motivation for this property can be found in [@MFC1997MMAR]. In [@bacry2008continuous], the authors show that implies .\
However, instead of Definition , scaling is usually specified in terms of moments. The idea of extracting the scaling properties from average type quantities, like $L^p$ norm, dates back to the work of Frisch and Parisi ([@frisch1985fully]).
\[defM\] A stochastic process $\{X(t)\}$ is multifractal if there exist functions $c(q)$ and $\tau(q)$ such that $$\label{mfdefEq}
E|X(t)-X(s)|^q=c(q) |t-s|^{\tau(q)}, \quad \text{for all } t,s \in \mathfrak{T}, q \in \mathfrak{Q},$$ where $\mathfrak{T}$ and $\mathfrak{Q}$ are intervals on the real line with positive length and $0\in \mathfrak{T}$.
The function $\tau(q)$ is called the scaling function. Set $\mathfrak{Q}$ can also include negative reals. The definition can also be based on the moments of the process instead of the increments. If the increments are stationary, these definitions coincide. It is clear that if $\{X(t) \}$ is $H$-sssi then $\tau(q)=Hq$. One can also show that $\tau(q)$ must be concave. Strict concavity can hold only over a finite time horizon, otherwise $\tau(q)$ would be linear. This is not considered to be a problem for practical purposes (see [@MFC1997MMAR] for details). Since the scaling function is linear for self-similar processes, every departure from linearity can be attributed to multifractality. However for this reasoning to make sense, one must assume moment scaling to hold as otherwise self-similarity and multifractality are not complementary notions.
The drawback of involving moments in the definition is that they can be infinite. This narrows the applicability of the definition and as we show later, can hide the information about the singularity spectrum.
It is easy to see that under stationary increments the defining property along with the property implies multifractality Definition \[defM\]. Indeed, implies that $E|M(c)|^q$ must be of the form $c^{\tau(q)}$ and from $X(t) \overset{d}{=} M(t) X(1)$ the claim follows. One has to assume finiteness of the moments involved in order for the statements like to have sense. Also notice that both definitions imply $X(0)=0$ a.s. which will be used through the paper.
There exist many variations of the Definition \[defM\]. Some processes, like the classical multiplicative cascade, obey the definition only for small range of values $t$ or for asymptotically small $t$. The stationarity of increments can also be imposed. When referring to multifractality we will make clear which definition we mean. However we exclude the case of self-similar processes from the preceding definitions.\
Definition \[defM\] provides a simple criterion for detecting the multifractal property of the data set. Consider a stationary increments process $X(t)$ defined for $t \in [0,T]$ and suppose $X(0)=0$. Divide the interval $[0,T]$ into $\lfloor T / \Delta t \rfloor$ blocks of length $\Delta t$ and define the partition function (sometimes also called the structure function): $$\label{partitionfun}
S_q(T,\Delta t) = \frac{1}{\lfloor T / \Delta t \rfloor} \sum_{i=1}^{\lfloor T / \Delta t \rfloor} \left| X ( i \Delta t) - X ( (i-1) \Delta t) \right|^q.$$ If $\{ X(t) \}$ is multifractal with stationary increments then $E S_q(T,\Delta t)= E |X (\Delta t) |^q = c(q) {\Delta t}^{\tau(q)}$. So, $$\label{linearrelation}
\ln E S_q(T, \Delta t)=\tau(q) \ln {\Delta t} + \ln c(q).$$ One can also see $S_q(T, \Delta t)$ as the empirical counterpart of the left-hand side of .
As follows from , it makes sense to consider $\tau(q)$ as the slope of the linear regression of $\ln S_q(T, \Delta t)$ on $\ln {\Delta t}$. In practice, one should first check that relation is valid. See [@FCM1997multifractalityDEM; @anh2010simulation] for more details on this methodology. It was shown in [@GL] that a large class of processes behaves as the relation holds even though there is no exact moment scaling .
Suppose that the process is sampled at equidistant time points. We can assume these are the time points $1,\dots,T$ (see [@GL]). By choosing points $0\leq {\Delta t}_1 < \cdots < {\Delta t}_N \leq T$ and $q_j > 0$, $j=1,\dots,M$, based on the sample $X_1,\dots,X_T$ we can calculate $$\label{points}
\left\{ S_{q_j} (n, \Delta t_i) \ : \ i=1,\dots, N, j=1,\dots,M \right\}.$$ Suppose that it is checked that for fixed $q$ the points $(\ln \Delta t_i , \ln S_q(T, \Delta t))$, $i=1,\dots,n$ behave approximately linear. Using the well known formula for the slope of the linear regression line, we can define the empirical scaling function: $$\label{tauhat}
\hat{\tau}_{N,T}(q) = \frac{\sum_{i=1}^{N} \ln {\Delta t_i} \ln S_q(n,\Delta t_i) - \frac{1}{N} \sum_{i=1}^{N} \ln {\Delta t_i} \sum_{j=1}^{N} \ln S_q(n,\Delta t_i) }{ \sum_{i=1}^{N} \left(\ln {\Delta t_i}\right)^2 - \frac{1}{N} \left( \sum_{i=1}^{N} \ln {\Delta t_i} \right)^2 },$$ where $N$ is the number of time points chosen in the regression. For reference, we state the following property as a definition.
\[defE\] A stochastic process $\{X(t)\}$ is (empirically) multifractal if it has stationary increments and the empirical scaling function is non-linear.
Although the definition follows naturally from the moment scaling relation , it is not very common in the literature. Usually one tries to estimate the scaling function by using only the smallest time scale available. For example, for the cascade process on the interval $[0,T]$ the smallest interval is usually of the length $2^{-j}T$ for some $j$. One can then estimate the scaling function at point $q$ as $$\label{tauhatalternative}
\frac{\log_2 S_q(T,2^{-j}T)}{-j}.$$ Estimator estimates the scaling function across different time scales and is therefore more general than .
Spectrum of singularities
-------------------------
Preceding definitions involve “global” properties of the process. Alternatively, one can base the definition on the “local” scaling properties, such as roughness of the process sample paths measured by the pointwise Hölder exponents. There are different approaches on how to develop the notion of a multifractal function. First, we say that a function $f: [0,\infty) \to \mathbb{R}$ is $C^{\gamma}(t_0)$ if there exists constant $C>0$ such that for all $t$ in some neighborhood of $t_0$ $$|f(t)-f(t_0) | \leq C |t - t_0|^{\gamma}.$$ One can also define that $f$ is Hölder continuous at point $t_0$ if $|f(t)-P_{t_0} (t) | \leq C |t - t_0|^{\gamma}$ for some polynomial $P_{t_0}$ of degree at most $\lfloor \gamma \rfloor$. Two definitions coincide if $\gamma<1$. Therefore we will use the former one in this paper as in many cases we consider only functions for which $\gamma<1$ at any point. For more details see [@riedi1999multifractal].
A pointwise Hölder exponent of the function $f$ at $t_0$ is then $$\label{pointwiseHolder}
H(t_0)= \sup \left\{ \gamma : f \in C^{\gamma}(t_0) \right\}.$$ Consider sets $S_h=\{ t : H(t)=h \}$ where $f$ has the Hölder exponent of value $h$. These sets are usually fractal in the sense that they have non-integer Hausdorff dimension. Define $d(h)$ to be the Hausdorff dimension of $S_h$, using the convention that the dimension of an empty set is $-\infty$. Function $d(h)$ is called the spectrum of singularities (also multifractal or Hausdorff spectrum). We will refer to set of $h$ such that $d(h)\neq - \infty$ as the support of the spectrum. Function $f$ is said to be multifractal if support of its spectrum contains an interval of non-empty interior. This is naturally extended to stochastic processes:
\[defL\] A stochastic process $\{X(t)\}$ on some probability space $(\Omega, \mathcal{F}, P )$ is multifractal if for (almost) every $\omega \in \Omega$, $t \mapsto X(t,\omega)$ is a multifractal function.
When considered for a stochastic process, Hölder exponents are random variables and $S_h$ random sets. However in many cases the spectrum is deterministic ([@balanca2013]).
Multifractal formalism
----------------------
Multifractal formalism relates local and global scaling properties by connecting singularity spectrum with the scaling function via the Legendre transform: $$\label{formalism}
d(h)= \inf_q \left( hq - \tau(q) +1\right).$$ When $d(h)=-\infty$, $h$ is not the Hölder exponent, thus the convention that $\dim_{H}(\emptyset)=-\infty$. Since the Legendre transform is concave, the spectrum is always concave function, provided multifractal formalism holds. If the multifractal formalism holds, the spectrum can be estimated as the Legendre transform of the estimated scaling function.
Substantial work has been done to investigate when this formalism holds. The validity of the formalism depends which definition of $\tau$ one uses. Since it ensures that the spectrum can be estimated from computable global quantities, it is a desirable property of the object considered. This is the reason many authors seek for different definitions of global and local scaling properties that would always be related by a certain type of multifractal formalism.
The validity of the multifractal formalism is known to be narrow when the scaling function is based on the process increments ([@muzy1993multifractal]). It has been showed that a large class of processes can produce nonlinear scaling function and that this behaviour is influenced by the heavy tail index ([@GL]). These nonlinearities are not connected with the spectrum, except in the models that posses some scaling property. In many examples negative order moments can also produce concavity since in many models they are infinite. As we will show on the example of self-similar stationary increments processes, divergence of the negative order moments has nothing to do with the spectrum. Thus the estimated nonlinearity is merely an artefact of the estimation method. We propose a simple modification of the partition function that will make it more robust. On the other hand, nonlinearity that comes from diverging positive order moments is crucial in estimating the spectrum with . For self-similar processes, increments based partition function can capture these nonlinearities correctly.
Wavelets are considered to be the best approach to define multifractality. This is usually done by basing the definition of the partition function on the wavelet decomposition of the process (see e.g. [@riedi1999multifractal; @audit2002wavelet]). This leads to different methods for multifractal analysis based on wavelets. However, this type of definition is also sensitive to diverging moments as has been noted in [@gonccalves2005diverging], where the wavelet based estimator of the tail index is proposed. Scaling based on the wavelet coefficients is also unable to yield a full spectrum of singularities. In [@jaffard2004wavelet], the formalism based on wavelet leaders has been proposed. This in some sense resembles the method we propose in Section \[sec5\], although our motivation comes from the results given in the next section.
On the other hand, one can also replace the definition of the spectrum to achieve multifractal formalism. For other definitions of the local scaling, such as the one based on the so-called coarse Hölder exponents, see e.g. [@riedi1999multifractal; @CFM1997large].
The choice of the range over which the infimum in is taken can also be a subject of discussion. From the statistical point of view, moments of negative order are not usually investigated. Sometimes $\tau(q)$ is calculated only for $q>0$ and can therefore yield only left (increasing) part of the spectrum. For more details see [@riedi1999multifractal; @jaffard1999].
Bounds on the support of the spectrum {#sec3}
=====================================
The fractional Brownian motion (FBM) is a Gaussian process $\{B_H(t)\}$, which starts at zero, has zero expectation for every $t$ and the following covariance function $$E B_H(t) B_H(s) = \frac{1}{2} \left( |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right), \quad H \in (0,1).$$ If $H=1/2$, FBM is the standard Brownian motion (BM). FBM is $H$-sssi and has a trivial spectrum consisting of only one point, i.e. $d(H)=1$, and $d(h)=-\infty$ for $h\neq H$. So there is no doubt that FBM is self-similar and not multifractal in the sense of all definitions considered. However some self-similar processes have nontrivial spectrum. Our goal in this section is to identify the property of the process that makes the spectrum nontrivial.
We do this by deriving the bounds on the support of the spectrum. The lower bound is a consequence of the well-known Kolmogorov’s continuity theorem. For the upper bound we prove a sort of complement of this theorem.
Before we proceed, we fix the following notation for some general process $\{X(t), t \in [0,T]\}$. We denote the range of finite moments as $\mathfrak{Q}=(\underline{q},\overline{q})$, i.e. $$\label{qLU}
\begin{aligned}
\overline{q} &= \sup \{ q >0 : E|X(T)|^q < \infty \},\\
\underline{q} &= \inf \{ q <0 : E|X(T)|^q < \infty \}.
\end{aligned}$$ If $\{X(t)\}$ is multifractal in the sense of Definition \[defM\] with the scaling function $\tau$ define $$\label{H-+}
\begin{aligned}
H^- &= \sup \left\{ \frac{\tau(q)}{q} - \frac{1}{q} : q \in (0, \overline{q}) \ \& \ \tau(q)>1 \right\},\\
\widetilde{H^+} &= \inf \left\{ \frac{\tau(q)}{q} - \frac{1}{q} : q \in (\underline{q},0) \ \& \ \tau(q)<1 \right\}.
\end{aligned}$$
The lower bound
---------------
Using the well known Kolmogorov’s criterion it is easy to derive the lower bound on the support of the spectrum. The proof of the following theorem can be found in [@karatzas1991brownian Theorem 2.8].
\[thm:Kolmogorov-Chentsov\] Suppose that a process $\{X(t), t \in [0,T]\}$ satisfies $$\label{kolmomcrit}
E|X(t)-X(s)|^{\alpha} \leq C |t-s|^{1+\beta},$$ for some positive constants $\alpha,\beta,C$. Then there exists a modification $\{\tilde{X}(t), t \in [0,T]\}$ of $\{X(t)\}$, which is locally Hölder continuous with exponent $\gamma$ for every $\gamma \in (0,\beta/\alpha)$. This means that there exists some a.s. positive random variable $h(\omega)$ and constant $\delta>0$ such that $$P \left( \omega : \sup_{|t-s|<h(\omega), \ s,t \in [0,T]} \frac{|\tilde{X}(t,\omega)-\tilde{X}(s,\omega)|}{|t-s|^{\gamma}} \leq \delta \right)=1.$$
\[prop1\] Suppose $\{X(t), t \in [0,T]\}$ is multifractal in the sense of Definition \[defM\]. If for some $q>0$, $E|X(T)|^q<\infty$ and $\tau(q)>1$, then there exists a modification of $\{X(t)\}$ which is locally Hölder continuous with exponent $\gamma$ for every $$\gamma \in \left(0,\frac{\tau(q)}{q} - \frac{1}{q} \right).$$ In particular, there exist a modification such that for almost every sample path, $$H^- \leq H(t) \quad \text{ for each } t \in [0,T],$$ where $H(t)$ is defined by and $H^-$ by .
This is a simple consequence of Theorem \[thm:Kolmogorov-Chentsov\] since Definition \[defM\] implies $$E|X(t)-X(s)|^q=c(q) |t-s|^{1+(\tau(q)-1)}.$$ Fixing $s$ in the definition of the local Hölder exponent gives the pointwise Hölder exponent.
In the sequel we always suppose to work with the modification from Proposition \[prop1\]. We can conclude that the spectrum $d(h)=-\infty$ for $h \in (0,H^-)$. This way we can establish an estimate for the left endpoint of the interval where the spectrum is defined. It also follows that if the process is $H$-sssi and has finite moments of every positive order, then $H^-=H\leq H(t)$. Thus, when moment scaling holds, path irregularities are closely related with infinite moments of positive order. We make this point stronger later.
Theorem \[thm:Kolmogorov-Chentsov\] is valid for general stochastic processes. Although moment condition is appealing, the condition needed for the proof of Theorem \[thm:Kolmogorov-Chentsov\] can be stated in a different form. If we assume stationarity of the increments, other forms can also be derived. Some of them may seem strange at the moment but will prove to be useful later on.
\[lemma:kolconditions\] Suppose that $\{X(t), t \in [0,T]\}$ is a stochastic process. Then there exists a modification of $\{X(t)\}$ which is a.s. locally Hölder continuous of order $\gamma>0$ if any of the following holds:
(i) for some $\eta>1$ it holds that for every $s\in [0,T)$ and $C>0$ $$P \left( \left| X(s+t) - X(s) \right| \geq C t^{\gamma} \right) = O(t^{\eta}), \quad \text{ as } t \to 0,$$
(ii) for some $m \in \mathbb{N}$, $\eta>1$ it holds that for every $s\in [0,T)$ and $C>0$ $$P \left(\max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \geq C t^{\gamma} \right) = O(t^{\eta}), \quad \text{ as } t \to 0$$
(iii) for some $m \in \mathbb{N}$, $\alpha>0$ and $\beta > \alpha \gamma + 1$ it holds that for every $s\in [0,T)$ $$E \left[ \max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \right]^{\alpha} = O \left( t^{ \beta} \right), \quad \text{ as } t \to 0.$$
If $\{X(t)\}$ has stationary increments it is enough to consider only $s=0$.
That $(i)$ is sufficient is obvious from the proof of Theorem \[thm:Kolmogorov-Chentsov\]; see [@karatzas1991brownian Theorem 2.8]. Since $m$ is fixed it is easy to see that $(ii)$ implies $(i)$. That $(iii)$ implies $(ii)$ follows from the Chebyshev’s inequality.
The upper bound
---------------
It is considered that the negative order moments determine the right part of the spectrum. We show that this is only partially true, as this depends on whether the negative order moments are finite. To establish the bound on the right endpoint of the spectrum, one needs to show that sample paths are nowhere Hölder continuous of some order $\gamma$, i.e. that a.s. $t \mapsto X_t \notin C^{\gamma}(t_0)$ for each $t_0\in [0,T]$. To show this we first use a criterion based on the negative order moments, similar to . The resulting theorem can be seen as a sort of a complement of the Kolmogorov-Chentsov theorem. We then apply this to moment scaling multifractals to get an estimate for the support of the spectrum.
\[thm:ComplementKolmogorov-Chentsov\] Suppose that a process $\{X(t), t \in [0,T]\}$ defined on some probability space $(\Omega,\mathcal{F}, P)$ satisfies $$\label{kolmomcritnega}
E|X(t)-X(s)|^{\alpha} \leq C |t-s|^{1+\beta},$$ for all $t,s \in [0,T]$ and for some constants $\alpha<0$, $\beta<0$ and $C>0$. Then, for $P$-a.e. $\omega\in \Omega$ it holds that for each $\gamma > \beta/\alpha$ the path $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$.
If suffices to prove the statement by fixing arbitrary $\gamma > \beta/\alpha$. Indeed, this would give events $\Omega_{\gamma}$, $P(\Omega_{\gamma})=0$ such that for $\omega \in \Omega \backslash \Omega_{\gamma}$, $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$. If $\Omega_0$ is the union of $\Omega_{\gamma}$ over all $\gamma \in (\beta/\alpha, \infty) \cap \mathbb{Q}$, then $\Omega_0 \in \mathcal{F}$, $P(\Omega_0)=0$ and $\Omega \backslash \Omega_0$ would fit the statement of the theorem.
For notational simplicity, we assume $T=1$. For $j,k\in \mathbb{N}$ define the set $$M_{jk} := \bigcup_{t\in [0,1]} \bigcap_{h \in [0,1/k]} \left\{ \omega \in \Omega : |X_{t+h}(\omega) - X_t(\omega)| \leq j h^{\gamma} \right\}.$$ It is clear that if $\omega \notin M_{jk}$ for every $j,k \in \mathbb{N}$, then $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$. As there is countably many $M_{jk}$, it is enough to fix arbitrary $j,k \in \mathbb{N}$ and show that $M_{jk} \subset A$ for some $A\in \mathcal{F}$ such that $P(A)=0$.
Suppose $n>2 k$ and $\omega \in M_{jk}$. Then there is some $t \in [0,1]$ such that $$\label{doktm2pom1}
|X_{t+h}(\omega)-X_t(\omega) | \leq j h^{\gamma}, \quad \text{ for all } h\in [0,1/k].$$ Take $i \in \{1,\dots,n\}$ such that $$\label{doktm2pom2}
\frac{i-1}{n} \leq t < \frac{i}{n}.$$ Then since $n>2 k$ we have $$0 \leq \frac{i}{n} - t < \frac{i+1}{n} - t \leq \frac{i+1}{n} - \frac{i-1}{n} = \frac{2}{n},$$ and from it follows $$|X_{\frac{i+1}{n}}(\omega)-X_{\frac{i}{n}}(\omega) | \leq |X_{\frac{i+1}{n}}(\omega) - X_t( \omega)| + |X_t( \omega)-X_{\frac{i}{n}}(\omega) | \leq 2 j n^{-\gamma}.$$ Put $A_i^{(n)}=\left\{ |X_{\frac{i+1}{n}}-X_{\frac{i}{n}} | \leq 2 j n^{-\gamma} \right\}$. Since $\omega$ was arbitrary it follows that $$M_{jk} \subset \bigcup_{i=1}^n A_i^{(n)}.$$ Using Chebyshev’s inequality for $\alpha<0$ and the assumption of the theorem we get $$\label{condinproof}
\begin{aligned}
P(A_i^{(n)}) &\leq \frac{E |X_{\frac{i+1}{n}}-X_{\frac{i}{n}} |^{\alpha} }{(2j)^{\alpha} n^{-\gamma \alpha}} \leq C (2j)^{-\alpha} n^{\gamma \alpha - 1 - \beta},\\
P \left(\bigcup_{i=1}^n A_i^{(n)} \right) &\leq \sum_{i=1}^n P(A_i^{(n)}) \leq C (2j)^{-\alpha} n^{-(\beta - \gamma \alpha)}.
\end{aligned}$$ If we set $$A = \bigcap_{n > k} \bigcup_{i=1}^n A_i^{(n)},$$ then $A \in \mathcal{F}$ and $M_{jk} \subset A$. Since $\gamma> \beta / \alpha$, it follows that $\beta- \gamma \alpha>0$ and hence $P(A)=0$. This proves the theorem.
\[prop2\] Suppose $\{X(t), t \in [0,T]\}$ is multifractal in the sense of Definition \[defM\]. If for some $q<0$, $E|X(T)|^q<\infty$ and $\tau(q)<1$, then almost every sample path of $\{X(t)\}$ is nowhere Hölder continuous of order $\gamma$ for every $$\gamma \in \left(\frac{\tau(q)}{q} - \frac{1}{q}, \ +\infty \right).$$ In particular, for almost every sample path, $$H(t) \leq \widetilde{H^+} \quad \text{ for each } t \in [0,T].$$
Definition \[defM\] implies $$E|X(t)-X(s)|^q=c(q) |t-s|^{1+(\tau(q)-1)}.$$ Since $q<0$, $\tau(q)<0$ the statement follows from Theorem \[thm:ComplementKolmogorov-Chentsov\].
This proposition shows that $d(h)=-\infty$ for $h \in (\widetilde{H^+},\infty)$. Recall that $\widetilde{H^+}$ is defined in .
\[remark2\] Statements like the ones in the Proposition \[prop1\] and \[prop2\] are stronger than saying, for example, that for every $t \in [0,T]$, $H(t) \leq C$ almost surely. Indeed, an application of the Fubini’s theorem would yield that for almost every path, $H(t) \leq C$ for almost every $t$. If we put $h=C + \delta$, then the Lebesgue measure of the set $S_h=\{ t : H(t)=h \}$ is zero a.s. This, however, does not imply that $d(h)=-\infty$ and hence it is impossible to say something about the spectrum of almost every sample path. On the other hand, it is clear that this type of statements are implied by Propositions \[prop1\] and \[prop2\].
For the example of this weaker type of the bound, consider $\{X(t), t \in [0,T]\}$ multifractal in the sense of Definition \[defM\]. If for some $q<0$, $E|X(t)|^q<\infty$, then for every $t \in [0,T]$ $$H(t) \leq \frac{\tau(q)}{q} \text{ a.s.}$$ Indeed, let $\delta>0$ and suppose $C>0$. Since $q<0$, by the Chebyshev’s inequality $$P \left( \left| X(t+\varepsilon) - X(t) \right| \leq C \varepsilon^{\frac{\tau(q)}{q}+\delta} \right) \leq \frac{E \left| X(t+\varepsilon) - X(t) \right|^q }{C^q \varepsilon^{\tau(q)+\delta q}} = \frac{c(q)}{C^q \varepsilon^{\delta q}} \to 0,$$ as $\varepsilon \to 0$. We can choose a sequence $(\varepsilon_n)$ that converges to zero such that $$P \left( \left| X(t+\varepsilon_n) - X(t) \right| \leq C \varepsilon_n^{\frac{\tau(q)}{q}+\delta} \right) \leq \frac{1}{2^n}.$$ Now, by the Borel-Cantelli lemma $$\frac{\left| X(t+\varepsilon_n) - X(t) \right|}{\varepsilon_n^{\frac{\tau(q)}{q}+\delta}} \to \infty \ \ a.s., \text{ as } n \to \infty.$$ Thus for arbitrary $\delta>0$ it holds that for every $t$, $H(t) \leq \frac{\tau(q)}{q}+\delta$ a.s. However this result does not allow us to say anything about the spectrum.
Consider for the moment the FBM. The range of finite moments is $(-1,\infty)$ and $\tau(q)=Hq$ for $q \in (-1,\infty)$, so we have $\widetilde{H^+}=H+1$. Thus, the best we can say from Proposition \[prop2\], is that $d(h)=-\infty$ for $h > H+1$. However we know that $d(h)=-\infty$ for $h > H$. If the bound $\widetilde{H^+}$ could be considered over all negative order moments, we would get exactly the right endpoint of the support of the spectrum.
The fact that the bound derived in Proposition \[prop2\] is not sharp enough for some examples points that negative order moments may not be the right paradigm to explain the spectrum. We therefore provide more general conditions that do not depend on the finiteness of moments. First of them is obvious from the proof of Theorem \[thm:ComplementKolmogorov-Chentsov\], Equation .
\[lemma2\] Suppose that $\{X(t), t \in [0,T]\}$ is a stochastic process. Then almost every sample path of $\{X(t)\}$ is nowhere Hölder continuous of order $\gamma>0$ if for every $s\in [0,T]$ and $C>0$ $$P \left( \left| X(s+t) - X(s) \right| \leq C t^{\gamma} \right) = O(t^{\eta}), \quad \text{ as } t \to 0.$$ with some $\eta>1$. If the increments are stationary it is enough to take $s=0$.
\[thm3\] Let $\{X(t), t \in [0,T]\}$ be a stochastic process defined on some probability space $(\Omega,\mathcal{F}, P)$. Suppose that for some $\gamma>0$, $\eta>1$, $m \in \mathbb{N}$ it holds that for every $s\in [0,T]$ and $C>0$ $$\label{thm3condition}
P \left( \max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \leq C t^{\gamma} \right) = O \left( t^{\eta} \right), \quad \text{as } t \to 0.$$ Then, for $P$-a.e. $\omega\in \Omega$ the path $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$. In the stationary increments case it is enough to consider $s=0$.
The first part of the proof goes exactly as in the proof of Theorem \[thm:ComplementKolmogorov-Chentsov\]. Fix $j,k \in \mathbb{N}$ and take $n \in \mathbb{N}$ such that $$n > (m+1) k.$$ If $\omega \in M_{jk}$, then there is some $t \in [0,1]$ and $i \in \{1,\dots,n\}$ such that and hold. Choice of $n$ ensures that for $l \in \{1,\dots,m\}$ $$\label{}
0< \frac{i+l-1}{n}-t < \frac{i+l}{n}-t < \frac{i-l}{n}-t + \frac{l+1}{n} \leq \frac{l+1}{n} \leq \frac{1}{k}.$$ It follows from that for each $l \in \{1,\dots,m\}$ $$|X_{\frac{i+l}{n}}(\omega)-X_{\frac{i+l-1}{n}}(\omega) | \leq j \left( \frac{l+1}{n} \right)^{\gamma} + j \left( \frac{l}{n} \right)^{\gamma} \leq 2 j \left( \frac{m+1}{n} \right)^{\gamma}.$$ Denote $$\begin{aligned}
A_{i,l}^{(n)} &=\left\{ |X_{\frac{i+l}{n}}-X_{\frac{i+l-1}{n}} | \leq 2 j \left( \frac{m+1}{n} \right)^{\gamma} \right\},\\
A_{i}^{(n)} &= \bigcap_{l=1}^m A_{i,l}^{(n)}.\end{aligned}$$ It then follows that $$M_{jk} \subset \bigcup_{i=1}^n A_i^{(n)}.$$ From the assumption we have $$\begin{aligned}
P(A_i^{(n)}) &= P \left( \max_{l=1,\dots,m} |X_{\frac{i+l}{n}}-X_{\frac{i+l-1}{n}} | \leq 2 j (m+1)^{\gamma} \left( \frac{1}{n} \right)^{\gamma} \right) \leq C n^{-\eta},\\
P \left(\bigcup_{i=1}^n A_i^{(n)} \right) &\leq \sum_{i=1}^n P(A_i^{(n)}) \leq C_1 n^{-(\eta - 1)}.\end{aligned}$$ Now setting $$A = \bigcap_{n > k} \bigcup_{i=1}^n A_i^{(n)} \in \mathcal{F},$$ it follows that $P(A)=0$, since $\eta>1$.
Theorem \[thm3\] enables one to avoid using moments in deriving the bound. As an example, we consider how Theorem \[thm3\] can be applied in the simple case when $\{X(t)\}$ is the BM. Since $\{X(t)\}$ is $1/2$-sssi we have $$P \left( \max_{l=1,\dots,m} \left| X(lt) - X((l-1)t) \right| \leq C t^{\gamma} \right) = P \left( \max_{l=1,\dots,m} \left| X(l) - X(l-1) \right| \leq C t^{\gamma-1/2} \right).$$ Due to independent increments, then $$P \left( \max_{l=1,\dots,m} \left| X(l) - X(l-1) \right| \leq C t^{\gamma-1/2} \right) \leq C_1 t^{m(\gamma-1/2)},$$ This holds for every $\gamma>1/2$ and $m \in \mathbb{N}$ and by taking $m>1/(\gamma-1/2)$ we conclude $d(h)=-\infty$ for $h>1/2$.
Before we proceed on applying these results, we state the following simple corollary that expresses the criterion in terms of negative order moments, but now moments of the maximum of increments. This is a generalization of Theorem \[thm:ComplementKolmogorov-Chentsov\] that enables bypassing infinite negative order moments under very general conditions. From this criterion we derive in the next subsection, strong statements about the $H$-sssi processes.
\[newComplementKC\] Suppose that a process $\{X(t), t \in [0,T]\}$ defined on some probability space $(\Omega,\mathcal{F}, P)$ satisfies $$\label{thmmomentcond}
E \left[ \max_{l=1,\dots,m} \left| X(s + lt) - X(s + (l-1)t) \right| \right]^{\alpha} \leq C t^{1+\beta},$$ for all $t,s \in [0,T]$ and for some $\alpha<0$, $\beta<0$, $m \in \mathbb{N}$ and $C>0$. Then, for $P$-a.e. $\omega\in \Omega$ it holds that for each $\gamma > \beta/\alpha$ the path $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$.
This follows directly from the Chebyshev’s inequality for negative order moments and Theorem \[thm3\] $$\begin{aligned}
&P \left( \max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \leq C t^{\gamma} \right)\\
&\quad \leq C^{-\alpha} t^{-\gamma \alpha} E \left[ \max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \right]^{\alpha} = O \left( t^{-\alpha \gamma+1+\beta} \right),\end{aligned}$$ since $1+\beta- \alpha \gamma >1$.
The case of the self-similar stationary increments processes
------------------------------------------------------------
In this subsection we refine our results for the case of the $H$-sssi processes by using Corollary \[newComplementKC\]. These results can also be viewed in the light of the classical papers [@vervaat1985sample; @takashima1989sample]. To be able to do this, we need to make sure that the moment in can indeed be made finite by choosing $m$ large enough. We state this condition explicitly for reference.
\[C1\] Suppose $\{X(t),t\geq 0 \}$ is a stationary increments process. For every $\alpha<0$ there is $m_0 \in \mathbb{N}$ such that $$E \left[ \max_{l=1,\dots,m_0} \left| X(l) - X(l-1) \right| \right]^{\alpha} < \infty.$$
One way of assessing the Condition \[C1\] is given in the following lemma which is weak enough to cover all the examples considered later. Recall the definition of the range of finite moments $\underline{q}$ and $\overline{q}$ given in .
\[lemma:condition1\] Suppose $\{X(t),t\geq 0 \}$ is stationary increments process which is ergodic in the sense that if $E |f(X_1)| < \infty$ for some measurable $f$, then $$\frac{\sum_{l=1}^m f(X_l-X_{l-1}) }{m} \overset{a.s.}{\to} E f(X_1), \ \text{ as } m \to \infty.$$ Suppose also that $\underline{q}<0$. Then Condition \[C1\] holds.
Let $r<0$ be such that $E|X(1)-X(0)|^{r} <\infty$. Then $$\begin{aligned}
\inf_{l\in \mathbb{N}} |X(l) - X(l-1)|^r &= \lim_{m\to \infty} \min_{l=1,\dots,m} |X(l) - X(l-1)|^r \\
& \leq \lim_{m\to \infty} \frac{\sum_{l=1}^m |X(l) - X(l-1)|^r }{m} = E |X(1)-X(0)|^r=:M, \ \text{ a.s.}\end{aligned}$$ So, $$\inf_{l\in \mathbb{N}} |X(l) - X(l-1)|^{\alpha} = \left( \inf_{l\in \mathbb{N}} |X(l) - X(l-1)|^{r}\right)^{\frac{\alpha}{r}} = \left( \inf_{l\in \mathbb{N}} |X(l) - X(l-1)|^r \right)^{\frac{\alpha}{r}} \leq M^{\frac{\alpha}{r}}, \ \text{ a.s.}$$ and $\inf_{l\in \mathbb{N}} |X(l) - X(l-1)|^{\alpha} $ is bounded and thus has finite expectation. Given $\alpha<0$ we can choose $m_0$ such that $$\begin{aligned}
&\left[ \max_{l=1,\dots,m_0} \left| X(l) - X(l-1) \right| \right]^{\alpha} = \left[ \frac{1}{\max_{l=1,\dots,m_0} \left| X(l) - X(l-1) \right|} \right]^{-\alpha}\\
&=\left[ \min_{l=1,\dots,m_0} \frac{1}{\left| X(l) - X(l-1) \right|} \right]^{-\alpha} = \min_{l=1,\dots,m_0} |X(l) - X(l-1)|^{\alpha} \leq M^{\frac{\alpha}{r}}, \ \text{ a.s.}\end{aligned}$$ which implies the statement.
Two examples may provide insight of how far the assumptions of Lemma \[lemma:condition1\] are from Condition \[C1\]. If $X(t)=tX$ for some random variable $X$, then $\max_{l=1,\dots,m} \left| X(l) - X(l-1) \right|=X$ and thus Condition \[C1\] depends on the range of finite moments of $X$. For the second example, suppose $X(l)-X(l-1)$ is an i.i.d. sequence such that $P(|X(1)-X(0)| \leq x) \sim 1/ \ln x$ as $x \to 0$. This implies, in particular, that $E|X(1)-X(0)|^r=\infty$ for any $r<0$. Moreover, $$\begin{aligned}
E \left[ \max_{l=1,\dots,m} \left| X(l) - X(l-1) \right| \right]^{\alpha} &= - \int_0^{\infty} \alpha y^{\alpha-1} P(\max_{l=1,\dots,m} \left| X(l) - X(l-1) \right| \leq y ) dy\\
&= - \int_0^{\infty} \alpha y^{\alpha-1} \frac{1}{(\ln y )^m} dy = \infty,\end{aligned}$$ for every $\alpha<0$ and $m \in \mathbb{N}$, thus Condition \[C1\] does not hold.
We are now ready to prove a general theorem about the $H$-sssi processes.
\[thm:boundsHsssi\] Suppose $\{X(t), t \in [0,T]\}$ is $H$-sssi stochastic process such that Condition \[C1\] holds and $H-1/\overline{q}\geq 0$. Then, for almost every sample path, $$H-\frac{1}{\overline{q}} \leq H(t) \leq H \quad \text{ for each } t \in [0,T].$$ Moreover, $d(H)=1$ a.s.
By the argument at the beginning of the proof of Theorem \[thm:ComplementKolmogorov-Chentsov\] it is enough to take arbitrary $\gamma>H$. Given $\gamma$ we take $\alpha<1/(H-\gamma)<0$ which implies $\gamma > H -1/\alpha$. Due to Condition \[C1\], we can choose $m_0\in \mathbb{N}$ such that $E \left[ \max_{l=1,\dots,m_0} \left| X(lt) - X((l-1)t) \right| \right]^{\alpha} < \infty$. Self-similarity then implies that $$E \left[ \max_{l=1,\dots,m_0} \left| X(lt) - X((l-1)t) \right| \right]^{\alpha} = t^{H \alpha} E \left[ \max_{l=1,\dots,m_0} \left| X(l) - X(l-1) \right| \right]^{\alpha} = C t^{1+ (H \alpha-1)}.$$ The claim now follows immediately from Corollary \[newComplementKC\] with $\beta=H \alpha - 1$ since $\gamma > \beta/\alpha$.
That $H(t) \geq H-1/\overline{q}$ follows from Proposition \[prop1\]. Since $E|X(t)|^q<\infty$ for some $q<0$ it follows from Remark \[remark2\] that for every $t\in [0,T]$, $H(t)\leq H$ a.s. On the other hand, taking $0<q<\overline{q}$ we can get that for $\delta>0$ and $C>0$ $$P \left( \left| X(t+\varepsilon) - X(t) \right| \geq C \varepsilon^{H-\delta} \right) \leq \frac{E \left| X(t+\varepsilon) - X(t) \right|^q }{C^q \varepsilon^{Hq-\delta q}} = \frac{c(q)}{C^q \varepsilon^{-\delta q}} \to 0,$$ as $\varepsilon \to 0$. The same arguments as in Remark \[remark2\] imply that for every $t\in [0,T]$, $H(t)\geq H$ a.s. By Fubini’s theorem it follows that a.s. for almost every $t\in [0,T]$ $H(t)=H$. Thus the Lebesgue measure of the set $S_H=\{ t : H(t)=H \}$ is $1$ and so $d(H)=1$.
A simple consequence of the preceding is the following statement.
\[cor:Hsssi\] Suppose that Condition \[C1\] holds. A $H$-sssi process with all positive order moments finite must have trivial spectrum, i.e. $d(h)=-\infty$ for $h\neq H$.
This applies to FBM, but also to all Hermite processes, like e.g. Rosenblatt process (see Section \[sec4\]). Thus, under very general conditions a self-similar stationary increments process with a nontrivial spectrum must be heavy-tailed. This shows clearly how infinite moments can affect path properties when scaling holds. The following simple result shows this more precisely.
Suppose $\{X(t)\}$ is $H$-sssi. If $\gamma<H$ and $d(\gamma)\neq - \infty$, then $E|X(1)|^{q}=\infty$ for $q>1/(H-\gamma)$.
Suppose $E|X(t)|^{q}<\infty$ for $q>1/(H-\gamma)$. Then for $\varepsilon>0$ we can apply Chebyshev’s inequality to get $$P \left( |X(t)| \geq C t^{\gamma} \right) = P \left( |X(1)| \geq C t^{\gamma-H} \right) \leq \frac{E \left|X(1)\right|^{\frac{1}{H-\gamma}+\varepsilon} }{t^{-1-\varepsilon(H-\gamma)}}=O(t^{1+\varepsilon(H-\gamma)}).$$ By Theorem \[thm:Kolmogorov-Chentsov\] and Lemma \[lemma:kolconditions\] this implies $d(\gamma)=-\infty$, which is a contradiction.
The case of the multifractal processes {#subsec3.4}
--------------------------------------
Our next goal is to show that in the definition of $\widetilde{H^+}$ one can essentially take the infimum over all $q<0$. At the moment this makes no sense as $\tau$ from Definition \[defM\] may not be defined in this range. It is therefore necessary to redefine the meaning of the scaling function. We therefore work with the more general Definition \[defD\].
In the next section we will see on the example of the log-normal cascade process that when the multifractal process has all negative order moments finite, the bound derived in Proposition \[prop2\] is sharp. In general this would not be the case for any multifractal in the sense of Definition \[defD\]. Take for example a multifractal random walk (MRW), which is a compound process $X(t)=B(\theta(t))$ where $B$ is BM and $\theta$ is the independent cascade process, say log-normal cascade (see [@bacry2003]). By the multifractality of the cascade for $t<1$, $\theta(t)=^d M(t) \theta(1)$ and multifractality of MRW implies $X(t)=^d (M(t) \theta(1))^{1/2} B(1)$. Now by independence of $B$ and $\theta$, if $E|B(1)|^q=\infty$ then $E|X(t)|^q=\infty$. Since $B(1)$ is Gaussian, moments will be infinite for $q \leq -1$.
We thus provide a more general bound which only has a restriction on the moments of the random factor from the Definition \[defD\]. Therefore, if the process satisfies Definition \[defD\] and if the random factor $M$ is multifractal by Definition \[defM\] with scaling function $\tau$, we define $$H^+ = \min \left\{ \frac{\tau(q)}{q} - \frac{1}{q} : q < 0 \ \& \ E|M(t)|^q<\infty \right\}.$$
\[cor:upperboundMF\] Suppose $\{X(t), t \in [0,T]\}$ is defined on some probability space $(\Omega,\mathcal{F}, P)$, has stationary increments and Condition \[C1\] holds. Suppose also it is multifractal by Definition \[defD\] and the random factor $M$ satisfies Definition \[defM\] with the scaling function $\tau(q)$. If $E|M(T)|^{q} < \infty$ for $q<0$, then for $P$-a.e. $\omega\in \Omega$ it holds that for each $$\gamma > \frac{\tau(q)}{q}-\frac{1}{q}$$ the path $t \mapsto X_t(\omega)$ is nowhere Hölder continuous of order $\gamma$.\
In particular, for almost every sample path, $$H(t) \leq H^+ \quad \text{ for each } t \in [0,T].$$
By Condition \[C1\] for $m$ large enough it follows from the multifractal property that $$E \left[ \max_{l=1,\dots,m} \left| X(lt) - X((l-1)t) \right| \right]^{q} = E |M(t)|^{q} E \left[ \max_{l=1,\dots,m} \left| X(l) - X(l-1) \right| \right]^{q} = C t^{1 + \tau(q)-1}.$$ The claim now follows from Corollary \[newComplementKC\] with $\alpha=q$ and $\beta=\tau(q)-1$ and by the argument at the beginning of the proof of Theorem \[thm:ComplementKolmogorov-Chentsov\].
In summary, we provide bounds on the support of the multifractal spectrum. We show that the lower bound can be derived using positive order moments and link the non-existing moments with the path properties for the case of $H$-sssi process. In general, negative order moments are not appropriate for explaining the right part of the spectrum. To derive an upper bound on the support of the spectrum, we use negative order moments of the maximum of increments. This avoids the non existence of the negative order moments which is a property of the distribution itself.
Examples {#sec4}
========
In this section we list several examples of stochastic processes and investigate if the Definitions \[defD\]-\[defL\] hold. We show how the results of Section \[sec3\] apply in these cases and also discuss how the multifractal formalism could be achieved. Definitions and further details on the processes considered are given in the Appendix.
Self-similar processes
----------------------
It follows from Theorem \[thm:boundsHsssi\] and Corollary \[cor:Hsssi\] that if $H$-sssi process is ergodic with finite positive order moments, then the spectrum is simply $$d(h) =
\begin{cases}
1, & \text{if } \ h=H\\
-\infty, & \text{otherwise}.
\end{cases}$$ This applies to all Hermite processes, e.g. BM, FBM and Rosenblatt process. Indeed, Hermite processes have all positive order moments finite and the increments are ergodic (see e.g. [@samorodnitsky2007long Section 7]). We now discuss heavy tailed examples of $H$-sssi processes.
### Stable processes
Suppose $\{ X(t)\}$ is an $\alpha$-stable Lévy motion. $\{ X(t)\}$ is $1/\alpha$-sssi and moment scaling holds but makes sense only for a range of finite moments, that is for $\mathfrak{Q}=(-1,\alpha)$ in Definition \[defM\]. For this range of $q$, $\tau(q)=q/\alpha$ and the process is self-similar. Due to infinite moments beyond order $\alpha$ the empirical scaling function will asymptotically behave for $q>0$ as $$\tau_{\infty}(q)=
\begin{cases}
\frac{q}{\alpha}, & \text{if } 0<q\leq \alpha,\\
1, & \text{if } q>\alpha.
\end{cases}$$ See [@GL] for the precise result. Non-linearity points to multifractality in the sense of Definition \[defE\]. The spectrum of singularities is given by ([@jaffard1999]): $$d(h) =
\begin{cases}
\alpha h, & \text{if } \ h \in [0, 1/\alpha],\\
-\infty, & \text{if } \ h \in (1/\alpha,+\infty].
\end{cases}$$ Hence the spectrum is nontrivial and supported on $[0, 1/\alpha]$. These are exactly the bounds given in Theorem \[thm:boundsHsssi\] as in this case $H=1/\alpha$. We stress that even self-similar processes can have multifractal paths and that this is closely related with infinite moments.
We now discuss which form of the scaling function would yield the multifractal spectrum via the Legendre transform. This will highly depend on the range of $q$ over which the infimum in the Legendre transform is taken. For example if we take infimum over $q \in [0,\alpha]$, then we get the correct spectrum from Definitions \[defM\] and \[defE\]. Since in practice $\alpha$ is unknown, one can take infimum over $q \in [0,+\infty)$. In this case Definition \[defE\] yields the formalism, i.e. $$d(h)= \inf_{q \in [0,\infty)} \left( hq - \tau_{\infty}(q) +1\right).$$ So even though the moments beyond order $\alpha$ are infinite, estimating infinite moments with the partition function can lead to the correct spectrum of singularities.
### Linear fractional stable motion
In the same manner we treat linear fractional stable motion (LFSM) (see Appendix for the definition). Dependence introduces a new parameter in the scaling relations and the spectrum. LFSM $\{ X(t)\}$ is $H$-sssi, thus is not multifractal in the sense of Definition \[defD\]. For the range of finite moments $\mathfrak{Q}=(-1,\alpha)$, Definition \[defM\] holds with $\tau(q)=Hq$. In this sense process is self-similar. As follows from the results of [@GLT] (see also [@HeydeSly2008]), empirical scaling function asymptotically behaves for $q>-1$ as $$\label{LFSMtau}
\tau_{\infty}(q)=
\begin{cases}
Hq, & \text{if } 0<q\leq \alpha,\\
1+q(H-\frac{1}{\alpha}), & \text{if } q>\alpha.
\end{cases}$$ The combined influence of infinite moments and dependence produces concavity, pointing to multifractality in the sense of Definition \[defE\]. In [@balanca2013], the spectrum was established for $\alpha \in [1,2)$, $H\in (0,1)$ and the long-range dependence case $H>1/\alpha$: $$\label{LFSMspec}
d(h) =
\begin{cases}
\alpha (h-H)+1, & \text{if } \ h \in [H-\frac{1}{\alpha}, H],\\
-\infty, & \text{otherwise}.
\end{cases}$$ It is known that in the case $H<1/\alpha$ sample paths are nowhere bounded which explains the assumptions. Also, increments of the LFSM are ergodic (see e.g. [@cambanis1987ergodic]). Since $\alpha=\overline{q}$ is the tail index, Theorem \[thm:boundsHsssi\] gives sharp bounds on the support of the spectrum.
One can easily check that multifractal formalism can not be achieved with any of the definitions considered, except the empirical one. Indeed, it holds that $$d(h)= \inf_{q \in [0,\infty)} \left( hq - \tau_{\infty}(q) +1\right).$$ It is a curiosity that if we ignorantly estimate the scaling function using non-existing moments we get the correct spectrum.
### Inverse stable subordinator
Inverse stable subordinator $\{X(t)\}$ is a non-decreasing $\alpha$-ss stochastic process, for some $\alpha \in (0,1)$. However application of the results of the previous section is not straightforward as it has non-stationary increments. Yet we can prove that it has a trivial spectrum defined only in point $\alpha$.
To derive the lower bound we use Theorem \[thm:Kolmogorov-Chentsov\]. First recall that $a^{\alpha}+ b^{\alpha} \leq (a+b)^{\alpha}$ for $a,b\geq0$ and $\alpha \in (0,1)$. Taking $a=t-s$, $b=s$ when $t\geq s$ and $a=t$, $b=s-t$ when $t<s$ gives that $|t^{\alpha} - s^{\alpha}| \leq |t-s|^{\alpha}$. Since $\{X(t)\}$ has finite moments of every positive order we have for arbitrary $q>0$ and $t,s>0$ $$E|X(t)-X(s)|^q = |t^{\alpha} - s^{\alpha}|^q E |X(1)|^q \leq E|X(1)|^q |t-s|^{1 + \alpha q - 1}.$$ By Theorem \[thm:Kolmogorov-Chentsov\] there exists modification which is a.s. locally Hölder continuous of order $\gamma< \alpha-1/q$. Since $q$ can be taken arbitrarily large, we can get the modification such that a.s. $H(t) \geq \alpha$ for every $t \in [0,T]$.
For the lower bound we use Theorem \[thm3\]. Given $\gamma>\alpha$ we choose $m \in \mathbb{N}$ such that $m>1/(\gamma - \alpha)$. If $\{Y(t)\}$ is the corresponding stable subordinator, from the property $\{ X(t) \leq a \} = \{ Y(a) \geq t \}$ we have for every $t_1<t_2$ and $a>0$ $$\{X(t_2)-X(t_1) \leq a\}= \{ Y_{X(t_1)+a} \geq t_2 \} = \{ Y_{X(t_1)+a} -t_1 \geq t_2 - t_1\}.$$ By [@bertoin1998levy Theorem 4, p. 77], for every $t_1>0$, $P(Y_{X(t_1)}>t_1)=1$, thus on this event $$\{ Y_{X(t_1)+a} -t_1 \geq t_2 - t_1\} \subset \{ Y_{X(t_1)+a} - Y_{X(t_1)} \geq t_2 - t_1\}.$$ Now by the strong Markov property choosing $t$ small enough and stationarity of increments of $\{Y(t)\}$ we have $$\begin{aligned}
&P \left( \max_{l=1,\dots,m} \left| X(s+lt) - X(s+(l-1)t) \right| \leq C t^{\gamma} \right) \\
&= P \left( X(s+t) - X(s) \leq C t^{\gamma}, \dots, X(s+mt) - X(s+(m-1)t) \leq C t^{\gamma}\right)\\
&\leq P \left( Y_{X(s)+C t^{\gamma}} - Y_{X(s)} \geq t, \dots, Y_{X(s+(m-1)t)+Ct^{\gamma}} - Y_{X(s+(m-1)t)} \geq t \right)\\
&\leq \left( P \left( Y(Ct^{\gamma}) \geq t \right) \right)^m = \left( P \left( Y(1) \geq C^{-\frac{1}{\alpha}} t^{1- \frac{\gamma}{\alpha}} \right) \right)^m \leq \left( C_1 t^{\gamma-\alpha} \right)^m,\end{aligned}$$ by the regular variation of the tail. Due to choice of $m$, $m(\gamma-\alpha)>1$. This property of the first-passage process has been noted in [@bertoin1998levy p. 96]
Lévy processes
--------------
Suppose $\{X(t), t \geq 0\}$ is a Lévy process. The Lévy processes in general do not satisfy the moment scaling of the form . The only such examples are the BM and the $\alpha$-stable Lévy process. It was shown in [@GL] that the data from these processes will behave as obeying the scaling relation . If $X(1)$ is zero mean with heavy-tailed distribution with tail index $\alpha$ and if $\Delta t_i$ in is of the form $T^{\frac{i}{N}}$ for $i=1,\dots,N$, then for every $q>0$ as $T,N \to \infty$ the empirical scaling function will asymptotically behave as $$\label{tau}
\tau_{\infty}(q)=
\begin{cases}
\frac{q}{\alpha}, & \text{if } 0<q\leq \alpha \ \& \ \alpha\leq 2,\\
1, & \text{if } q>\alpha \ \& \ \alpha\leq 2,\\
\frac{q}{2}, & \text{if } 0<q\leq \alpha \ \& \ \alpha> 2,\\
\frac{q}{2}+\frac{2(\alpha-q)^2 (2\alpha+4q-3\alpha q)}{\alpha^3 (2-q)^2}, & \text{if } q>\alpha \ \& \ \alpha> 2.
\end{cases}$$ See [@GL] and [@GJLT] for the proof and more details. This shows that estimating the scaling function under infinite moments is influenced by the value of the tail index $\alpha$ and will yield a concave shape of the scaling function.
Local regularity of Lévy processes has been established in [@jaffard1999] and extended in [@balanca2013] under weaker assumptions. Denote by $\beta$ the Blumenthal-Getoor (BG) index of a Lévy process, i.e. $$\beta=\inf \left\{ \gamma\geq 0 : \int_{|x|\leq 1} |x|^{\gamma} \pi (dx) < \infty \right\},$$ where $\pi$ is the corresponding Lévy measure. If $\sigma$ is a Brownian component of the characteristic triplet, define $$\beta' =
\begin{cases}
\beta , & \text{if } \ \sigma=0,\\
2, & \text{if } \ \sigma\neq 0.\\
\end{cases}$$ Multifractal spectrum of the Lévy process is given by $$\label{spectrumLevyP}
d(h) =
\begin{cases}
\beta h, & \text{if } \ h \in [0, 1/\beta'),\\
1, & \text{if } \ h = 1/\beta',\\
-\infty, & \text{if } \ h \in (1/\beta',+\infty].
\end{cases}$$ Thus the most Lévy processes have a non-trivial spectrum. Moreover, the estimated scaling function and the spectrum are not related as they depend on the different parts of the Lévy measure. Behaviour of the estimated scaling function is governed by the tail index which depends on the behaviour of the Lévy measure at infinity since for $q>0$, $E|X(1)|^q<\infty$ is equivalent to $\int_{|x|>1} |x|^q \pi (dx) < \infty$. On the other hand, the spectrum is determined by the behaviour of $\pi$ around origin, i.e. by the BG index. The discrepancy happens as there is no exact scaling in the sense of or . If there is an exact scaling property, like in the case of the stable process, spectrum can be estimated correctly. It is therefore important to check the validity of relation from the data. This may be problematic as it is hard to distinguish exact scaling from the asymptotic one exhibited by a large class of processes.
As there is no exact moment scaling, Propositions \[prop1\] and \[prop2\] generally do not hold. Thus, in order to establish bounds on the support of the spectrum we use other criteria from Section \[sec3\]. We present two analytically tractable examples to illustrate the use of these criteria.
### Inverse Gaussian Lévy process
Inverse Gaussian Lévy process is a subordinator such that $X(1)$ has an inverse Gaussian distribution $IG(\delta, \lambda)$, $\delta>0, \lambda\geq 0$, given by the density $$f(x)=\frac{\delta}{\sqrt{2\pi}} {\mathop{}\!\mathrm{e}}^{\delta \lambda} x^{-3/2} \exp \left\{ -\frac{1}{2} \left( \frac{\delta^2}{x} + \lambda^2 x \right) \right\}, \quad x>0.$$ The expression for the cumulant reveals that for each $t$ $X(t)$ has $IG(t\delta,\lambda)$ distribution. Lévy measure is absolutely continuous with density given by $$g(x)=\frac{\delta}{\sqrt{2\pi}} y^{-3/2} \exp \left\{ -\frac{\lambda^2 y }{2} \right\}, \quad x>0,$$ thus the BG index is $\beta=1/2$. See [@eberlein2004generalized] for more details. Inverse Gaussian distribution has moments of every order finite and for every $q \in \mathbb{R}$ we can express them as $$\begin{aligned}
E |X(1)|^q &= \int_0^{\infty} x^q f(x) dx = \frac{\delta}{\sqrt{2\pi}} {\mathop{}\!\mathrm{e}}^{\delta \lambda} \left( \frac{2}{\lambda^2} \right)^{q-1/2} \int_0^{\infty} x^{q-3/2} \exp \left\{ -x - \frac{\delta^2 \lambda^2}{4 x} \right\} dx\\
&=\frac{\delta}{\sqrt{2\pi}} {\mathop{}\!\mathrm{e}}^{\delta \lambda} \left( \frac{2}{\lambda^2} \right)^{q-1/2} K_{-q+\frac{1}{2}} ( \delta \lambda ) 2 \left( \frac{\delta \lambda}{2} \right)^{q-\frac{1}{2}} = \sqrt{\frac{2}{\pi}} {\mathop{}\!\mathrm{e}}^{\delta \lambda} \delta^{q+\frac{1}{2}} \lambda^{-q+\frac{1}{2}} K_{-q+\frac{1}{2}} ( \delta \lambda ).\end{aligned}$$ where we have used [@olver2010 Equation 10.32.10] and $K_{\nu}$ denotes the modified Bessel function of the second kind. This implies that $$E |X(t)|^q = \sqrt{\frac{2}{\pi}} {\mathop{}\!\mathrm{e}}^{t \delta \lambda} t^{q+\frac{1}{2}} \delta^{q+\frac{1}{2}} \lambda^{-q+\frac{1}{2}} K_{-q+\frac{1}{2}} ( t \delta \lambda ) \sim C t^{q+\frac{1}{2}} t^{-|-q+\frac{1}{2}|}, \ \text{ as } t \to 0,$$ where we have used $K_{\nu}(z) \sim \frac{1}{2} \Gamma(\nu) (\frac{1}{2} z )^{-\nu}$ for $z>0$ and $K_{-\nu}(z)=K_{\nu}(z)$. For any choice of $\gamma>0$ condition (i) of Lemma \[lemma:kolconditions\] cannot be fulfilled, so the best we can say is that the lower bound is $0$, in accordance with . Since negative order moments are finite, Lemma \[lemma2\] yields the sharp upper bound on the spectrum. Indeed, given $\gamma>1/\beta=2$ we have for $q<1/(2-\gamma)<0$ $$P \left( |X(T) \leq C t^{\gamma} \right) \leq \frac{E|X(t)|^q}{t^{\gamma q}} \leq C_1 t^{-q (\gamma-2)}.$$ It follows that the upper bound is $2$ which is exactly the reciprocal of the BG index.
### Tempered stable subordinator
Positive tempered stable distribution is obtained by exponentially tilting the Lévy density of the totally skewed $\alpha$-stable subordinator, $0<\alpha<1$. Tempered stable subordinator is a Lévy process $\{X(t)\}$ such that $X(1)$ has positive tempered stable distribution given by the cumulant function $$\Phi(\theta) = \log E \left[ {\mathop{}\!\mathrm{e}}^{-\theta X(1)} \right] = \delta \lambda - \delta \left( \lambda^{1/\alpha} + 2 \theta \right)^{\alpha},\quad \theta \geq 0,$$ where $\delta$ is the scale parameter of the stable distribution and $\lambda$ is the tilt parameter. In this case BG index is equal to $\alpha$ (see [@schoutens2003levy] for more details). We use Lemma \[lemma2\] for $\gamma>\alpha$ to get $$P \left( |X(T) \leq C t^{\gamma} \right) \leq {\mathop{}\!\mathrm{e}}^{-1} E \left[ {\mathop{}\!\mathrm{e}}^{-\frac{X(t)}{Ct^{\gamma}}} \right] = {\mathop{}\!\mathrm{e}}^{-1} {\mathop{}\!\mathrm{e}}^{t \Phi (C^{-1} t^{-\gamma}) } = O( {\mathop{}\!\mathrm{e}}^{-t^{1-\gamma/\alpha} }), \ \text{ as } t \to 0,$$ As this decays faster than any power of $t$ as $t\to 0$, the upper bound follows.
Multiplicative cascade
----------------------
Although it is ambiguous what multifractality means, some models are usually studied in this sense. One of the first models of this kind is the multiplicative cascade. Cascades are actually measures, but can be used to construct non-negative increasing multifractal processes. Discrete cascades satisfy only discrete scaling invariance, in the sense that Definition \[defM\] is valid only for discrete time points. Another drawback of these processes is the non-stationarity of increments.
In [@bacry2003], a class of measures has been constructed having continuous scaling invariance and called multifractal random measures, thus generalizing the earlier cascade models. We will refer to a process obtained from these measures simply as the cascade. Since this is a notable example of a theoretically well developed multifractal process, we analyze it in the view of the results of the preceding section. Furthermore, we consider only one cascade process, the log-normal cascade. One can use cascades as subordinators to BM to build more general models called log-infinitely divisible multifractal processes (see [@bacry2003; @ludena2008lp] and the references therein).
Following properties hold for the log-normal cascade $\{ X(t)\}$ with parameter $\lambda^2$ ([@bacry2008continuous]). $\{ X(t)\}$ satisfies Definition \[defD\] with the random factor $M(c)=c {\mathop{}\!\mathrm{e}}^{2\Gamma_c}$ where $\Gamma_c$ is Gaussian random variable and can therefore be considered as a true multifractal. Moment scaling holds with $$\tau(q)=q(1+2 \lambda^2)-2 \lambda^2 q^2.$$ Increments of $\{ X(t) \}$ are heavy-tailed with tail index equal to $2/\lambda^2$ and moments of every negative order are finite provided $\lambda^2<1/2$ (see [@bacry2013lognormal Proposition 5]). Although the asymptotic behaviour of the scaling function defined by is unknown, there are results for the estimator defined by . Fixed domain asymptotic properties of this estimator for the multiplicative cascade has been established in [@ossiander2000statistical] where it was shown that when $j\to \infty$ estimator tends a.s. to $$\label{LGNtau}
\tau_{\infty}(q)=
\begin{cases}
h_0^{-} q, & \text{if } q\leq q_0^{-},\\
q(1+2 \lambda^2)-2 \lambda^2 q^2, & \text{if } q_0^{-} < q < q_0^{+}\\
h_0^{+} q, & \text{if } q\geq q_0^{+},
\end{cases}$$ where $$\begin{aligned}
\label{q0+-}
q_0^{+} &= \inf \{ q \geq 1 : q \tau'(q)-\tau(q) + 1 \leq 0 \}=\frac{1}{\sqrt{2 \lambda^2}},\\
q_0^{-} &= \sup \{ q \leq 0 : q \tau'(q)-\tau(q) + 1 \leq 0 \}=-\frac{1}{\sqrt{2 \lambda^2}}\end{aligned}$$ and $h_0^{+}=\tau'(q_0^{+})$, $h_0^{-}=\tau'(q_0^{-})$. So the estimator is consistent for a certain range of $q$, while outside this interval the so-called linearization effect happens. Similar results have been established in the mixed asymptotic framework ([@bacry2010multifractal]); see also [@ludena2014estimating] for a different method. The spectrum of the log-normal cascade is supported on the interval $\left[ 1 + 2 \lambda^2 - 2 \sqrt{2\lambda^2}, 1 + 2 \lambda^2 + 2 \sqrt{2\lambda^2}\right]$, given by $$d(h)= \inf_{q \in (-\infty,\infty)} \left( hq - \tau(q) +1\right) = 1- \frac{(h-1 - 2 \lambda^2)^2}{8 \lambda^2},$$ and the multifractal formalism holds ([@barral2002multifractal]).
Condition $\tau(q)>1$ of Proposition \[prop1\] yields $q \in (1,1/(2\lambda^2))$. We then get that $$H^-=1+2\lambda^2 - 2 \sqrt{2 \lambda^2}.$$ This is exactly the left endpoint of the interval where the spectrum of the cascade is defined, in accordance with Proposition \[prop1\]. This maximal lower bound is achieved for $q=1/\sqrt{2 \lambda^2}=q^+_0$. If $q^-$ is the point at which maximal lower bound $H^-$ is achieved, then $$\left( \frac{\tau(q)}{q} - \frac{1}{q} \right)'= \frac{1}{q^2} \left( q \tau'(q) - \tau(q) + 1 \right)$$ must be equal to $0$ at $q^-$. This is exactly defined in . Although the range of finite moments is not relevant for computing $H^-$ in this case, in general it can depend on $\overline{q}$.
Since all negative order moments are finite we get that $$\widetilde{H^+}=H^+=1+2\lambda^2 + 2 \sqrt{2 \lambda^2}$$ achieved for $q=-1/\sqrt{2 \lambda^2}$. Thus again the bound from Proposition \[prop2\] is sharp giving the right endpoint of the interval where the spectrum is defined.
Multifractal random walk
------------------------
With this example we want to show that we may have $\widetilde{H^+} \neq H^+$ and that the definition of the scaling function needs to be adjusted to avoid infinite moments of negative order. Multifractal random walk (MRW) driven by the log-normal cascade is a compound process $X(t)=B(\theta(t))$ where $B$ is a BM and $\theta$ is the independent cascade process (see [@bacry2003]). Multifractal properties of this process are inherited from those of the underlying cascade. $\{ X(t)\}$ satisfies Definition \[defD\] with the random factor $M(c)=c^{1/2} {\mathop{}\!\mathrm{e}}^{\Gamma_c}$ where $\Gamma_c$ is Gaussian random variable and the scaling function is given by $$\tau(q)=q\left( \frac{1}{2} + \lambda^2 \right)-\frac{ \lambda^2}{2} q^2.$$ The range of finite moments is $(-1, 1/\lambda^2)$ as explained in Subsection \[subsec3.4\]. The spectrum is defined on the interval $\left[ 1/2 + \lambda^2 - \sqrt{2\lambda^2}, 1/2 + \lambda^2 + \sqrt{2\lambda^2}\right]$ and given by $$d(h)= \inf_{q \in (-\infty,\infty)} \left( hq - \tau(q) +1\right) = 1- \frac{(h-1/2 - \lambda^2)^2}{2 \lambda^2}.$$ Random factor $M(c)$ is the source of multifractality, has the same scaling function, but all negative order moments are finite. Thus we get $$\begin{aligned}
H^{-} &= 1/2 + \lambda^2 - \sqrt{2\lambda^2},\\
\widetilde{H^+} &= \frac{3}{2}+\frac{3 \lambda^2}{2},\\
H^+ &= 1/2 + \lambda^2 + \sqrt{2\lambda^2}.\end{aligned}$$ $H^{-}$ and $H^+$ give the sharp bounds, while $\widetilde{H^+}$ is affected by the divergence of the negative order moments. This shows that when the multifractal process has infinite negative order moments, one should specify scaling in terms of the random factor.
Robust version of the partition function {#sec5}
========================================
In Section \[sec3\] using Corollary \[newComplementKC\] we managed to avoid the problematic infinite moments of negative order and prove results like Theorem \[thm:boundsHsssi\] and Corollary \[cor:upperboundMF\]. When scaling function is estimated from the data, spurious concavity may appear for negative values of $q$ due to the effect of diverging negative order moments. We use the idea of Corollary \[newComplementKC\] to develop a more robust version of the partition function.
Instead of using plain increments in the partition function , we can use the maximum of some fixed number $m$ of the same length increments. This will make negative order moments finite for some reasonable range and prevent divergencies. The underlying idea also resembles the wavelet leaders method where leaders are formed as the maximum of the wavelet coefficients over some time scale (see [@jaffard2004wavelet]). Since $m$ is fixed, this does not affect the true scaling. Same idea can be used for $q>0$ as Lemma \[lemma:kolconditions\] indicates this condition can also explain the spectrum. It is important to stress that the estimation of the scaling function makes sense only if the underlying process is known to possess scaling property of the type .
Suppose $\{X(t)\}$ has stationary increments and $X(0)=0$. Divide the interval $[0,T]$ into $\lfloor T / (m \Delta t) \rfloor$ blocks each consisting of $m$ increments of length $\Delta t$ and define the modified partition function: $$\label{modifiedpartitionfun}
\widetilde{S}_q(T,\Delta t) = \frac{1}{\lfloor T / (m \Delta t) \rfloor} \sum_{i=1}^{\lfloor T / (m \Delta t) \rfloor} \max_{l=1,\dots,m} \left| X ( i m \Delta t+l \Delta t) - X ( i m \Delta t+ (l-1) \Delta t) \right|^q.$$ One can see $\widetilde{S}_q(T,\Delta t)$ as a natural estimator of . Analogously we define the modified scaling function as in by using $\widetilde{S}_q(n,\Delta t_i)$: $$\label{modifiedtauhat}
\widetilde{\tau}_{N,T}(q) = \frac{\sum_{i=1}^{N} \ln {\Delta t_i} \ln \widetilde{S}_q(n,\Delta t_i) - \frac{1}{N} \sum_{i=1}^{N} \ln {\Delta t_i} \sum_{j=1}^{N} \ln \widetilde{S}_q(n,\Delta t_i) }{ \sum_{i=1}^{N} \left(\ln {\Delta t_i}\right)^2 - \frac{1}{N} \left( \sum_{i=1}^{N} \ln {\Delta t_i} \right)^2 }.$$ One can alter the definition only for $q<0$ although there is no much difference between two forms when $q>0$.
To illustrate how this modification makes the scaling function more robust we present several examples comparing and . We generate sample paths of several processes and estimate the scaling function by both methods. We also estimate the spectrum numerically using . Results are shown in Figures \[Fig1\]-\[Fig4\]. Each figure shows the estimated scaling functions and the estimated spectrum by using standard definition and by using . We also added the plots of the scaling function that would yield the correct spectrum via multifractal formalism and the true spectrum of the process.
For the BM (Figure \[Fig1\]) and the $\alpha$-stable Lévy process (Figure \[Fig2\]) we generated sample paths of length $10000$ and we used $\alpha=1$ for the latter. LFSM (Figure \[Fig3\]) was generated using $H=0.9$ and $\alpha=1.2$ with path length $15784$ (see [@stoev2004simulation] for details on the simulation algorithm used). Finally, MRW of length $10000$ was generated with $\lambda^2=0.025$ (Figure \[Fig4\]). For each case we take $m=20$ in defining the modified partition function .
In all the examples considered, the modified scaling function is capable of yielding the correct spectrum of the process with the multifractal formalism. As opposed to the standard definition, it is unaffected by diverging negative order moments. Moreover, it captures the divergence of positive order moments which determines the shape of the spectrum.
Appendix {#appendix .unnumbered}
========
We provide a brief overview of different classes of stochastic processes that are used along the paper.
Hermite process $\{Z_H^{(k)}(t), t \geq 0 \}$ with $H \in (1/2,1)$ and $k \in \mathbb{N}$ can be defined as $$Z_H^{(k)} (t) = C(H,k) \int_{\mathbb{R}^k}^{'} \int_0^t \left( \prod_{j=1}^k (s-y_j)_{+}^{-( \frac{1}{2} + \frac{1-H}{k})} \right) {\mathop{}\!\mathrm{d}}s {\mathop{}\!\mathrm{d}}B(y_1) \cdots {\mathop{}\!\mathrm{d}}B(y_k), \quad t \geq 0,$$ where $\{B(t) \}$ is the standard BM and the integral is taken over $\mathbb{R}^k$ except the hyperplanes $y_i=y_j$, $i \neq j$. Constant $C(H,k)$ is chosen such that the $E [Z_H^{(k)}(1) ]^2=1$ and $(x)_{+}=\max (x,0)$. Hermite processes are $H$-sssi. For $k=1$ one gets the FBM and for $k=2$ Rosenblatt process. See e.g. [@embrechts2002] for more details.
Lévy process is a process with stationary and independent increments starting form $0$. Given an infinitely divisible distribution there exists a Lévy process such that $X(1)$ has this distribution. Moreover, characteristic function can be uniquely represented by the Lévy-Khintchine formula. See [@bertoin1998levy] and [@schoutens2003levy] for more details.
$\alpha$-stable Lévy process is a process such that $X(1)$ has stable distribution with stability index $0<\alpha<2$. In general, a random variable $X$ has an $\alpha$-stable distribution with index of stability $\alpha \in (0,2)$, scale parameter $\sigma \in (0,\infty)$, skewness parameter $\beta\in [-1,1]$ and shift parameter $\mu \in \mathbb{R}$, denoted by $X \sim S_{\alpha}(\sigma,\beta,\mu)$ if its characteristic function has the following form $$E \exp \{ i\zeta X \} =
\begin{cases}
\exp\left\{-\sigma^{\alpha}|\zeta|^{\alpha}\left( 1-i\beta{\rm sign}(\zeta)\tan{\frac{\alpha\pi}{2}}+i\zeta\mu\right) \right\}, & \text{if } \alpha\ne1,\\
\exp\left\{-\sigma|\zeta|\left(1-i\beta\frac{2}{\pi}{\rm sign}(\zeta)\ln{|\zeta|}+i\zeta\mu\right)\right\}, & \text{if } \alpha=1,
\end{cases}
\quad \zeta \in \mathbb{R}.$$ Stable Lévy process is $1/\alpha$-sssi.
Linear fractional stable motion (LFSM) is an example of a process with heavy-tailed and dependent increments. LFSM can be defined through the stochastic integral $$X(t)=\frac{1}{C_{H,\alpha}} \int_{\mathbb{R}} \left( (t-u)_{+}^{H-1/\alpha} - (-u)_{+}^{H-1/\alpha} \right) dL_{\alpha}(u),$$ where $\{L_{\alpha}\}$ is a strictly $\alpha$-stable Lévy process, $\alpha \in (0,2)$, $0<H<1$ and $(x)_{+}=\max (x,0)$. The constant $C_{H,\alpha}$ is chosen such that the scaling parameter of $X(1)$ equals $1$, i.e. $$C_{H,\alpha} = \left( \int_{\mathbb{R}} \left| (1-u)_{+}^{H-1/\alpha} - (-u)_{+}^{H-1/\alpha} \right|^{\alpha} du \right)^{1/ \alpha}.$$ It is then called standard LFSM. The LFSM is $H$-sssi. Setting $\alpha=2$ in the definition reduces the LFSM to the FBM. By analogy to this process, the case $H>1/\alpha$ is referred to as a long-range dependence and the case $H<1/\alpha$ as negative dependence. The parameter $\alpha$ governs the tail behaviour of the marginal distributions implying, in particular, that $E|X(t)|^q=\infty$ for $q \geq \alpha$. For more details see [@samorodnitsky1994].
A Lévy process $\{Y(t)\}$ such that $Y(1) \sim S_{\alpha}(\sigma,1,0)$, $0<\alpha<1$ is referred to as the stable subordinator. It is nondecreasing and $1/\alpha$-sssi. Inverse stable subordinator $\{ X(t)\}$ is defined as $$X(t) = \inf \left\{ s >0 : Y(u) >t \right\}.$$ It is $\alpha$-ss with dependent, non-stationary increments and corresponds to a first passage time of the stable subordinator strictly above level $t$. For more details see [@meerschaert2013inverse] and references therein.
[^1]: [email protected]
[^2]: [email protected]
|
{
"pile_set_name": "ArXiv"
}
|
{
"word": "Dermatology",
"definitions": [
"The branch of medicine concerned with the diagnosis and treatment of skin disorders."
],
"parts-of-speech": "Noun"
}
|
{
"pile_set_name": "Github"
}
|
Entertainment Acts
Our roster of amazing talent is just the beginning of what Fantasy Entertainment can provide you. Our roster spans from a to z and every act is as unique as the last. Acrobats, aerialist, Fire Performers, hand and human balancing in ways that AWE! the audience and captivates their creativity. Las Vegas's Fantasy Entertainment Production, is your one stop destination to live entertainment and spectacular corporate events.
|
{
"pile_set_name": "Pile-CC"
}
|
745 F.2d 380
The YOUGHIOGHENY AND OHIO COAL COMPANY, Petitioner,v.BENEFITS REVIEW BOARD, United States Department of Labor;Arthur W. Sullivan, Director, Office of Workers'Compensation Programs, United StatesDepartment of Labor, Respondents.
No. 83-3488.
United States Court of Appeals,Sixth Circuit.
Argued June 7, 1984.Decided Oct. 5, 1984.
Gerald P. Duff, Kinder, Kinder & Hanlon, John G. Paleudis (argued), St. Clairsville, Ohio, for petitioner.
Paul A. Pachuta (argued), Reynoldsburg, Ohio, J. Michael O'Neil, Troy B. Smith (LC) (argued), U.S. Dept. of Labor, Washington, D.C., for respondent Benefits Review Bd.
Before ENGEL and MERRITT, Circuit Judges, and CELEBREZZE, Senior Circuit Judge.
PER CURIAM.
1
The Youghiogheny and Ohio Coal Company (the Company) petitions for review of a decision of the United States Department of Labor Benefits Review Board (the Board) in which the Board held that it did not have jurisdiction to review the decision of a Department of Labor hearing officer.
2
This action began when Arthur Sullivan filed a claim for benefits with the Department of Labor pursuant to the Black Lung Benefits Act. 30 U.S.C. Secs. 901-45 (1982). The Department of Labor determined that Sullivan was eligible for benefits and that the Company was potentially liable for the payment of those benefits.
3
The Company requested and obtained a hearing concerning the award of benefits. The hearing officer decided in favor of Sullivan. The Company appealed this decision to the Benefits Review Board which remanded the case to the hearing officer for further proceedings. On July 11, 1978, the hearing officer reaffirmed his previous decision. The same day, the Company filed a motion with the hearing officer requesting another hearing. On July 27, 1978, the hearing officer denied this motion.
4
The Company did not receive notice of the July 27 decision until June 9, 1982. On June 17, 1982, the Company appealed the hearing officer's decision of July 27, 1978. The Benefits Review Board declined to review the decision because the notice of appeal was not timely filed. The Board held that the improper mailing of a decision or order to a party does not extend the time for filing a notice of appeal. The Company petitions this court to reverse the determination of the Benefits Review Board.
5
When the hearing officer entered the order denying the Company's motion for an additional hearing, it became his duty under 33 U.S.C. Sec. 919(e) to file the decision in the Office of the Deputy Commissioner of the Department of Labor. The Office of the Deputy Commissioner was then responsible for sending a copy of the decision to the Company. 33 U.S.C. Sec. 919(e) (1982); 20 C.F.R. 725.484 (1978).1 Notice of the hearing officer's decision was not sent until June, 1982. In declining to review the hearing officer's decision, the Board took the position that the improper mailing of a decision or order to the parties does not extend the time for filing a notice of appeal. See Sauls v. Armco Steel Corporation, 5 Black Lung Rptr. 1-712 (Benefits Review Board 1983). However, in its brief on appeal the Board has now acknowledged that according to 33 U.S.C. 919(e) and 20 C.F.R. 725.484, "the Hearing Officer's July 27, 1978 decision denying the employer's request for further hearing in this case did not become effective, and the thirty-day appeal time did not begin to run until June 9, 1982, when [the decision] was served on the parties." In Bennett v. Director, 717 F.2d 1167, 1168-69 (7th Cir.1983), the Seventh Circuit agreed with this interpretation:
6
According to 20 C.F.R. Sec. 725.478, a decision of an ALJ is "considered to be filed in the office of the deputy commissioner" on the date it is issued and served on the parties. Under these regulations, the petitioner had thirty days after he was served with ALJ Levin's decision and order on March 24, 1980, in which to appeal the decision to the BRB. (emphasis added)
7
We agree that the Board's interpretation of the statute and regulations is correct.
8
Notwithstanding this change of position, the Board persists in its refusal to allow the Company to appeal the hearing officer's decision here, because of (1) the employer's failure to take any action during the four-year period between July, 1978 and June, 1982 to obtain the hearing officer's decision, and (2) the substantial prejudice to the interests of the miner that would allegedly result were the Review Board reversed. In the opinion of the court, the Board may not ignore the statute and regulations for these reasons. In addition, the Board's argument fails because we find neither prejudice to the miner nor unreasonable delay in this case.
9
The Board contended that the miner would be prejudiced because, should the miner ultimately lose his benefits as a result of the petitioner's appeal to the Board, the miner would be obligated to reimburse the Black Lung Disability Trust Fund for the benefits he has received since 1974. 20 C.F.R. 725.540-. 544 (1983). However, these regulations also ensure that a recipient of overpayments will not be required to repay the benefits if he is without fault and if repayment would deprive him of income required for ordinary and necessary living expenses.2 Thus, the law and regulations already provide protection against any substantial injury which might result from the delay.
10
The Board also claims that the Company was guilty of unreasonable delay in asserting its rights because it did not inquire about the hearing officer's decision for four years after it was rendered.
11
The appeal by the Company was taken within thirty days of the date the decision was served upon the parties. It was, therefore, timely. To hold otherwise where the delay was caused in the first instance by the Board's neglect would introduce uncertainty into the administrative scheme. This, in our view, is too high a price to pay for the altogether speculative conclusion, unsupported by the record, that the Company should have assumed the Board's neglect and acted earlier to protect itself.
12
Accordingly, the petition for review is granted and the cause REMANDED to the Benefits Review Board for reinstatement of the appeal.
1
20 C.F.R. 725.484 was in effect at the time of the hearing officer's decision. The regulations were later changed to require the administrative law judge, instead of the deputy commissioner, to serve by certified mail any decision or order issued in a case. 20 C.F.R. 725.478, .479 (1983)
2
20 C.F.R. Sec. 725.542 (1983) provides:
There shall be no adjustment or recovery of an overpayment in any case where an incorrect payment has been made with respect to an individual:
(a) Who is without fault, and where
(b) Adjustment or recovery would either:
(1) Defeat the purpose of title IV of the Act, or
(2) Be against equity and good conscience.
20
C.F.R. Sec. 725.543 (1983) provides:
The standards for determining the applicability of the criteria listed in Sec. 725.542 shall be the same as those applied by the Social Security Administration under Secs. 410.561-561h of this title.
29
C.F.R. Sec. 410.561(c) (1983) provides in part that
"defeat the purpose of title IV" for purposes of this subpart, means defeat the purpose of benefits under this title, i.e., to deprive a person of income required for ordinary and necessary living expenses. This depends upon whether the person has an income or financial resources sufficient for more than ordinary and necessary needs, or is dependent upon all of his current benefits for such needs.
|
{
"pile_set_name": "FreeLaw"
}
|
Role of Tyr306 in the C-terminal fragment of Clostridium perfringens enterotoxin for modulation of tight junction.
We previously reported that the C-terminal fragment of Clostridium perfringens enterotoxin (C-CPE) is a novel type of absorption enhancer that interacts with claudin-4 and that Tyr306 of C-CPE plays a role in ability of C-CPE to modulate barrier of tight junctions. In the current study, to investigate effects of Tyr306 on the C-CPE activity, we prepared some C-CPE mutants substituted Tyr306 with Trp (Y306W), Phe (Y306F) and Lys (Y306K). We found that Y306W and Y306F mutants of C-CPE had claudin-4 binding affinities and effects on the barrier function of tight junctions, whereas both of these properties were greatly reduced with the Y306K mutant. Finally, the Y306K but not the Y306F and Y306W mutants had reduced abilities to enhance absorption in rat jejunum. These results indicate that aromatic and hydrophobic properties, not hydrogen bonding potential, of Tyr306 are involved in the interaction of C-CPE with claudin-4 and in the modulation of the tight junction barrier function by C-CPE.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
The invention relates to interposer assemblies of the type which are sandwiched between substrates to form electrical connections between opposed pads on the substrates, and to cantilever contacts for forming electrical connections with contact pads.
Interposer assemblies typically include plastic plates with through passages and contacts in the passages for forming electrical connections between opposed contact pads.
Interposer assemblies form electrical connections between contact pads arranged in a very close proximity to each other. The pads may be arranged on a one millimeter center-to-center grid. Each assembly may include as many as 961 contacts with four interposer assemblies mounted in a single frame with a total of 3,844 contacts in the frame. The contacts must establish reliable electrical connections with the pads when the assemblies are sandwiched together between circuit members. Failure of a single contact to make a reliable connection renders the entire frame useless.
Contacts in interposer assemblies include contact surfaces which mechanically engage the contact pads and form electrical connections with the contact pads. Conventional interposer assemblies have single surface contacts which engage each pad to form a single electrical connection with each pad. The contact may wipe along the pad to improve the quality of the electrical connection. Impurities, oxides or contaminants on either the contact surface or the pad can impair the single surface electrical connections with the pads. Contacts used in interposer assemblies are typically symmetrical about the center of the insulating plate, each including a separate spring which biases a single contact surface against a pad.
Accordingly, there is a need for an improved interposer assembly in which each contact makes redundant contacts with each pad so that when the assembly is sandwiched between overlying and underlying substrates each contact establishes two reliable electrical connections with each pad. The connections should have small contact areas to increase the contact pressure between the contact and the pad. Wiped high contact pressure redundant connections would provide reliable interposer assembly electrical connections. There is also a need for a method of making a contact with spaced contact points from strip stock, which may be very thin and difficult to form.
Further, there is need for a spring contact having spaced contact points for engaging a contact pad and forming redundant wiped high pressure redundant electrical connections between the contact and the pad.
The invention is an improved interposer assembly including contacts mounted in passages extending through an insulating plate with each contact having two contact points on each end of the contact. When the interposer assembly is sandwiched between overlying and underlying substrates the pairs of contact points are brought into wiped pressure engagement with overlying and underlying pads and forms redundant electrical connections with the pads.
The contact points are formed on rounded edge corners of the contacts and have small contact areas, resulting in high contact pressure and reliable electrical connections despite debris, oxides and surface contaminants on the contacts and pads.
Each contact includes two tapered spring arms joined to a central portion. A pair of contact points is formed on the outer end of each spring arm. The points project above and below the plate. The arms are independently deflected during compression of the contact by overlying and underlying substrates. The spring arms may include retention legs extending outwardly from the contact points for engagement with adjacent cam surfaces. Compression of the contacts moves the ends of the legs along the cam surface to further stress the spring and increase contact force.
Additionally, the invention relates to a contact having a beam with a mounting end and a contact end carrying a pair of laterally spaced contact points. Movement of a pad against the contact points stresses the beam and moves the contact points laterally along the pad to form wiped high pressure electrical connections between the contact and the pad. The contact points are rounded edge corners and have a very small contact area in order to increase contact pressure and form redundant wiped high pressure electrical connections between the contact and the pad. The contact points are preferably located on opposite sides of the spring arm and stabilize the contact against twisting.
Other objects and features of the invention will become apparent as the description proceeds, especially when taken in conjunction with the accompanying drawings illustrating the invention, of which there are five sheets of drawings and two embodiments.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
The effect of a novel thromboxane synthetase inhibitor dazmegrel (UK38485) on random pattern skin flaps in the rat.
In the light of work (Emerson and Sykes, 1981) suggesting a protective effect of prostacyclin on the random pattern dorsal skin flap of the rat, the ability of dazmegrel, a novel thromboxane synthetase inhibitor, to protect against ischaemic necrosis in similar flaps has been assessed. No such protective effect could be demonstrated and a possible explanation of the earlier observations is offered.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
10 Ways You Can Pair A Blazer & Rock Any Occasion!
RELATED POSTS
Blazers can easily brighten and smarten any attire. It doesn’t matter if it’s Indian or Western, or if it’s a dress or pants. Blazers go with any and everything. Bollywood actresses are following this trend and the style is extremely popular with people all over the world.
With fusion wear becoming popular in the country, you can go quirky at wedding dos. Just pair heavy blazers with sarees and kurtas. It is absolutely worth the fuss. Scroll down and check out ways to pair your blazer.
Pro tip: Always keep a black and a white blazer in your wardrobe. They are lifesavers!
With Cropped Trousers/Jeans/Culottes
Distressed jeans and cropped pants look absolutely great with a blazer. We even like to pair them with a multi-print culotte for the day look.
With a Plaid Shirt
For a smart corporate look go for a pair shirt under a smart-cut blazer. Go for colors that are not too loud.
Pair a plain or monochrome tee with a printed colorful blazer, and a graphic one with a solid color blazer. It’s great for casual Fridays or Sunday brunches in case you don’t feel like going the shirt way.
With a pair of Shorts/Mini
Bollywood is a fan of this combo and we are with them on this. It’s comfortable and looks very smart.
With a Crop Top
Pair it with a crop top. you can do a stylish bralette for late night dinners or try Sonam Kapoor‘s geek chic approach for a day look.
With a LBD
The little black dress with a white blazer is the ultimate combo. You can never go wrong with this one. But don’t stop yourself there, experiment with little colors to add that extra zing to your LBD.
With a Maxi Skirt/Dress
Pair your blazer with a flowy maxi and you’ll look like a star. Wear a knee-long one to add drama to the outfit.
With a Saree
Get your quirk on. Pair an embroidered blazer with a saree and you could give these Bollywood divas a run for their money. Try it at the next wedding you attend!
With a Kurta
Blazer with a kurta looks great. Instead of a short jacket like is the usual norm, mix and match a funky colorful one with a plain long kurta. You can even try it with the dhoti pants like our Bollywood queens.
With Nothing Else
Go without an inner and wear just the blazer. A double-breasted blazer like the one Cheryl is wearing looks smart and elegant.
|
{
"pile_set_name": "Pile-CC"
}
|
include/linux/blkdev.h | 2 ++
mm/swapfile.c | 3 +++
2 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 5c80189..af027de 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1278,6 +1278,8 @@
unsigned long long);
int (*revalidate_disk) (struct gendisk *);
int (*getgeo)(struct block_device *, struct hd_geometry *);
+ /* this callback is with swap_lock and sometimes page table lock held */
+ void (*swap_slot_free_notify) (struct block_device *, unsigned long);
struct module *owner;
};
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6c0585b..0c373bc 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -579,6 +579,7 @@ static int swap_entry_free(struct swap_info_struct *p,
count = p->swap_map[offset];
/* free if no reference */
if (!count) {
+ struct gendisk *disk = p->bdev->bd_disk;
if (offset < p->lowest_bit)
p->lowest_bit = offset;
if (offset > p->highest_bit)
@@ -587,6 +588,8 @@ static int swap_entry_free(struct swap_info_struct *p,
swap_list.next = p - swap_info;
nr_swap_pages++;
p->inuse_pages--;
+ if (disk->fops->swap_slot_free_notify)
+ disk->fops->swap_slot_free_notify(p->bdev, offset);
}
if (!swap_count(count))
mem_cgroup_uncharge_swap(ent);
|
{
"pile_set_name": "Github"
}
|
1. Field of the Invention
The present invention relates to an image stabilization control circuit for correcting a vibration-caused displacement of the optical axis, and an image pickup apparatus that includes said image stabilization control circuit.
2. Description of the Related Art
Digital still cameras and digital movie cameras (hereinafter generically referred to as digital cameras) have been widely used by general users. Various methods for correcting camera shake are proposed for users who are not familiar with how to handle the cameras properly and therefore are likely to encounter camera shake when taking pictures. Among those digital cameras available, there is one, mounted on a portable telephone, which serves as one of functions in the portable telephone and a certain type of them are so designed that the camera is held by one hand only. In such devices operated by a thumb in one hand, the camera shake is more likely to occur as compared with commonly used cameras held by two hands to take pictures.
The following method is in practical use today to correct such camera shake. That is, the optical axis is corrected by a vibration detecting element for detecting the vibration of a camera and a driver element that moves a lens position in such a direction as to cancel out the displacement caused by the vibration.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
function helperGetDateTime (date) {
return date.getTime()
}
module.exports = helperGetDateTime
|
{
"pile_set_name": "Github"
}
|
Stanford researchers have developed a new type of robot that can grow like vines across long distances without moving its whole body.
According to its developers, this robot can grow at speeds of 22 mph. The unique feature of this robot is a flexible polyethylene plastic tube that is stored within the core of its body. When internal air pressure is applied, this tip of the tubes comes out of the robot’s body and starts to grow. Designers have installed a camera at the tip of the plastic tube that transmits the video or images captured to the operator of the robot via a cable running through the body of the robot. Robot’s body is made up of different chambers. To make the robot steer right or left, the user has to inflate the one side of the robot more than the other.
The length of the robot is about 11 inches, but it can grow up to 72 meters (236 feet) after application of air pressure.
Elliot Hawkes, the lead author of this research says the inspiration to design this robot came from an English ivy plant growing around the corner of his bookshelf. Elliot Hawkes is a roboticist at the University of California, Santa Barbara.
“The body lengthens as the material extends from the end but the rest of the body doesn’t move,” explained Elliot Hawkes.
“The body can be stuck to the environment or jammed between rocks, but that doesn’t stop the robot because the tip can continue to progress as new material is added to the end.”
According to Allison Okamura, professor of mechanical engineering and senior author of the paper, they are “trying to understand the fundamentals of this new approach to getting mobility or movement out of a mechanism.”
Robot’s developers say their device can be used in a variety of applications such as in rescue operations or medical procedures, etc. When searching people in the rubble of a collapsed building, this robot could be placed at the entrance of the debris, and then it will grow like a vine, into the mass of stones and dirt. The rescuers will then get a view of the places beneath the rubble with the help of the camera. Robot’s designers claim their robot can intelligently move in a tight environment, supply water to trapped disaster victims, or can also be used to feed cables or form a temporary antenna. It can navigate the places that are inaccessible to drones, humans, or hard body robots.
Stanford researchers reveal that they have created the prototype of the robot by hand, but now want to switch focus to automatic manufacturing. They are also planning to do further testing of their device, investigating other material for its soft body, and using liquids for expansion.
The detailed information about this robot has been provided in journal Science Robotics.
Source: Stanford University
|
{
"pile_set_name": "OpenWebText2"
}
|
Q:
How often should I pay my student loans?
I've got ~$30K in student loan debt, spread out between different loans with different interest rates. My minimum payment is $190, but I'm in a financial position where I can afford about $750 per month.
It's my understanding that payments are applied first to accrued interest, then proportionally (more money goes to larger loans) for the first $190 (my minimum) of a payment, and finally any remaining money (in my case about $560) goes to the loan with the highest interest rate. Please correct me if I'm wrong on any of that.
This all being said, it stands to reason that paying $750 every month is better than paying 3 × $750 = $2,500 every three months because the accrued interest will be less overall. Likewise, it stands to reason that paying $750 every month would also be better than paying $750 ÷ 3 = $250 three times a month because only $250 − $190 = $60 minus any accrued interest (compounded daily) would be going toward my high-interest-rate loans.
Is there a happy balance between these extremes? Does it even matter all that much? Do my assumptions check out?
Here are my loans:
A:
Reading Great Lakes' page How Payments Are Applied, I think you are probably correct about how the payments are applied: Interest first, minimum on each loan next, then any extra is applied to the highest interest loan.
If I were you, I would make one payment a month, and I would make that payment as large as I possibly could. Trying to make more than one payment in a month is too complicated (and you aren't sure exactly how those payments get credited), and saving up for a big payment every few months is pointless and will cost you interest.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Cloth animation export Unity Stutter
I'm attempting cloth animation in Blender and exporting to Unity. My process is as follows:
-Pin one point of the cloth (for it to dangle from) and then apply armature motion to that pinned point
-Simulate cloth physics and export the resulting vertex data as a point cache (.mdd)
-Duplicate the cloth, remove physics, and import point cache data (basically a physics bake)
The animation works perfectly in Blender like so:
However, when it comes to importing to Unity, something in the export/import process introduces stutter like so:
Any help debugging this problem would be greatly appreciated.
A:
Leave 'Rig / Animation Type' on 'Generic'
Switch to 'Animations', deactivate both 'Resample Curves' and 'Anim. Compression'-checkboxes
Switch back to 'Rig / Animation Type', select 'Legacy'
Click 'Apply'
Check the Animation-Preview - no stuttering!
Drag the FBX onto the scene-view-window
IMPORTANT: click 'Revert' rather than 'Apply' when asked whether unapplied import settings should be applied
Downsides:
No 'loop / ping-pong' etc. functionalities (I'll try to create a script-based-solution as a workaround)
|
{
"pile_set_name": "StackExchange"
}
|
Retroviruses are major pathogens that can affect all vertebrates causing an extremely wide range of responses in infected animal hosts. For example, one of the most potent and lethal retroviruses is HIV-1, the agent that causes AIDS. Retroviruses are a large and diverse family of viruses that replicate by a unique process that is significantly different from other forms of viruses. The virion particles that make up retroviruses contain (+) strand genomic RNA. When the retrovirus enters a host cell, the (+) strand RNA is converted into double-stranded DNA through action of the enzyme reverse transcriptase (RT). This double-stranded DNA copy of the viral genome is called proviral DNA. The proviral DNA is then integrated into the host chromosomal DNA for replication by the action of the enzyme integrase (IN). Integration links the ends of linear proviral DNA to host genomic DNA. In a productive infection, proviral DNA acts as a template for the formation of retroviral particles and transcription of viral proteins, through the action of host RNA polymerase II. As integration of proviral DNA is necessary for replication, infected cells without integrated proviral DNA cannot spread infection. The formation of the integrated provirus is believed responsible for maintaining a persistent infection, for permanent entry into the host germ line and for mutagenic or oncogenic activities.
Current methods, licensed and approved by the FDA, for study of retroviruses such as HIV, include antibody-based assays, where antibodies are detected using ELISA (enzyme linked immunosorbent assay) or EIA (enzyme immunoassay) methods. PCR-based methods have also been used to study retroviruses. In these assays, DNA is detected by PCR amplification with virus-specific primers.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.