// packages/monocle/src/Ix/index.ts
// ets_tracing: off
import { pipe } from "@effect-ts/core/Function"
import type { Option } from "@effect-ts/core/Option"
import type { At } from "../At"
import * as _ from "../Internal"
import type { Iso } from "../Iso"
import type { Optional } from "../Optional"
// -------------------------------------------------------------------------------------
// model
// -------------------------------------------------------------------------------------
export interface Index<S, I, A> {
readonly index: (i: I) => Optional<S, A>
}
// -------------------------------------------------------------------------------------
// constructors
// -------------------------------------------------------------------------------------
export const fromAt = <T, J, B>(at: At<T, J, Option<B>>): Index<T, J, B> => ({
index: (i) => _.lensComposePrism(_.prismSome<B>())(at.at(i))
})
/**
* Lift an instance of `Index` using an `Iso`
*/
export const fromIso =
<T, S>(iso: Iso<T, S>) =>
<I, A>(sia: Index<S, I, A>): Index<T, I, A> => ({
index: (i) => pipe(iso, _.isoAsOptional, _.optionalComposeOptional(sia.index(i)))
})
export const indexArray: <A = never>() => Index<ReadonlyArray<A>, number, A> =
_.indexArray
export const indexRecord: <A = never>() => Index<
Readonly<Record<string, A>>,
string,
A
> = _.indexRecord
The landscape of schizophrenia on Twitter
Introduction: People with schizophrenia experience higher levels of stigma compared with other diseases. The analysis of social media content is a tool of great importance for understanding public opinion toward a particular topic. Objectives: The aim of this study is to analyse the content of social media posts on schizophrenia and the most prevalent sentiments towards this disorder. Methods: Tweets were retrieved using Twitter's Application Programming Interface and the keyword "schizophrenia". Parameters were set to allow the retrieval of recent and popular tweets on the topic, and no restrictions were made in terms of geolocation. Analysis of 8 basic emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust) was conducted automatically using a lexicon-based approach and the NRC Word-Emotion Association Lexicon. Results: Tweets on schizophrenia were heterogeneous. The most prevalent sentiments on the topic were mainly negative, namely anger, fear, sadness and disgust. Qualitative analyses of the most retweeted posts added insight into the nature of the public dialogue on schizophrenia. Conclusions: Analyses of social media content can add value to research on stigma toward psychiatric disorders. This tool is of growing importance in many fields, and further research in mental health can help the development of public health strategies to decrease the stigma towards psychiatric disorders.

Introduction: Challenged by the lack of collaboration between treatment sectors in psychiatric care in Germany, a legal basis for the implementation of Stationsäquivalente Behandlung (StäB), a programme for crisis resolution and home treatment (CRHT), was established in 2017. It offers intensive care to patients with severe mental illness in their own living environments, carried out by a team of diverse professionals.
Objectives: The present analysis is the first to evaluate the CRHT programme that was established in the greater Munich area in 2018. Methods: Qualitative and quantitative data were collected within the framework of a mixed-methods analysis. Records of all patients (N=139) included in the CRHT over a thirteen-month period ('18-'19) were examined regarding sociodemographic and clinical parameters and treatment data. A focus group with StäB employees (N=8) and individual interviews with patients (N=10) were conducted, then transcribed and analysed using thematic analysis. Results: 139 patients (74% female) were treated in 164 cases for 38 days on average. Main diagnoses were schizophrenic diseases (43%) and mood disorders (35%), with patients ranging from markedly to severely ill (mean CGI-S: 5.8); 9.4% were in the postpartum period. Qualitative analysis is still in progress. Preliminary results demonstrate positive responses to individual treatment and environmental integration, whereas frequently changing contacts and the logistical effort were viewed critically. Conclusions: Work is still in progress. We expect StäB to be an adequate alternative to inpatient treatment for women in the puerperium and a new opportunity for patients who need intensive treatment but refuse hospitalisation.

Introduction: Adverse childhood experiences (ACEs) are known risk factors for developing anxiety later in life. However, one's family relationship acts as a protective factor between ACEs and anxiety. Objectives: The present study examines the interaction between ACEs and family relationship and their effect on generalized anxiety (GA) amongst the youth population in Hong Kong. Methods: Participants aged 15-24 were recruited from a population-based epidemiological study in Hong Kong.
GA in the past two weeks was assessed using the GAD-7, while ACEs were measured using the childhood section of the Composite International Diagnostic Interview screening scales (CIDI-SC), encompassing parental psychopathology, physical, emotional and sexual abuse, and neglect before age 17. Family relationship was measured by the Brief Family Relationship Scale (BFRS). Linear regression and a two-way ANCOVA were conducted to examine the association between ACEs, family relationship and GA, adjusted for age and gender. Results: 633 (70.7%) out of 895 participants had any ACEs. ACEs significantly predicted GAD-7 scores (β=1.272, t=4.115, p<.001). The two-way ANCOVA showed a significant interaction effect of ACEs and family relationship on GA (F=4.398, p=.036): those who had any ACEs and a poorer family relationship scored higher on the GAD-7 (p<.001), whereas family relationship made no difference to GA for those without ACEs (p=.501). Conclusions: ACEs increase vulnerability to GA later in life. However, their effect on anxiety decreases when one has a better family relationship. This suggests a possible moderating role of family relationship in the development of GA among younger people. Keywords: youth population; adverse childhood experiences; family relationship; generalized anxiety

European Psychiatry S367, EPP0652: The landscape of schizophrenia on Twitter, by T. Rodrigues, N. Guimarães and J. Monteiro.

EPP0654 Workplace violence in a 20-year follow-up study of Norwegian physicians: the roles of gender, personality and stage of career
Introduction: Workplace violence (WPV) is a worldwide health problem with major individual and societal consequences. Previously identified predictors of WPV include working in psychiatry and work stress. Results and Conclusions: Higher rates of multiple threats and acts of violence were observed during early medical careers, with men at higher risk. Low levels of vulnerability traits (neuroticism) independently predicted the experience of violent threats. A cohort effect indicated a reduction in WPV (both threats and acts) in the younger cohort.
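The lexicon-based emotion tagging described in the schizophrenia-on-Twitter methods can be sketched as a simple word count against an emotion lexicon. The entries below are a tiny illustrative subset invented for the example; the real NRC Word-Emotion Association Lexicon maps thousands of English words to the eight basic emotions.

```python
from collections import Counter

# Toy subset of the NRC Word-Emotion Association Lexicon (illustrative only).
NRC_SUBSET = {
    "afraid": {"fear"},
    "terrible": {"anger", "disgust", "fear", "sadness"},
    "hope": {"anticipation", "joy", "trust"},
    "alone": {"sadness"},
}

def emotion_profile(tweets):
    """Count emotion-word hits across a collection of tweets."""
    counts = Counter()
    for tweet in tweets:
        for word in tweet.lower().split():
            for emotion in NRC_SUBSET.get(word.strip(".,!?"), ()):
                counts[emotion] += 1
    return counts

tweets = [
    "Living with schizophrenia can feel terrible and alone.",
    "New treatments give real hope to patients.",
]
profile = emotion_profile(tweets)
```

Real pipelines also lowercase, lemmatize and handle negation, but the core of a lexicon-based approach is exactly this word-to-emotion lookup.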
// +build e2e
/*
Copyright 2020 The Knative Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package ha
import (
"log"
"testing"
"time"
"knative.dev/pkg/ptr"
"knative.dev/pkg/system"
"knative.dev/pkg/test/logstream"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
pkgnet "knative.dev/pkg/network"
pkgTest "knative.dev/pkg/test"
"knative.dev/serving/pkg/apis/autoscaling"
revisionresourcenames "knative.dev/serving/pkg/reconciler/revision/resources/names"
rtesting "knative.dev/serving/pkg/testing/v1"
"knative.dev/serving/test"
"knative.dev/serving/test/e2e"
)
const (
activatorDeploymentName = "activator"
activatorLabel = "app=activator"
minProbes = 400 // We want to send at least 400 requests.
)
func TestActivatorHAGraceful(t *testing.T) {
t.Skip("TODO(8066): This was added too optimistically. Needs debugging and triaging.")
testActivatorHA(t, nil, 1)
}
func TestActivatorHANonGraceful(t *testing.T) {
// For non-graceful tests, we want the pod to receive a SIGKILL straight away.
testActivatorHA(t, ptr.Int64(0), 0.90)
}
// The Activator does not have leader election enabled.
// The test ensures that stopping one of the activator pods doesn't affect user applications.
// One service is probed during activator restarts and another service is used for testing
// that we can scale from zero after activator restart.
func testActivatorHA(t *testing.T, gracePeriod *int64, slo float64) {
clients := e2e.Setup(t)
cancel := logstream.Start(t)
defer cancel()
podDeleteOptions := &metav1.DeleteOptions{GracePeriodSeconds: gracePeriod}
if err := pkgTest.WaitForDeploymentScale(clients.KubeClient, activatorDeploymentName, system.Namespace(), haReplicas); err != nil {
t.Fatalf("Deployment %s not scaled to %d: %v", activatorDeploymentName, haReplicas, err)
}
activators, err := clients.KubeClient.Kube.CoreV1().Pods(system.Namespace()).List(metav1.ListOptions{
LabelSelector: activatorLabel,
})
if err != nil {
t.Fatal("Failed to get activator pods:", err)
}
// Create first service that we will continually probe during activator restart.
names, resources := createPizzaPlanetService(t,
rtesting.WithConfigAnnotations(map[string]string{
autoscaling.MinScaleAnnotationKey: "1", // Make sure we don't scale to zero during the test.
autoscaling.TargetBurstCapacityKey: "-1", // Make sure all requests go through the activator.
}),
)
test.EnsureTearDown(t, clients, names)
// Create second service that will be scaled to zero and after stopping the activator we'll
// ensure it can be scaled back from zero.
namesScaleToZero, resourcesScaleToZero := createPizzaPlanetService(t,
rtesting.WithConfigAnnotations(map[string]string{
autoscaling.WindowAnnotationKey: autoscaling.WindowMin.String(), // Make sure we scale to zero quickly.
autoscaling.TargetBurstCapacityKey: "-1", // Make sure all requests go through the activator.
}),
)
test.EnsureTearDown(t, clients, namesScaleToZero)
t.Logf("Waiting for %s to scale to zero", namesScaleToZero.Revision)
if err := e2e.WaitForScaleToZero(t, revisionresourcenames.Deployment(resourcesScaleToZero.Revision), clients); err != nil {
t.Fatal("Failed to scale to zero:", err)
}
t.Log("Starting prober")
prober := test.NewProberManager(log.Printf, clients, minProbes)
prober.Spawn(resources.Service.Status.URL.URL())
defer assertSLO(t, prober, slo)
for i, activator := range activators.Items {
t.Logf("Deleting activator%d (%s)", i, activator.Name)
if err := clients.KubeClient.Kube.CoreV1().Pods(system.Namespace()).Delete(activator.Name, podDeleteOptions); err != nil {
t.Fatalf("Failed to delete pod %s: %v", activator.Name, err)
}
// Wait for the killed activator to disappear from the knative service's endpoints.
if err := waitForEndpointsState(clients.KubeClient, resourcesScaleToZero.Revision.Name, test.ServingNamespace, readyEndpointsDoNotContain(activator.Status.PodIP)); err != nil {
t.Fatal("Failed to wait for the service to update its endpoints:", err)
}
if gracePeriod != nil && *gracePeriod == 0 {
t.Log("Allow the network to notice the missing endpoint")
time.Sleep(pkgnet.DefaultDrainTimeout)
}
t.Log("Test if service still works")
assertServiceEventuallyWorks(t, clients, namesScaleToZero, resourcesScaleToZero.Service.Status.URL.URL(), test.PizzaPlanetText1)
t.Logf("Wait for activator%d (%s) to vanish", i, activator.Name)
if err := pkgTest.WaitForPodDeleted(clients.KubeClient, activator.Name, system.Namespace()); err != nil {
t.Fatalf("Did not observe %s to actually be deleted: %v", activator.Name, err)
}
if err := pkgTest.WaitForServiceEndpoints(clients.KubeClient, resourcesScaleToZero.Revision.Name, test.ServingNamespace, haReplicas); err != nil {
t.Fatalf("Deployment %s failed to scale up: %v", activatorDeploymentName, err)
}
if gracePeriod != nil && *gracePeriod == 0 {
t.Log("Allow the network to notice the new endpoint")
time.Sleep(pkgnet.DefaultDrainTimeout)
}
}
}
func assertSLO(t *testing.T, p test.Prober, slo float64) {
t.Helper()
if err := p.Stop(); err != nil {
t.Error("Failed to stop prober:", err)
}
if err := test.CheckSLO(slo, t.Name(), p); err != nil {
t.Error("CheckSLO failed:", err)
}
}
What happened: According to a Fortune magazine piece, the micro-blogging service turned down two billion-dollar offers from Facebook and Google. Many, including Twitter's board, were puzzled by the decision. However, since then, Twitter has raised $800 million in what was the largest venture round in history. Despite its revenue being only in the tens of millions and lack of a strong business model, the four-year-old company is now valued at $8 billion. Sources say it's preparing to go public in 2013.
/*****************************************************************************
* ts_decoders.c: TS Demux custom tables decoders
*****************************************************************************
* Copyright (C) 2016 VLC authors and VideoLAN
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation; either version 2.1 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*****************************************************************************/
#ifdef HAVE_CONFIG_H
# include "config.h"
#endif
#include <vlc_common.h>
#ifndef _DVBPSI_DVBPSI_H_
#include <dvbpsi/dvbpsi.h>
#endif
#ifndef _DVBPSI_DEMUX_H_
#include <dvbpsi/demux.h>
#endif
#include <dvbpsi/psi.h>
#include "ts_decoders.h"
typedef struct
{
DVBPSI_DECODER_COMMON
ts_dvbpsi_rawsections_callback_t pf_callback;
void * p_cb_data;
} ts_dvbpsi_rawtable_decoder_t;
static void ts_dvbpsi_RawSubDecoderGatherSections( dvbpsi_t *p_dvbpsi,
dvbpsi_decoder_t *p_decoder,
dvbpsi_psi_section_t * p_section )
{
dvbpsi_demux_t *p_demux = (dvbpsi_demux_t *) p_dvbpsi->p_decoder;
ts_dvbpsi_rawtable_decoder_t *p_tabledec = (ts_dvbpsi_rawtable_decoder_t*)p_decoder;
if ( !p_tabledec )
{
dvbpsi_DeletePSISections( p_section );
return;
}
if ( p_demux->b_discontinuity )
{
dvbpsi_decoder_reset( DVBPSI_DECODER(p_decoder), true );
p_tabledec->b_discontinuity = false;
p_demux->b_discontinuity = false;
}
else if( p_decoder->i_last_section_number != p_section->i_last_number )
{
dvbpsi_decoder_reset( DVBPSI_DECODER(p_decoder), true );
}
/* Add to linked list of sections */
(void) dvbpsi_decoder_psi_section_add( DVBPSI_DECODER(p_decoder), p_section );
p_decoder->i_last_section_number = p_section->i_last_number;
/* Check if we have all the sections */
if ( dvbpsi_decoder_psi_sections_completed( DVBPSI_DECODER(p_decoder) ) )
{
p_tabledec->b_current_valid = true;
p_tabledec->pf_callback( p_dvbpsi,
p_decoder->p_sections,
p_tabledec->p_cb_data );
/* Delete sections */
dvbpsi_decoder_reset( DVBPSI_DECODER(p_decoder), false );
}
}
static void ts_dvbpsi_RawDecoderGatherSections( dvbpsi_t *p_dvbpsi,
dvbpsi_psi_section_t * p_section )
{
ts_dvbpsi_RawSubDecoderGatherSections( p_dvbpsi, p_dvbpsi->p_decoder, p_section );
}
void ts_dvbpsi_DetachRawSubDecoder( dvbpsi_t *p_dvbpsi, uint8_t i_table_id, uint16_t i_extension )
{
dvbpsi_demux_t *p_demux = (dvbpsi_demux_t *) p_dvbpsi->p_decoder;
dvbpsi_demux_subdec_t* p_subdec;
p_subdec = dvbpsi_demuxGetSubDec( p_demux, i_table_id, i_extension );
if ( p_subdec == NULL || !p_subdec->p_decoder )
return;
dvbpsi_DetachDemuxSubDecoder( p_demux, p_subdec );
dvbpsi_DeleteDemuxSubDecoder( p_subdec );
}
bool ts_dvbpsi_AttachRawSubDecoder( dvbpsi_t* p_dvbpsi,
uint8_t i_table_id, uint16_t i_extension,
ts_dvbpsi_rawsections_callback_t pf_callback,
void *p_cb_data )
{
dvbpsi_demux_t *p_demux = (dvbpsi_demux_t*)p_dvbpsi->p_decoder;
if ( dvbpsi_demuxGetSubDec(p_demux, i_table_id, i_extension) )
return false;
ts_dvbpsi_rawtable_decoder_t *p_decoder;
p_decoder = (ts_dvbpsi_rawtable_decoder_t *) dvbpsi_decoder_new( NULL, 0, true, sizeof(*p_decoder) );
if ( p_decoder == NULL )
return false;
/* subtable decoder configuration */
dvbpsi_demux_subdec_t* p_subdec;
p_subdec = dvbpsi_NewDemuxSubDecoder( i_table_id, i_extension,
ts_dvbpsi_DetachRawSubDecoder,
ts_dvbpsi_RawSubDecoderGatherSections,
DVBPSI_DECODER(p_decoder) );
if (p_subdec == NULL)
{
dvbpsi_decoder_delete( DVBPSI_DECODER(p_decoder) );
return false;
}
/* Attach the subtable decoder to the demux */
dvbpsi_AttachDemuxSubDecoder( p_demux, p_subdec );
p_decoder->pf_callback = pf_callback;
p_decoder->p_cb_data = p_cb_data;
return true;
}
bool ts_dvbpsi_AttachRawDecoder( dvbpsi_t* p_dvbpsi,
ts_dvbpsi_rawsections_callback_t pf_callback,
void *p_cb_data )
{
if ( p_dvbpsi->p_decoder )
return false;
ts_dvbpsi_rawtable_decoder_t *p_decoder = dvbpsi_decoder_new( NULL, 4096, true, sizeof(*p_decoder) );
if ( p_decoder == NULL )
return false;
p_dvbpsi->p_decoder = DVBPSI_DECODER(p_decoder);
p_decoder->pf_gather = ts_dvbpsi_RawDecoderGatherSections;
p_decoder->pf_callback = pf_callback;
p_decoder->p_cb_data = p_cb_data;
return true;
}
void ts_dvbpsi_DetachRawDecoder( dvbpsi_t *p_dvbpsi )
{
if( dvbpsi_decoder_present( p_dvbpsi ) )
dvbpsi_decoder_delete( p_dvbpsi->p_decoder );
p_dvbpsi->p_decoder = NULL;
}
The April 2014 deadline for compensation claims against BP over its U.S. oil spill is almost certain to be extended, say both sides of the legal settlement that governs payouts, possibly into 2015.
The last date for claims, part of the oil company’s settlement last year with individual and business claimants, was always potentially moveable, but like the open-ended nature of its cost that hit home earlier this year, the indefinite extendibility may not have been fully appreciated by long suffering investors, analysts say.
Extension might not make much difference to the total number and cost of relevant claims – $9.6 billion at the end of June and rising – but it would drag on the company’s hopes of leaving behind the wider Gulf of Mexico oil spill litigation saga that has forced it to sell a fifth of the company to fund a $42.4 billion bill so far.
BP long ago slipped from second place among global oil companies to a distant fourth as the costs of the 2010 oil spill shrank its earning power. In the five months since the settlement re-emerged as a thorn in its side, its shares have slipped by 4.5 percent, a performance similar to that of bigger rival Shell, despite a price-supportive March launch of an $8 billion share buyback.
“I think there is little doubt that the escalation in the dispute over the fairness or otherwise of business claims has cast a cloud over BP’s share price performance in recent months,” said Neill Morton, analyst at Investec in London.
The terms of the 1,033-page PSC Settlement, signed last year by BP and the Plaintiff’s Steering Committee (PSC), show that the most widely reported deadline, April 22, 2014 is not a fixed date by which claims must be made. It is in fact one of two possible deadlines – the later of which will apply.
The other, now more realistic one, is set as six months after an ‘effective date’, an as yet unknown day by which all legal appeals about the settlement’s validity must have been resolved.
Since the settlement was signed, groups of rebel claimants have filed legal briefs against it. BP has also appealed against the way the settlement is being interpreted.
Lawyers say there is almost no chance these issues will be resolved by Oct. 22, two months from now and six months before April 22, 2014, and therefore the last date on which April 22 could remain the applicable deadline.
So starting from Oct. 22, the actual deadline after which no more claims will be considered will always be at least six months away, either until all outstanding appeals are resolved, or until all parties agree a different deadline.
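The later-of-two-deadlines mechanism described above can be sketched as a small date computation. This is an illustration of the rule as reported, not the settlement's legal text; the six-month offset is approximated as 182 days and the sample effective dates are assumptions.

```python
from datetime import date, timedelta

# Claims close at the LATER of the fixed date (April 22, 2014) and
# six months after the 'effective date' on which all appeals resolve.
FIXED_DEADLINE = date(2014, 4, 22)

def claims_deadline(effective_date):
    """Return the later of the fixed deadline and effective date + ~6 months."""
    return max(FIXED_DEADLINE, effective_date + timedelta(days=182))

# If appeals drag into early 2015, the deadline moves well into 2015.
late_resolution = claims_deadline(date(2015, 1, 15))
# If appeals had resolved early, the fixed date would have applied.
early_resolution = claims_deadline(date(2013, 1, 1))
```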
BP Chief Executive Bob Dudley told reporters on July 30 the deadline would probably move to a later date. BP declined to comment for this story.
That scenario would move the deadline well into 2015.
The 2010 rig explosion killed 11 workers. The mile (1.6 km)-deep Macondo oil well then poured over 4 million barrels of oil into the Gulf of Mexico, fouling coastlines from Texas to Florida.
BP says the PSC Settlement is being misinterpreted to pay out on "fictitious" claims. It awaits an appeal court ruling after a one-day hearing on July 8. If the company loses, it is expected to appeal to the Supreme Court.
Groups of rebel claimants have meanwhile also challenged the settlement, arguing it is not favourable enough to them.
The $1.4 billion increase in BP’s PSC Settlement bill in the second quarter to June was equivalent to more than half its net profit for the period. Having almost emptied a fund set up to pay for it and other costs, the company has said it will take future costs out of profits.
The business and personal compensation claims the settlement covers are just one part of the legal process.
This year, as the bill for the United States’ worst offshore environmental disaster has mounted and efforts to reduce investor uncertainty through negotiation have failed, BP has hardened its legal stance.
Dudley has said BP is “digging in” and will “play it long” for the sake of shareholders, who have already lost $5 billion a year worth of cash flow after asset sales to pay for clean-up, compensation, fines and other costs.
BP has settled criminal charges via a guilty plea and fine, but this year it gave up attempts to settle on the civil charges that could add tens of billions more to its bill.
The second phase of a civil trial in New Orleans under judge Carl Barbier is due to start next month, with federal charges brought under the Clean Water Act and damage claims sought by Gulf Coast states at stake.
In July, BP’s case in that trial was seen by legal experts to have won an edge when Halliburton, the company that did the cement work on the Macondo well, pleaded guilty to destroying evidence that might support BP’s case for being no more than negligent, as opposed to grossly negligent.
But BP has also been banned from new federal government contracts because of its criminal status. It has launched a legal challenge to that ban as well.
#!/usr/bin/python3
while True:
    try:
        # Pull the file name from the user.
        name = input('Enter a file name: ')
        with open(name, 'w') as myfile:
            myfile.write('well done\n')
    except OSError:
        # A bare except would also swallow KeyboardInterrupt; catch only I/O errors.
        print("EH EH EH, you didn't say the magic word")
    else:
        print("Yeah, you're not an idiot.")
        break
// libs/core/src/lib/busy-indicator/busy-indicator-extended/busy-indicator-extended.directive.ts
import { AfterContentInit, ContentChild, Directive, ElementRef } from '@angular/core';
import { BusyIndicatorComponent } from '../busy-indicator.component';
const messageToastClass = 'fd-message-toast';
@Directive({
// eslint-disable-next-line @angular-eslint/directive-selector
selector: '[fd-busy-indicator-extended]'
})
export class BusyIndicatorExtendedDirective implements AfterContentInit {
/** @hidden */
@ContentChild(BusyIndicatorComponent)
busyIndicator: BusyIndicatorComponent;
/** @hidden */
constructor(private _eleRef: ElementRef) {}
/** @hidden */
ngAfterContentInit(): void {
this._appendCssToParent();
}
/** @hidden */
elementRef(): ElementRef<HTMLElement> {
return this._eleRef;
}
/** @hidden */
private _appendCssToParent(): void {
const hasLabel = this.busyIndicator.label;
if (!hasLabel) {
return;
}
const classList = this.elementRef().nativeElement.parentElement?.classList;
if (classList) {
classList.add('fd-busy-indicator-extended');
if (classList.contains(messageToastClass)) {
classList.add('fd-busy-indicator-extended--message-toast');
}
}
}
}
Radiologists' Diagnostic Performance in Differentiation of Rickets and Classic Metaphyseal Lesions on Radiographs: A Multicenter Study. Background: Despite evidence supporting specificity of classic metaphyseal lesions (CML) for diagnosis of child abuse, some medicolegal practitioners claim that CMLs result from rickets rather than trauma. Objective: To evaluate radiologists' diagnostic performance in differentiating rickets and CML on radiographs. Methods: This retrospective 7-center study included children <2 years old who underwent knee radiographs from January 2017 to December 2018 and who either had rickets (25-hydroxy vitamin D <20 ng/mL and abnormal knee radiographs) or knee CMLs and a diagnosis of child abuse from a child abuse pediatrician. Additional injuries were identified through medical record review. Radiographs were cropped and zoomed to present similar depictions of the knee. Eight radiologists independently interpreted radiographs for diagnoses of rickets or CML, rating confidence levels and recording associated radiographic signs. Results: Seventy children (27 girls, 43 boys) had rickets; 77 children (28 girls, 25 boys) had CML. Children with CML were younger than those with rickets (mean age, 3.7 vs 14.2 months, p<.001; 89.6% vs 5.7% were <6 months old; 3.9% vs 65.7% were >1 year old). All children with CML had additional injuries, other than the knee CMLs, identified on physical examination or other imaging examinations. Radiologists showed almost perfect agreement for moderate- or high-confidence interpretations for rickets (κ=0.92) and CML (κ=0.89). Across radiologists, estimated sensitivity, specificity, and accuracy for CMLs, for moderate- or high-confidence interpretations, were 95.1%, 97.0%, and 96.0%. Accuracy did not differ significantly between pediatric and non-pediatric radiologists (p=.20) or between less-experienced and more-experienced radiologists (p=.57).
Loss of the provisional metaphyseal zone of calcification, cupping, fraying, and physeal widening were more common in rickets than CMLs, being detected in <4% of children with CML; corner fracture, bucket-handle fracture, subphyseal lucency, deformed corner, metaphyseal irregularity, and subperiosteal new bone formation were more common in CML than rickets, being detected in <4% of children with rickets. Conclusion: Radiologists had high interobserver agreement and high diagnostic performance for differentiating rickets and CMLs. Recognition that CMLs mostly occur in children <6 months old and are unusual in children >1 year old may assist interpretations. Clinical Impact: Rickets and CMLs exhibit distinct radiographic signs, and radiologists can reliably differentiate these two entities.
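The sensitivity, specificity and accuracy figures quoted above follow from a standard confusion-matrix computation. The counts below are illustrative only, not the study's raw reader data.

```python
# Standard confusion-matrix metrics of the kind reported in the study.
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true positives among all CML cases
    specificity = tn / (tn + fp)            # true negatives among all rickets cases
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Example: 100 CML and 100 rickets interpretations with a handful of errors.
sens, spec, acc = diagnostic_metrics(tp=95, fp=3, fn=5, tn=97)
```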
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
package org.elasticsearch.xpack.core.security.action.user;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.xpack.core.security.authc.Authentication;
import org.elasticsearch.xpack.core.security.user.User;
import java.io.IOException;
public class AuthenticateResponse extends ActionResponse {
private Authentication authentication;
public AuthenticateResponse(StreamInput in) throws IOException {
super(in);
if (in.getVersion().before(Version.V_6_6_0)) {
final User user = User.readFrom(in);
final Authentication.RealmRef unknownRealm = new Authentication.RealmRef("__unknown", "__unknown", "__unknown");
authentication = new Authentication(user, unknownRealm, unknownRealm);
} else {
authentication = new Authentication(in);
}
}
public AuthenticateResponse(Authentication authentication){
this.authentication = authentication;
}
public Authentication authentication() {
return authentication;
}
@Override
public void writeTo(StreamOutput out) throws IOException {
if (out.getVersion().before(Version.V_6_6_0)) {
User.writeTo(authentication.getUser(), out);
} else {
authentication.writeTo(out);
}
}
}
Estimating the three dimensional structure of the harvard forest using a database driven multi-modal remote sensing technique The global forest covers over 30% of the Earth's landmass and plays a critical role in a number of global systems including the carbon cycle. Developing methods to track the carbon flow into and out of forests is necessary to gain a complete understanding of the global carbon cycle and, in turn, its effect on the climate. Remote sensing technologies such as satellite based passive optical remote sensing and synthetic aperture radar are uniquely capable of interrogating forests. Using data fusion or synergy, we present a novel approach to estimating forest aboveground biomass and mean canopy height in heterogeneous forest regions with minimal required ancillary ground measurements. We present a dynamic database driven approach wherein a region of study is divided into stands and each stand is compared to a set of simulated forest stands. The simulated stands each contain a set of simulated fractal trees with distributions based on observed ranges within the area of study. Each stand within the region of study is examined and compared to the simulated forest stands using a measure of similarity. If a simulated stand is not found to be similar to the measured stand, an iterative process is employed wherein the most similar simulated stand is dynamically modified including its species composition and the mean canopy height and above-ground biomass within each species until a similar simulated stand is constructed. The simulated stand, regardless of if it existed previously in the set of simulated stands or if it needed to be dynamically generated, is considered a reasonable representation for the measured stand under test and its height and biomass are reported. This approach relies heavily on our sensor simulators, including our fractal-based tree geometry generator, as well as SAR, IfSAR, LiDAR, and Optical simulators. 
We propose to validate our approach in the Harvard Forest, a heavily studied region in central Massachusetts. |
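The iterative database-driven matching loop described above can be sketched abstractly; the similarity measure, modification step, and threshold below are illustrative placeholders, not values taken from the study:

```python
def match_stand(measured, simulated_stands, similarity, modify,
                threshold=0.9, max_iter=50):
    """Return a simulated stand judged similar to the measured one.

    Starts from the most similar stand in the database; if none passes the
    similarity threshold, iteratively modifies the best candidate (in the
    paper: its species mix, mean canopy height, and aboveground biomass)
    until a similar stand is constructed or the iteration budget runs out.
    """
    best = max(simulated_stands, key=lambda s: similarity(measured, s))
    for _ in range(max_iter):
        if similarity(measured, best) >= threshold:
            break  # accept: report this stand's height and biomass
        best = modify(best, measured)  # dynamically adjust the candidate
    return best
```

In the paper's workflow, `similarity` would compare multi-modal signatures (SAR, IfSAR, LiDAR, optical) simulated for each stand against the measured remote sensing data.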
Read the side effects of Topical Clarithromycin as described in the medical literature. In case of any doubt, consult your doctor or pharmacist.
Dryness, feeling of warmth, irritation, itching, peeling, redness, stinging and swelling of the skin.
* Wash your hands thoroughly after applying this gel.
Acne vulgaris, popularly known as 'pimples' or 'zits', is a skin condition affecting most teenagers. |
#[doc = "Reader of register PE7"]
pub type R = crate::R<u8, super::PE7>;
#[doc = "Writer for register PE7"]
pub type W = crate::W<u8, super::PE7>;
#[doc = "Register PE7 `reset()`'s with value 0"]
impl crate::ResetValue for super::PE7 {
type Type = u8;
#[inline(always)]
fn reset_value() -> Self::Type {
0
}
}
#[doc = "Wakeup Pin Enable For LLWU_P24\n\nValue on reset: 0"]
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum WUPE24_A {
#[doc = "0: External input pin disabled as wakeup input"]
_00,
#[doc = "1: External input pin enabled with rising edge detection"]
_01,
#[doc = "2: External input pin enabled with falling edge detection"]
_10,
#[doc = "3: External input pin enabled with any change detection"]
_11,
}
impl From<WUPE24_A> for u8 {
#[inline(always)]
fn from(variant: WUPE24_A) -> Self {
match variant {
WUPE24_A::_00 => 0,
WUPE24_A::_01 => 1,
WUPE24_A::_10 => 2,
WUPE24_A::_11 => 3,
}
}
}
#[doc = "Reader of field `WUPE24`"]
pub type WUPE24_R = crate::R<u8, WUPE24_A>;
impl WUPE24_R {
#[doc = r"Get enumerated values variant"]
#[inline(always)]
pub fn variant(&self) -> WUPE24_A {
match self.bits {
0 => WUPE24_A::_00,
1 => WUPE24_A::_01,
2 => WUPE24_A::_10,
3 => WUPE24_A::_11,
_ => unreachable!(),
}
}
#[doc = "Checks if the value of the field is `_00`"]
#[inline(always)]
pub fn is_00(&self) -> bool {
*self == WUPE24_A::_00
}
#[doc = "Checks if the value of the field is `_01`"]
#[inline(always)]
pub fn is_01(&self) -> bool {
*self == WUPE24_A::_01
}
#[doc = "Checks if the value of the field is `_10`"]
#[inline(always)]
pub fn is_10(&self) -> bool {
*self == WUPE24_A::_10
}
#[doc = "Checks if the value of the field is `_11`"]
#[inline(always)]
pub fn is_11(&self) -> bool {
*self == WUPE24_A::_11
}
}
#[doc = "Write proxy for field `WUPE24`"]
pub struct WUPE24_W<'a> {
w: &'a mut W,
}
impl<'a> WUPE24_W<'a> {
#[doc = r"Writes `variant` to the field"]
#[inline(always)]
pub fn variant(self, variant: WUPE24_A) -> &'a mut W {
{
self.bits(variant.into())
}
}
#[doc = "External input pin disabled as wakeup input"]
#[inline(always)]
pub fn _00(self) -> &'a mut W {
self.variant(WUPE24_A::_00)
}
#[doc = "External input pin enabled with rising edge detection"]
#[inline(always)]
pub fn _01(self) -> &'a mut W {
self.variant(WUPE24_A::_01)
}
#[doc = "External input pin enabled with falling edge detection"]
#[inline(always)]
pub fn _10(self) -> &'a mut W {
self.variant(WUPE24_A::_10)
}
#[doc = "External input pin enabled with any change detection"]
#[inline(always)]
pub fn _11(self) -> &'a mut W {
self.variant(WUPE24_A::_11)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bits(self, value: u8) -> &'a mut W {
self.w.bits = (self.w.bits & !0x03) | ((value as u8) & 0x03);
self.w
}
}
#[doc = "Wakeup Pin Enable For LLWU_P25\n\nValue on reset: 0"]
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum WUPE25_A {
#[doc = "0: External input pin disabled as wakeup input"]
_00,
#[doc = "1: External input pin enabled with rising edge detection"]
_01,
#[doc = "2: External input pin enabled with falling edge detection"]
_10,
#[doc = "3: External input pin enabled with any change detection"]
_11,
}
impl From<WUPE25_A> for u8 {
#[inline(always)]
fn from(variant: WUPE25_A) -> Self {
match variant {
WUPE25_A::_00 => 0,
WUPE25_A::_01 => 1,
WUPE25_A::_10 => 2,
WUPE25_A::_11 => 3,
}
}
}
#[doc = "Reader of field `WUPE25`"]
pub type WUPE25_R = crate::R<u8, WUPE25_A>;
impl WUPE25_R {
#[doc = r"Get enumerated values variant"]
#[inline(always)]
pub fn variant(&self) -> WUPE25_A {
match self.bits {
0 => WUPE25_A::_00,
1 => WUPE25_A::_01,
2 => WUPE25_A::_10,
3 => WUPE25_A::_11,
_ => unreachable!(),
}
}
#[doc = "Checks if the value of the field is `_00`"]
#[inline(always)]
pub fn is_00(&self) -> bool {
*self == WUPE25_A::_00
}
#[doc = "Checks if the value of the field is `_01`"]
#[inline(always)]
pub fn is_01(&self) -> bool {
*self == WUPE25_A::_01
}
#[doc = "Checks if the value of the field is `_10`"]
#[inline(always)]
pub fn is_10(&self) -> bool {
*self == WUPE25_A::_10
}
#[doc = "Checks if the value of the field is `_11`"]
#[inline(always)]
pub fn is_11(&self) -> bool {
*self == WUPE25_A::_11
}
}
#[doc = "Write proxy for field `WUPE25`"]
pub struct WUPE25_W<'a> {
w: &'a mut W,
}
impl<'a> WUPE25_W<'a> {
#[doc = r"Writes `variant` to the field"]
#[inline(always)]
pub fn variant(self, variant: WUPE25_A) -> &'a mut W {
{
self.bits(variant.into())
}
}
#[doc = "External input pin disabled as wakeup input"]
#[inline(always)]
pub fn _00(self) -> &'a mut W {
self.variant(WUPE25_A::_00)
}
#[doc = "External input pin enabled with rising edge detection"]
#[inline(always)]
pub fn _01(self) -> &'a mut W {
self.variant(WUPE25_A::_01)
}
#[doc = "External input pin enabled with falling edge detection"]
#[inline(always)]
pub fn _10(self) -> &'a mut W {
self.variant(WUPE25_A::_10)
}
#[doc = "External input pin enabled with any change detection"]
#[inline(always)]
pub fn _11(self) -> &'a mut W {
self.variant(WUPE25_A::_11)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bits(self, value: u8) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x03 << 2)) | (((value as u8) & 0x03) << 2);
self.w
}
}
#[doc = "Wakeup Pin Enable For LLWU_P26\n\nValue on reset: 0"]
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum WUPE26_A {
#[doc = "0: External input pin disabled as wakeup input"]
_00,
#[doc = "1: External input pin enabled with rising edge detection"]
_01,
#[doc = "2: External input pin enabled with falling edge detection"]
_10,
#[doc = "3: External input pin enabled with any change detection"]
_11,
}
impl From<WUPE26_A> for u8 {
#[inline(always)]
fn from(variant: WUPE26_A) -> Self {
match variant {
WUPE26_A::_00 => 0,
WUPE26_A::_01 => 1,
WUPE26_A::_10 => 2,
WUPE26_A::_11 => 3,
}
}
}
#[doc = "Reader of field `WUPE26`"]
pub type WUPE26_R = crate::R<u8, WUPE26_A>;
impl WUPE26_R {
#[doc = r"Get enumerated values variant"]
#[inline(always)]
pub fn variant(&self) -> WUPE26_A {
match self.bits {
0 => WUPE26_A::_00,
1 => WUPE26_A::_01,
2 => WUPE26_A::_10,
3 => WUPE26_A::_11,
_ => unreachable!(),
}
}
#[doc = "Checks if the value of the field is `_00`"]
#[inline(always)]
pub fn is_00(&self) -> bool {
*self == WUPE26_A::_00
}
#[doc = "Checks if the value of the field is `_01`"]
#[inline(always)]
pub fn is_01(&self) -> bool {
*self == WUPE26_A::_01
}
#[doc = "Checks if the value of the field is `_10`"]
#[inline(always)]
pub fn is_10(&self) -> bool {
*self == WUPE26_A::_10
}
#[doc = "Checks if the value of the field is `_11`"]
#[inline(always)]
pub fn is_11(&self) -> bool {
*self == WUPE26_A::_11
}
}
#[doc = "Write proxy for field `WUPE26`"]
pub struct WUPE26_W<'a> {
w: &'a mut W,
}
impl<'a> WUPE26_W<'a> {
#[doc = r"Writes `variant` to the field"]
#[inline(always)]
pub fn variant(self, variant: WUPE26_A) -> &'a mut W {
{
self.bits(variant.into())
}
}
#[doc = "External input pin disabled as wakeup input"]
#[inline(always)]
pub fn _00(self) -> &'a mut W {
self.variant(WUPE26_A::_00)
}
#[doc = "External input pin enabled with rising edge detection"]
#[inline(always)]
pub fn _01(self) -> &'a mut W {
self.variant(WUPE26_A::_01)
}
#[doc = "External input pin enabled with falling edge detection"]
#[inline(always)]
pub fn _10(self) -> &'a mut W {
self.variant(WUPE26_A::_10)
}
#[doc = "External input pin enabled with any change detection"]
#[inline(always)]
pub fn _11(self) -> &'a mut W {
self.variant(WUPE26_A::_11)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bits(self, value: u8) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x03 << 4)) | (((value as u8) & 0x03) << 4);
self.w
}
}
#[doc = "Wakeup Pin Enable For LLWU_P27\n\nValue on reset: 0"]
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum WUPE27_A {
#[doc = "0: External input pin disabled as wakeup input"]
_00,
#[doc = "1: External input pin enabled with rising edge detection"]
_01,
#[doc = "2: External input pin enabled with falling edge detection"]
_10,
#[doc = "3: External input pin enabled with any change detection"]
_11,
}
impl From<WUPE27_A> for u8 {
#[inline(always)]
fn from(variant: WUPE27_A) -> Self {
match variant {
WUPE27_A::_00 => 0,
WUPE27_A::_01 => 1,
WUPE27_A::_10 => 2,
WUPE27_A::_11 => 3,
}
}
}
#[doc = "Reader of field `WUPE27`"]
pub type WUPE27_R = crate::R<u8, WUPE27_A>;
impl WUPE27_R {
#[doc = r"Get enumerated values variant"]
#[inline(always)]
pub fn variant(&self) -> WUPE27_A {
match self.bits {
0 => WUPE27_A::_00,
1 => WUPE27_A::_01,
2 => WUPE27_A::_10,
3 => WUPE27_A::_11,
_ => unreachable!(),
}
}
#[doc = "Checks if the value of the field is `_00`"]
#[inline(always)]
pub fn is_00(&self) -> bool {
*self == WUPE27_A::_00
}
#[doc = "Checks if the value of the field is `_01`"]
#[inline(always)]
pub fn is_01(&self) -> bool {
*self == WUPE27_A::_01
}
#[doc = "Checks if the value of the field is `_10`"]
#[inline(always)]
pub fn is_10(&self) -> bool {
*self == WUPE27_A::_10
}
#[doc = "Checks if the value of the field is `_11`"]
#[inline(always)]
pub fn is_11(&self) -> bool {
*self == WUPE27_A::_11
}
}
#[doc = "Write proxy for field `WUPE27`"]
pub struct WUPE27_W<'a> {
w: &'a mut W,
}
impl<'a> WUPE27_W<'a> {
#[doc = r"Writes `variant` to the field"]
#[inline(always)]
pub fn variant(self, variant: WUPE27_A) -> &'a mut W {
{
self.bits(variant.into())
}
}
#[doc = "External input pin disabled as wakeup input"]
#[inline(always)]
pub fn _00(self) -> &'a mut W {
self.variant(WUPE27_A::_00)
}
#[doc = "External input pin enabled with rising edge detection"]
#[inline(always)]
pub fn _01(self) -> &'a mut W {
self.variant(WUPE27_A::_01)
}
#[doc = "External input pin enabled with falling edge detection"]
#[inline(always)]
pub fn _10(self) -> &'a mut W {
self.variant(WUPE27_A::_10)
}
#[doc = "External input pin enabled with any change detection"]
#[inline(always)]
pub fn _11(self) -> &'a mut W {
self.variant(WUPE27_A::_11)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bits(self, value: u8) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x03 << 6)) | (((value as u8) & 0x03) << 6);
self.w
}
}
impl R {
#[doc = "Bits 0:1 - Wakeup Pin Enable For LLWU_P24"]
#[inline(always)]
pub fn wupe24(&self) -> WUPE24_R {
WUPE24_R::new((self.bits & 0x03) as u8)
}
#[doc = "Bits 2:3 - Wakeup Pin Enable For LLWU_P25"]
#[inline(always)]
pub fn wupe25(&self) -> WUPE25_R {
WUPE25_R::new(((self.bits >> 2) & 0x03) as u8)
}
#[doc = "Bits 4:5 - Wakeup Pin Enable For LLWU_P26"]
#[inline(always)]
pub fn wupe26(&self) -> WUPE26_R {
WUPE26_R::new(((self.bits >> 4) & 0x03) as u8)
}
#[doc = "Bits 6:7 - Wakeup Pin Enable For LLWU_P27"]
#[inline(always)]
pub fn wupe27(&self) -> WUPE27_R {
WUPE27_R::new(((self.bits >> 6) & 0x03) as u8)
}
}
impl W {
#[doc = "Bits 0:1 - Wakeup Pin Enable For LLWU_P24"]
#[inline(always)]
pub fn wupe24(&mut self) -> WUPE24_W {
WUPE24_W { w: self }
}
#[doc = "Bits 2:3 - Wakeup Pin Enable For LLWU_P25"]
#[inline(always)]
pub fn wupe25(&mut self) -> WUPE25_W {
WUPE25_W { w: self }
}
#[doc = "Bits 4:5 - Wakeup Pin Enable For LLWU_P26"]
#[inline(always)]
pub fn wupe26(&mut self) -> WUPE26_W {
WUPE26_W { w: self }
}
#[doc = "Bits 6:7 - Wakeup Pin Enable For LLWU_P27"]
#[inline(always)]
pub fn wupe27(&mut self) -> WUPE27_W {
WUPE27_W { w: self }
}
}
|
The #SLC app is the primary program guide for the 27th Annual National Service-Learning Conference, Minneapolis, MN, March 30-April 2. Use the app to access details related to conference programming, speakers, maps, event locations, surveys, and social media. Create your own attendee profile and stay engaged throughout the event. |
/*-
* #%L
* Axiom Pinpointing Experiments
* $Id:$
* $HeadURL:$
* %%
* Copyright (C) 2017 - 2018 Live Ontologies Project
* %%
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
* #L%
*/
package com.github.joergschwabe;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintStream;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.SortedMap;
import java.util.TreeMap;
import org.apache.commons.io.output.NullOutputStream;
import org.liveontologies.puli.Inference;
import org.liveontologies.puli.InferenceJustifier;
import org.semanticweb.elk.owl.interfaces.ElkAxiom;
import org.semanticweb.elk.owl.interfaces.ElkClassAxiom;
import org.semanticweb.elk.owl.interfaces.ElkObject;
import org.semanticweb.elk.owl.iris.ElkFullIri;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.github.joergschwabe.ConvertToElSatKrssInput.ElSatPrinterVisitor;
import com.github.joergschwabe.experiments.CsvQueryDecoder;
import com.github.joergschwabe.experiments.ExperimentException;
import com.github.joergschwabe.proofs.CsvQueryProofProvider;
import com.github.joergschwabe.proofs.ElkProofProvider;
import com.github.joergschwabe.proofs.JustificationCompleteProof;
import com.github.joergschwabe.proofs.ProofProvider;
import com.google.common.base.Function;
import com.google.common.base.Functions;
import com.google.common.collect.Iterables;
import net.sourceforge.argparse4j.ArgumentParsers;
import net.sourceforge.argparse4j.annotation.Arg;
import net.sourceforge.argparse4j.impl.Arguments;
import net.sourceforge.argparse4j.inf.ArgumentParser;
import net.sourceforge.argparse4j.inf.ArgumentParserException;
/**
* Exports proofs to CNF files as produced by EL+SAT.
* <p>
* Proof of each query from the query file is exported into its query directory.
* This query directory will be placed inside of the output directory and its
* name will be derived from the query by {@link Utils#sha1hex(String)}.
* <p>
* Each conclusion and axiom occurring in a proof of a query is given a unique
* positive integer that is called its atom. The files placed into the directory
* of this query are:
* <ul>
* <li>{@value #FILE_NAME}+{@value #SUFFIX_H} - header of the CNF file.
* <li>{@value #FILE_NAME}+{@value #SUFFIX_CNF} - inferences encoded as clauses
* where the premises are negated atom and the conclusion is positive atom.
* <li>{@value #FILE_NAME}+{@value #SUFFIX_Q} - atom of the goal conclusion.
* <li>{@value #FILE_NAME}+{@value #SUFFIX_QUESTION} - negated atom of the goal
* conclusion followed by 0.
* <li>{@value #FILE_NAME}+{@value #SUFFIX_QUERY} - query as read from the query
* file.
* <li>{@value #FILE_NAME}+{@value #SUFFIX_PPP} - atoms of axioms.
* <li>{@value #FILE_NAME}+{@value #SUFFIX_PPP_G_U} - sorted atoms of axioms.
* <li>{@value #FILE_NAME}+{@value #SUFFIX_ASSUMPTIONS} - atoms of axioms
* separated by " " and followed by 0.
* <li>{@value #FILE_NAME}+{@value #SUFFIX_ZZZ} - conclusions with their atoms.
* <li>{@value #FILE_NAME}+{@value #SUFFIX_ZZZ_GCI} - GCI axioms with their
* atoms.
* <li>{@value #FILE_NAME}+{@value #SUFFIX_ZZZ_RI} - RI axioms with their atoms.
* </ul>
*
* @author <NAME>
*/
public class DirectSatEncodingUsingElkCsvQuery {
public static final String FILE_NAME = "encoding";
public static final String SUFFIX_H = ".h";
public static final String SUFFIX_CNF = ".cnf";
public static final String SUFFIX_Q = ".q";
public static final String SUFFIX_QUESTION = ".question";
public static final String SUFFIX_QUERY = ".query";
public static final String SUFFIX_PPP = ".ppp";
public static final String SUFFIX_PPP_G_U = ".ppp.g.u";
public static final String SUFFIX_ASSUMPTIONS = ".assumptions";
public static final String SUFFIX_ZZZ = ".zzz";
public static final String SUFFIX_ZZZ_GCI = ".zzz.gci";
public static final String SUFFIX_ZZZ_RI = ".zzz.ri";
private static final Logger LOG_ = LoggerFactory
.getLogger(DirectSatEncodingUsingElkCsvQuery.class);
public static final String OPT_ONTOLOGY = "ontology";
public static final String OPT_QUERIES = "queries";
public static final String OPT_OUTDIR = "outdir";
public static final String OPT_MINIMAL = "minimal";
public static final String OPT_PROGRESS = "progress";
public static class Options {
@Arg(dest = OPT_ONTOLOGY)
public File ontologyFile;
@Arg(dest = OPT_QUERIES)
public File queriesFile;
@Arg(dest = OPT_OUTDIR)
public File outDir;
@Arg(dest = OPT_MINIMAL)
public boolean minimal;
@Arg(dest = OPT_PROGRESS)
public boolean progress;
}
public static void main(final String[] args) {
final ArgumentParser parser = ArgumentParsers
.newArgumentParser(
DirectSatEncodingUsingElkCsvQuery.class.getSimpleName())
.description(
"Export proofs into CNF files as produced by EL+SAT.");
parser.addArgument(OPT_ONTOLOGY)
.type(Arguments.fileType().verifyExists().verifyCanRead())
.help("ontology file");
parser.addArgument(OPT_QUERIES)
.type(Arguments.fileType().verifyExists().verifyCanRead())
.help("query file");
parser.addArgument(OPT_OUTDIR).type(File.class)
.help("output directory");
parser.addArgument("--" + OPT_MINIMAL).action(Arguments.storeTrue())
.help("generate only necessary files");
parser.addArgument("--" + OPT_PROGRESS).action(Arguments.storeTrue())
.help("print progress to stdout");
final OWLOntologyManager manager = OWLManager
.createOWLOntologyManager();
BufferedReader queryReader = null;
try {
final Options opt = new Options();
parser.parseArgs(args, opt);
if (!Utils.cleanDir(opt.outDir)) {
LOG_.error("Could not prepare the output directory!");
System.exit(2);
}
final ElkProofProvider elkProofProvider = new ElkProofProvider(
opt.ontologyFile, manager);
final ElkObject.Factory factory = elkProofProvider.getReasoner()
.getElkFactory();
final CsvQueryDecoder.Factory<ElkAxiom> decoder = new CsvQueryDecoder.Factory<ElkAxiom>() {
@Override
public ElkAxiom createQuery(final String subIri,
final String supIri) {
return factory.getSubClassOfAxiom(
factory.getClass(new ElkFullIri(subIri)),
factory.getClass(new ElkFullIri(supIri)));
}
};
final ProofProvider<String, Object, Inference<Object>, ElkAxiom> proofProvider = new CsvQueryProofProvider<>(
decoder, elkProofProvider);
queryReader = new BufferedReader(new FileReader(opt.queriesFile));
int queryCount = 0;
String line;
while ((line = queryReader.readLine()) != null) {
queryCount++;
}
queryReader.close();
final Progress progress;
if (opt.progress) {
progress = new Progress(System.out, queryCount);
} else {
progress = new Progress(new PrintStream(new NullOutputStream()),
queryCount);
}
queryReader = new BufferedReader(new FileReader(opt.queriesFile));
int queryIndex = 0;
while ((line = queryReader.readLine()) != null) {
LOG_.debug("Encoding {} of {}: {}", queryIndex, queryCount,
line);
encode(line, proofProvider, opt.outDir, opt.minimal, queryCount,
queryIndex++);
progress.update();
}
progress.finish();
} catch (final FileNotFoundException e) {
LOG_.error("File Not Found!", e);
System.exit(2);
} catch (final ExperimentException e) {
LOG_.error("Could not classify the ontology!", e);
System.exit(2);
} catch (final IOException e) {
LOG_.error("I/O error!", e);
System.exit(2);
} catch (final ArgumentParserException e) {
parser.handleError(e);
System.exit(2);
} finally {
Utils.closeQuietly(queryReader);
}
}
private static <C, I extends Inference<? extends C>, A> void encode(
final String line,
final ProofProvider<String, C, I, A> proofProvider,
final File outputDirectory, final boolean minimal,
final int queryCount, final int queryIndex)
throws IOException, ExperimentException {
final String queryName = Utils.sha1hex(line);
// @formatter:off
// final String queryName = Utils.toFileName(line);
// final String queryName = String.format(
// "%0" + Integer.toString(queryCount).length() + "d", queryIndex);
// @formatter:on
final File outDir = new File(outputDirectory, queryName);
final File hFile = new File(outDir, FILE_NAME + SUFFIX_H);
final File cnfFile = new File(outDir, FILE_NAME + SUFFIX_CNF);
final File qFile = new File(outDir, FILE_NAME + SUFFIX_Q);
final File questionFile = new File(outDir, FILE_NAME + SUFFIX_QUESTION);
final File queryFile = new File(outDir, FILE_NAME + SUFFIX_QUERY);
final File pppFile = new File(outDir, FILE_NAME + SUFFIX_PPP);
final File pppguFile = new File(outDir, FILE_NAME + SUFFIX_PPP_G_U);
final File assumptionsFile = new File(outDir,
FILE_NAME + SUFFIX_ASSUMPTIONS);
final File zzzFile = new File(outDir, FILE_NAME + SUFFIX_ZZZ);
final File zzzgciFile = new File(outDir, FILE_NAME + SUFFIX_ZZZ_GCI);
final File zzzriFile = new File(outDir, FILE_NAME + SUFFIX_ZZZ_RI);
outDir.mkdirs();
PrintWriter cnfWriter = null;
PrintWriter hWriter = null;
try {
cnfWriter = new PrintWriter(cnfFile);
hWriter = new PrintWriter(hFile);
final PrintWriter cnf = cnfWriter;
final JustificationCompleteProof<C, I, A> proof = proofProvider
.getProof(line);
final InferenceJustifier<? super I, ? extends Set<? extends A>> justifier = proof
.getJustifier();
final Set<A> axioms = new HashSet<A>();
final Set<C> conclusions = new HashSet<C>();
Utils.traverseProofs(proof.getQuery(), proof.getProof(), justifier,
Functions.<I> identity(), new Function<C, Void>() {
@Override
public Void apply(final C expr) {
conclusions.add(expr);
return null;
}
}, new Function<A, Void>() {
@Override
public Void apply(final A axiom) {
axioms.add(axiom);
return null;
}
});
final Utils.Counter literalCounter = new Utils.Counter(1);
final Utils.Counter clauseCounter = new Utils.Counter();
final Map<A, Integer> axiomIndex = new HashMap<A, Integer>();
for (final A axiom : axioms) {
axiomIndex.put(axiom, literalCounter.next());
}
final Map<C, Integer> conclusionIndex = new HashMap<C, Integer>();
for (final C conclusion : conclusions) {
conclusionIndex.put(conclusion, literalCounter.next());
}
// cnf
Utils.traverseProofs(proof.getQuery(), proof.getProof(), justifier,
new Function<I, Void>() {
@Override
public Void apply(final I inf) {
LOG_.trace("processing {}", inf);
for (final A axiom : justifier
.getJustification(inf)) {
cnf.print(-axiomIndex.get(axiom));
cnf.print(" ");
}
for (final C premise : inf.getPremises()) {
cnf.print(-conclusionIndex.get(premise));
cnf.print(" ");
}
cnf.print(conclusionIndex.get(inf.getConclusion()));
cnf.println(" 0");
clauseCounter.next();
return null;
}
}, Functions.<C> identity(), Functions.<A> identity());
final int lastLiteral = literalCounter.next();
// h
hWriter.println(
"p cnf " + (lastLiteral - 1) + " " + clauseCounter.next());
// ppp
if (!minimal) {
writeLines(axiomIndex.values(), pppFile);
}
// ppp.g.u
final List<Integer> orderedAxioms = new ArrayList<Integer>(
axiomIndex.values());
Collections.sort(orderedAxioms);
writeLines(orderedAxioms, pppguFile);
// assumptions
writeSpaceSeparated0Terminated(orderedAxioms, assumptionsFile);
// q
writeLines(Collections
.singleton(conclusionIndex.get(proof.getQuery())), qFile);
// question
writeSpaceSeparated0Terminated(
Collections
.singleton(-conclusionIndex.get(proof.getQuery())),
questionFile);
// query
if (!minimal) {
writeLines(Collections.singleton(line), queryFile);
}
// zzz
if (!minimal) {
final SortedMap<Integer, A> gcis = new TreeMap<Integer, A>();
final SortedMap<Integer, A> ris = new TreeMap<Integer, A>();
for (final Map.Entry<A, Integer> entry : axiomIndex
.entrySet()) {
final A expr = entry.getKey();
final int lit = entry.getValue();
if (expr instanceof ElkClassAxiom) {
gcis.put(lit, expr);
} else {
ris.put(lit, expr);
}
}
final SortedMap<Integer, C> lemmas = new TreeMap<Integer, C>();
for (final Map.Entry<C, Integer> entry : conclusionIndex
.entrySet()) {
lemmas.put(entry.getValue(), entry.getKey());
}
final Function<Map.Entry<Integer, A>, String> print = new Function<Map.Entry<Integer, A>, String>() {
@Override
public String apply(final Map.Entry<Integer, A> entry) {
final StringBuilder result = new StringBuilder();
result.append(entry.getKey()).append(" ");
final A axiom = entry.getValue();
if (axiom instanceof ElkAxiom) {
((ElkAxiom) axiom)
.accept(new ElSatPrinterVisitor(result));
// Remove the last line end.
result.setLength(result.length() - 1);
} else {
result.append(axiom);
}
return result.toString();
}
};
writeLines(Iterables.transform(gcis.entrySet(), print),
zzzgciFile);
writeLines(Iterables.transform(ris.entrySet(), print),
zzzriFile);
writeLines(Iterables.transform(lemmas.entrySet(),
new Function<Map.Entry<Integer, C>, String>() {
@Override
public String apply(
final Map.Entry<Integer, C> entry) {
final StringBuilder result = new StringBuilder();
result.append(entry.getKey()).append(" ")
.append(entry.getValue());
return result.toString();
}
}), zzzFile);
}
} finally {
Utils.closeQuietly(cnfWriter);
Utils.closeQuietly(hWriter);
}
}
private static void writeLines(final Iterable<?> lines, final File file)
throws FileNotFoundException {
PrintWriter writer = null;
try {
writer = new PrintWriter(file);
for (final Object line : lines) {
writer.println(line);
}
} finally {
if (writer != null) {
writer.close();
}
}
}
private static void writeSpaceSeparated0Terminated(
final Iterable<?> iterable, final File file)
throws FileNotFoundException {
PrintWriter writer = null;
try {
writer = new PrintWriter(file);
for (final Object object : iterable) {
writer.print(object);
writer.print(" ");
}
writer.print("0");
} finally {
if (writer != null) {
writer.close();
}
}
}
}
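Stated outside the ELK/OWL machinery, the clause encoding described in the class Javadoc (justification axioms and premises as negated atoms, the conclusion as the positive atom, DIMACS-style 0 terminator) amounts to the following sketch; the atom numbers are illustrative:

```python
def inference_to_clause(premise_atoms, justification_atoms, conclusion_atom):
    # An inference "premises + axioms |- conclusion" is the implication
    # (p1 & ... & pn & a1 & ... & am) -> c, which in CNF is the single
    # clause (-a1 ... -am -p1 ... -pn c), printed as integers ending in 0.
    literals = ([-a for a in justification_atoms]
                + [-p for p in premise_atoms]
                + [conclusion_atom])
    return " ".join(str(lit) for lit in literals) + " 0"
```

This mirrors the traversal in `encode(...)` above, which prints the negated axiom atoms, the negated premise atoms, and then the conclusion atom followed by `" 0"` for each inference.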
|
Safety management of a patient undergoing thoracic aortic surgery by spinal evoked potential monitoring. We have presented a patient (48 year-old male) in whom the evoked spinal potential monitor detected impending spinal ischemia after aortic cross-clamping, which allowed surgical intervention to be modified so as to restore the decaying evoked spinal potential. The patient received replacement of the dissecting aneurysm in the thoracic descending aorta with clamping of the aorta at the sites immediately proximal and distal to the aneurysm and initiation of femoro-femoral venoarterial bypass under normothermia. The evoked spinal potential was recorded via the T12/L1 epidural electrode in response to transdural electric stimulation of the spinal cord at the C7/T1 level. As the evoked spinal potential gradually decreased in amplitude without latency prolongation after aortic cross-clamping, the distal clamp was moved from the T6 vertebral level to the T4. The reduced spinal potential then returned to the baseline amplitude. This episode was repeated twice from surgical necessity. The patient was free from any neurological disorders postoperatively. |
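The warning logic in this report (amplitude decay without latency prolongation after cross-clamping) can be sketched as a simple threshold check; the 50% amplitude and 10% latency criteria below are common conventions in the intraoperative-monitoring literature, not values taken from this case:

```python
def ischemia_warning(baseline_amp, amp, baseline_lat, lat,
                     amp_ratio=0.5, lat_ratio=1.10):
    """Flag a possibly ischemic evoked spinal potential.

    Warns when amplitude drops below amp_ratio of baseline, or latency
    stretches beyond lat_ratio of baseline (assumed thresholds).
    """
    return amp < amp_ratio * baseline_amp or lat > lat_ratio * baseline_lat
```

In the case described, such a warning after aortic cross-clamping prompted moving the distal clamp from the T6 to the T4 vertebral level, after which the amplitude returned to baseline.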
UK_POP_2011 = 63_182_000  # approximate 2011 UK census population (assumed; not defined in the original snippet)

def local_rate_from_national_rate(data, localpop):
    # Scale the Rate column/attribute by the national-to-local population ratio.
    scale = UK_POP_2011 / localpop
    data.Rate = data.Rate * scale
    return data |
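A self-contained sketch of how the scaling above behaves, restating the function with an assumed approximate census constant and a plain object standing in for the data frame:

```python
from types import SimpleNamespace

UK_POP_2011 = 63_182_000  # approximate 2011 UK census population (assumed)

def local_rate_from_national_rate(data, localpop):
    # Multiply the Rate column/attribute by the national-to-local
    # population ratio.
    scale = UK_POP_2011 / localpop
    data.Rate = data.Rate * scale
    return data

d = SimpleNamespace(Rate=2.0)
local_rate_from_national_rate(d, localpop=631_820)  # 1% of the national pop
# d.Rate is now 200.0 (scaled by a factor of 100)
```

With a pandas `DataFrame`, `data.Rate` would address the `Rate` column and the multiplication would apply element-wise.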
#ifdef CH_LANG_CC
/*
* _______ __
* / ___/ / ___ __ _ / / ___
* / /__/ _ \/ _ \/ V \/ _ \/ _ \
* \___/_//_/\___/_/_/_/_.__/\___/
* Please refer to Copyright.txt, in Chombo's root directory.
*/
#endif
#include "BoxIterator.H"
#include "EBArith.H"
#include "PolyGeom.H"
#include "DirichletPoissonDomainBC.H"
#include "DirichletPoissonDomainBCF_F.H"
#include "NamespaceHeader.H"
DirichletPoissonDomainBC::DirichletPoissonDomainBC()
: m_onlyHomogeneous(true),
m_isFunctional(false),
m_value(12345.6789),
m_func(RefCountedPtr<BaseBCValue>()),
m_ebOrder(2)
{
}
DirichletPoissonDomainBC::~DirichletPoissonDomainBC()
{
}
void DirichletPoissonDomainBC::setValue(Real a_value)
{
m_onlyHomogeneous = false;
m_isFunctional = false;
m_value = a_value;
m_func = RefCountedPtr<BaseBCValue>();
}
void DirichletPoissonDomainBC::setFunction(RefCountedPtr<BaseBCValue> a_func)
{
m_value = 12345.6789;
m_func = a_func;
m_onlyHomogeneous = false;
m_isFunctional = true;
}
void DirichletPoissonDomainBC::setEBOrder(int a_ebOrder)
{
CH_assert(a_ebOrder >= 1);
CH_assert(a_ebOrder <= 2);
m_ebOrder = a_ebOrder;
}
///
/**
This never gets called. InflowOutflowPoissonDomainBC::getFaceVel takes care of it.
*/
void
DirichletPoissonDomainBC::
getFaceVel(Real& a_faceFlux,
const FaceIndex& a_face,
const EBFluxFAB& a_vel,
const RealVect& a_probLo,
const RealVect& a_dx,
const int& a_idir,
const int& a_icomp,
const Real& a_time,
const Side::LoHiSide& a_side)
{
CH_assert(a_idir == a_face.direction());
Real value;
if (m_isFunctional)
{
RealVect pt;
IntVect iv = a_face.gridIndex(Side::Hi);
for (int idir = 0; idir < SpaceDim; idir++)
{
if (idir != a_face.direction())
{
Real ptval = a_dx[a_idir]*(Real(iv[idir]) + 0.5);
pt[idir] = ptval;
}
else
{
pt[idir] = a_dx[a_idir]*(Real(iv[idir]));
}
}
RealVect normal = EBArith::getDomainNormal(a_idir, a_side);
value = m_func->value(pt, normal, a_time,a_icomp);
}
else
{
value = m_value;
}
a_faceFlux = value;
}
///
/**
This is called by InflowOutflowHelmholtzDomainBC::getFaceFlux,
which is called by EBAMRPoissonOp::applyDomainFlux for applyOp in reg cells.
For higher-order Dirichlet boundary conditions on velocity and viscous operator.
*/
void DirichletPoissonDomainBC::getHigherOrderFaceFlux(BaseFab<Real>& a_faceFlux,
const BaseFab<Real>& a_phi,
const RealVect& a_probLo,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const DataIndex& a_dit,
const Real& a_time,
const bool& a_useHomogeneous)
{
for (int comp=0; comp<a_phi.nComp(); comp++)
{
const Box& box = a_faceFlux.box();
int iside;
if (a_side == Side::Lo)
{
iside = 1;
}
else
{
iside = -1;
}
if (a_useHomogeneous)
{
Real value = 0.0;
FORT_SETHODIRICHLETFACEFLUX(CHF_FRA1(a_faceFlux,comp),
CHF_CONST_FRA1(a_phi,comp),
CHF_CONST_REAL(value),
CHF_CONST_REALVECT(a_dx),
CHF_CONST_INT(a_idir),
CHF_CONST_INT(iside),
CHF_BOX(box));
}
else
{
if (m_isFunctional)
{
Real ihdx;
ihdx = 2.0 / a_dx[a_idir];
BoxIterator bit(box);
for (bit.begin(); bit.ok(); ++bit)
{
IntVect iv = bit();
IntVect ivNeigh = iv;
ivNeigh[a_idir] += sign(a_side);
const VolIndex vof = VolIndex(iv, 0);
const VolIndex vofNeigh = VolIndex(ivNeigh,0);
const FaceIndex face = FaceIndex(vof,vofNeigh,a_idir);
const RealVect point = EBArith::getFaceLocation(face,a_dx,a_probLo);
const RealVect normal = EBArith::getDomainNormal(a_idir,a_side);
Real value = m_func->value(face,a_side,a_dit,point,normal,a_time,comp);
Real phi0 = a_phi(iv,comp);
iv[a_idir] += iside;
Real phi1 = a_phi(iv,comp);
iv[a_idir] -= iside;
// second-order one-sided difference: flux from the boundary value and the two nearest cell centers
a_faceFlux(iv,comp) = -iside*(8.0*value + phi1 - 9.0*phi0)/(3.0*a_dx[a_idir]);
}
}
else
{
if (m_onlyHomogeneous)
{
MayDay::Error("DirichletPoissonDomainBC::getFaceFlux called with undefined inhomogeneous BC");
}
Real value = m_value;
FORT_SETHODIRICHLETFACEFLUX(CHF_FRA1(a_faceFlux,comp),
CHF_CONST_FRA1(a_phi,comp),
CHF_CONST_REAL(value),
CHF_CONST_REALVECT(a_dx),
CHF_CONST_INT(a_idir),
CHF_CONST_INT(iside),
CHF_BOX(box));
}
}
}
}
///
/**
This is called by InflowOutflowHelmholtzDomainBC::getFaceFlux,
which is called by EBAMRPoissonOp::applyOp when EB x domain.
Used for higher-order velocity boundary conditions in viscous operator.
This routine iterates over possible multiple faces at a domain face
and calls getHigherOrderFaceFlux(...) below for the individual face extrapolation
*/
void DirichletPoissonDomainBC::getHigherOrderFaceFlux(Real& a_faceFlux,
const VolIndex& a_vof,
const int& a_comp,
const EBCellFAB& a_phi,
const RealVect& a_probLo,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const DataIndex& a_dit,
const Real& a_time,
const bool& a_useHomogeneous)
{
const EBISBox& ebisBox = a_phi.getEBISBox();
const ProblemDomain& domainBox = ebisBox.getDomain();
a_faceFlux = 0.0;
Vector<FaceIndex> faces = ebisBox.getFaces(a_vof,a_idir,a_side);
if (faces.size() > 0)
{
if (faces.size()==1)
{
IntVectSet cfivs;
FaceStencil faceSten = EBArith::getInterpStencil(faces[0],
cfivs,
ebisBox,
domainBox);
for (int isten=0; isten < faceSten.size(); isten++)
{
const Real& weight = faceSten.weight(isten);
const FaceIndex& face = faceSten.face(isten);
Real thisFaceFlux;
const RealVect centroid = ebisBox.centroid(face);
getHigherOrderFaceFlux(thisFaceFlux,face,a_comp,a_phi,a_probLo,a_dx,a_idir,
a_side,a_dit,a_time,false,centroid,a_useHomogeneous);
a_faceFlux += thisFaceFlux*weight;
}
a_faceFlux *= ebisBox.areaFrac(faces[0]);
}
else
{
MayDay::Error("DirichletPoissonDomainBC::getHigherOrderFaceFlux has multi-valued faces (or could be 0 faces)");
//could have done something like before by adding a division by number of faces but this does not use faceStencil above
// int numFaces = 0;
// for (int i = 0; i < faces.size(); i++)
// {
// const RealVect centroid = ebisBox.centroid(faces[i]);
// Real thisFaceFlux;
// getHigherOrderFaceFlux(thisFaceFlux,faces[i],a_comp,a_phi,a_probLo,
// a_dx,a_idir,a_side,a_dit,a_time,false,centroid,
// a_useHomogeneous);
// a_faceFlux += thisFaceFlux;
// numFaces ++;
// }
// if (numFaces > 1){a_faceFlux /= Real(numFaces);}
}
}
}
///
/**
This is called by DirichletPoissonDomainBC::getHigherOrderFaceFlux,
which is called by InflowOutflowHelmholtzDomainBC::getFaceFlux.
a_useHomogeneous indicates whether the homogeneous form of the BC is applied:
if true, the boundary value is taken to be zero;
if false, the inhomogeneous boundary value is evaluated and used.
should not matter since this is only called for the velocity at operator constructor time (defineStencils)
*/
void DirichletPoissonDomainBC::getHigherOrderFaceFlux(Real& a_faceFlux,
const FaceIndex& a_face,
const int& a_comp,
const EBCellFAB& a_phi,
const RealVect& a_probLo,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const DataIndex& a_dit,
const Real& a_time,
const bool& a_useAreaFrac,
const RealVect& a_centroid,
const bool& a_useHomogeneous)
{
int iside = -sign(a_side);
const Real ihdx = 2.0 / a_dx[a_idir];
const EBISBox& ebisBox = a_phi.getEBISBox();
Real value = -1.e99;
if (a_useHomogeneous)
{
value = 0.0;
}
else if (m_isFunctional)
{
const RealVect normal = EBArith::getDomainNormal(a_idir,a_side);
RealVect point = EBArith::getFaceLocation(a_face,a_dx,a_probLo);
value = m_func->value(a_face,a_side,a_dit,point,normal,a_time,a_comp);
}
else
{
if (m_onlyHomogeneous)
{
MayDay::Error("DirichletPoissonDomainBC::getHigherOrderFaceFlux called with undefined inhomogeneous BC");
}
value = m_value;
}
const VolIndex& vof = a_face.getVoF(flip(a_side));
//higher-order Dirichlet bc
Vector<FaceIndex> facesInsideDomain = ebisBox.getFaces(vof,a_idir,flip(a_side));
if (facesInsideDomain.size() == 1)
{
const VolIndex& vofNextInsideDomain = facesInsideDomain[0].getVoF(flip(a_side));
a_faceFlux = iside*(9.0*a_phi(vof,a_comp) - a_phi(vofNextInsideDomain,a_comp) - 8.0*value)/(3.0*a_dx[a_idir]);
}
else
{
a_faceFlux = iside * ihdx * (a_phi(vof,a_comp) - value);
}
}
///
/**
This is called by InflowOutflowHelmholtzDomainBC::getFaceFlux,
which is called by EBAMRPoissonOp::applyOp when EB x domain.
Used for higher-order velocity boundary conditions in viscous operator.
This routine iterates over possible multiple faces at a domain face
and calls getHigherOrderInhomFaceFlux(...) below for the individual face extrapolation
*/
void DirichletPoissonDomainBC::getHigherOrderInhomFaceFlux(Real& a_faceFlux,
const VolIndex& a_vof,
const int& a_comp,
const EBCellFAB& a_phi,
const RealVect& a_probLo,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const DataIndex& a_dit,
const Real& a_time)
{
const EBISBox& ebisBox = a_phi.getEBISBox();
const ProblemDomain& domainBox = ebisBox.getDomain();
a_faceFlux = 0.0;
Vector<FaceIndex> faces = ebisBox.getFaces(a_vof,a_idir,a_side);
if (faces.size() > 0)
{
if (faces.size()==1)
{
IntVectSet cfivs;
FaceStencil faceSten = EBArith::getInterpStencil(faces[0],
cfivs,
ebisBox,
domainBox);
for (int isten=0; isten < faceSten.size(); isten++)
{
const Real& weight = faceSten.weight(isten);
const FaceIndex& face = faceSten.face(isten);
Real thisFaceFlux;
const RealVect centroid = ebisBox.centroid(face);
getHigherOrderInhomFaceFlux(thisFaceFlux,face,a_comp,a_phi,a_probLo,a_dx,a_idir,
a_side,a_dit,a_time,false,centroid);
a_faceFlux += thisFaceFlux*weight;
}
a_faceFlux *= ebisBox.areaFrac(faces[0]);
}
else
{
MayDay::Error("DirichletPoissonDomainBC::getHigherOrderInhomFaceFlux has multi-valued faces (or could be 0 faces)");
//could have done something like before by adding a division by number of faces but this does not use faceStencil above
// int numFaces = 0;
// for (int i = 0; i < faces.size(); i++)
// {
// const RealVect centroid = ebisBox.centroid(faces[i]);
// Real thisFaceFlux;
// getHigherOrderFaceFlux(thisFaceFlux,faces[i],a_comp,a_phi,a_probLo,
// a_dx,a_idir,a_side,a_dit,a_time,false,centroid,
// a_useHomogeneous);
// a_faceFlux += thisFaceFlux;
// numFaces ++;
// }
// if (numFaces > 1){a_faceFlux /= Real(numFaces);}
}
}
}
///
/**
This is called by DirichletPoissonDomainBC::getHigherOrderInhomFaceFlux,
which is called by InflowOutflowHelmholtzDomainBC::getFaceFlux.
Returns only the inhomogeneous (boundary-value) contribution of the
higher-order flux; the phi-dependent terms are handled by the stencil.
should not matter since this is only called for the velocity at operator constructor time (defineStencils)
*/
void DirichletPoissonDomainBC::getHigherOrderInhomFaceFlux(Real& a_faceFlux,
const FaceIndex& a_face,
const int& a_comp,
const EBCellFAB& a_phi,
const RealVect& a_probLo,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const DataIndex& a_dit,
const Real& a_time,
const bool& a_useAreaFrac,
const RealVect& a_centroid)
{
int iside = -sign(a_side);
const Real ihdx = 2.0 / a_dx[a_idir];
const EBISBox& ebisBox = a_phi.getEBISBox();
Real value = -1.e99;
if (m_onlyHomogeneous)
{
value = 0.0;
}
else if (m_isFunctional)
{
const RealVect normal = EBArith::getDomainNormal(a_idir,a_side);
RealVect point = EBArith::getFaceLocation(a_face,a_dx,a_probLo);
value = m_func->value(a_face,a_side,a_dit,point,normal,a_time,a_comp);
}
else
{
if (m_onlyHomogeneous)
{
MayDay::Error("DirichletPoissonDomainBC::getHigherOrderInhomFaceFlux called with undefined inhomogeneous BC");
}
value = m_value;
}
const VolIndex& vof = a_face.getVoF(flip(a_side));
//higher-order Dirichlet bc
Vector<FaceIndex> facesInsideDomain = ebisBox.getFaces(vof,a_idir,flip(a_side));
if (facesInsideDomain.size() == 1)
{
a_faceFlux = iside*(- 8.0*value)/(3.0*a_dx[a_idir]);
}
else
{
a_faceFlux = iside * ihdx * (- value);
}
}
///
/**
This is called by InflowOutflowPoissonDomainBC::getFaceFlux,
which is called by EBAMRPoissonOp::applyDomainFlux for applyOp in reg cells.
For Dirichlet boundary conditions on pressure (usually at outflow) in projection.
*/
void DirichletPoissonDomainBC::getFaceFlux(BaseFab<Real>& a_faceFlux,
const BaseFab<Real>& a_phi,
const RealVect& a_probLo,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const DataIndex& a_dit,
const Real& a_time,
const bool& a_useHomogeneous)
{
for (int comp=0; comp<a_phi.nComp(); comp++)
{
const Box& box = a_faceFlux.box();
int iside;
if (a_side == Side::Lo)
{
iside = 1;
}
else
{
iside = -1;
}
if (a_useHomogeneous)
{
Real value = 0.0;
// FORT_SETHODIRICHLETFACEFLUX(CHF_FRA1(a_faceFlux,comp),
// CHF_CONST_FRA1(a_phi,comp),
// CHF_CONST_REAL(value),
// CHF_CONST_REALVECT(a_dx),
// CHF_CONST_INT(a_idir),
// CHF_CONST_INT(iside),
// CHF_BOX(box));
FORT_SETDIRICHLETFACEFLUX(CHF_FRA1(a_faceFlux,comp),
CHF_CONST_FRA1(a_phi,comp),
CHF_CONST_REAL(value),
CHF_CONST_REALVECT(a_dx),
CHF_CONST_INT(a_idir),
CHF_CONST_INT(iside),
CHF_BOX(box));
}
else
{
if (m_isFunctional)
{
Real ihdx;
ihdx = 2.0 / a_dx[a_idir];
BoxIterator bit(box);
// for (bit.begin(); bit.ok(); ++bit)
// {
// IntVect iv = bit();
// IntVect ivNeigh = iv;
// IntVect ivNextNeigh = iv;
// ivNeigh[a_idir] += sign(a_side);
// ivNextNeigh[a_idir] += 2*sign(a_side);
// const VolIndex vof = VolIndex(iv, 0);
// const VolIndex vofNeigh = VolIndex(ivNeigh,0);
// const VolIndex vofNextNeigh = VolIndex(ivNextNeigh,0);
// const FaceIndex face = FaceIndex(vof,vofNeigh,a_idir);
// const FaceIndex faceNeigh = FaceIndex(vofNeigh,vofNextNeigh,a_idir);
// const RealVect point = EBArith::getFaceLocation(face,a_dx,a_probLo);
// const RealVect normal = EBArith::getDomainNormal(a_idir,a_side);
// Real value = m_func->value(face,a_side,a_dit,point,normal,a_time,comp);
// Real phi0 = a_phi(iv,comp);
// iv[a_idir] += iside;
// Real phi1 = a_phi(iv,comp);
// iv[a_idir] -= iside;
// a_faceFlux(iv,comp) = -iside*(8.0*value + phi1 - 9.0*phi0)/(3.0*a_dx[a_idir]);
// }
for (bit.begin(); bit.ok(); ++bit)
{
IntVect iv = bit();
IntVect ivNeigh = iv;
ivNeigh[a_idir] += sign(a_side);
const VolIndex vof = VolIndex(iv, 0);
const VolIndex vofNeigh = VolIndex(ivNeigh,0);
const FaceIndex face = FaceIndex(vof,vofNeigh,a_idir);
const RealVect point = EBArith::getFaceLocation(face,a_dx,a_probLo);
const RealVect normal = EBArith::getDomainNormal(a_idir,a_side);
Real value = m_func->value(face,a_side,a_dit,point,normal,a_time,comp);
Real phiVal = a_phi(iv,comp);
a_faceFlux(iv,comp) = iside * ihdx * (phiVal - value);
}
}
else
{
if (m_onlyHomogeneous)
{
MayDay::Error("DirichletPoissonDomainBC::getFaceFlux called with undefined inhomogeneous BC");
}
Real value = m_value;
// FORT_SETHODIRICHLETFACEFLUX(CHF_FRA1(a_faceFlux,comp),
// CHF_CONST_FRA1(a_phi,comp),
// CHF_CONST_REAL(value),
// CHF_CONST_REALVECT(a_dx),
// CHF_CONST_INT(a_idir),
// CHF_CONST_INT(iside),
// CHF_BOX(box));
FORT_SETDIRICHLETFACEFLUX(CHF_FRA1(a_faceFlux,comp),
CHF_CONST_FRA1(a_phi,comp),
CHF_CONST_REAL(value),
CHF_CONST_REALVECT(a_dx),
CHF_CONST_INT(a_idir),
CHF_CONST_INT(iside),
CHF_BOX(box));
}
}
}
}
///
/**
This is called by InflowOutflowPoissonDomainBC::getFaceFlux,
which is called by EBAMRPoissonOp::applyOp when EB x domain.
For domain boundary conditions on pressure in projections.
*/
void DirichletPoissonDomainBC::getFaceFlux(Real& a_faceFlux,
const VolIndex& a_vof,
const int& a_comp,
const EBCellFAB& a_phi,
const RealVect& a_probLo,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const DataIndex& a_dit,
const Real& a_time,
const bool& a_useHomogeneous)
{
const EBISBox& ebisBox = a_phi.getEBISBox();
const ProblemDomain& domainBox = ebisBox.getDomain();
a_faceFlux = 0.0;
Vector<FaceIndex> faces = ebisBox.getFaces(a_vof,a_idir,a_side);
if (faces.size() > 0)
{
if (faces.size()==1)
{
IntVectSet cfivs;
FaceStencil faceSten = EBArith::getInterpStencil(faces[0],
cfivs,
ebisBox,
domainBox);
for (int isten=0; isten < faceSten.size(); isten++)
{
const Real& weight = faceSten.weight(isten);
const FaceIndex& face = faceSten.face(isten);
Real thisFaceFlux;
const RealVect centroid = ebisBox.centroid(face);
getFaceGradPhi(thisFaceFlux,face,a_comp,a_phi,a_probLo,a_dx,a_idir,
a_side,a_dit,a_time,false,centroid,a_useHomogeneous);
a_faceFlux += thisFaceFlux*weight;
}
a_faceFlux *= ebisBox.areaFrac(faces[0]);
}
else
{
MayDay::Error("DirichletPoissonDomainBC::getFaceFlux has multi-valued faces (or could be 0 faces)");
//could have done something like before by adding a division by number of faces but this does not use faceStencil above
// int numFaces = 0;
// for (int i = 0; i < faces.size(); i++)
// {
// const RealVect centroid = ebisBox.centroid(faces[i]);
// Real thisFaceFlux;
// getFaceGradPhi(thisFaceFlux,faces[i],a_comp,a_phi,a_probLo,
// a_dx,a_idir,a_side,a_dit,a_time,false,centroid,a_useHomogeneous);
// a_faceFlux += thisFaceFlux;
// numFaces ++;
// }
// if (numFaces > 1){a_faceFlux /= Real(numFaces);}
}
}
}
///
/**
This is called by InflowOutflowPoissonDomainBC::getFaceGradPhi
when enforcing boundary gradients in projection (called by macEnforceGradientBC).
a_useHomogeneous indicates whether the homogeneous form of the BC is applied:
if true, the boundary value is taken to be zero;
if false, the inhomogeneous boundary value is evaluated and used.
does matter because this is not only called for the pressure at operator constructor time (defineStencils)
but also by macEnforceGradientBC which needs the whole stencil (can stencilize this later)
*/
void DirichletPoissonDomainBC::getFaceGradPhi(Real& a_faceFlux,
const FaceIndex& a_face,
const int& a_comp,
const EBCellFAB& a_phi,
const RealVect& a_probLo,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const DataIndex& a_dit,
const Real& a_time,
const bool& a_useAreaFrac,
const RealVect& a_centroid,
const bool& a_useHomogeneous)
{
int iside = -sign(a_side);
const Real ihdx = 2.0 / a_dx[a_idir];
Real value = -1.e99;
if (a_useHomogeneous)
{
value = 0.0;
}
else if (m_isFunctional)
{
const RealVect normal = EBArith::getDomainNormal(a_idir,a_side);
RealVect point = EBArith::getFaceLocation(a_face,a_dx,a_probLo);
value = m_func->value(a_face,a_side,a_dit,point,normal,a_time,a_comp);
}
else
{
if (m_onlyHomogeneous)
{
MayDay::Error("DirichletPoissonDomainBC::getFaceGradPhi called with undefined inhomogeneous BC");
}
value = m_value;
}
const VolIndex& vof = a_face.getVoF(flip(a_side));
//higher-order Dirichlet bc
// Vector<FaceIndex> facesInsideDomain = ebisBox.getFaces(vof,a_idir,flip(a_side));
// if (facesInsideDomain.size() == 1)
// {
// const VolIndex& vofNextInsideDomain = facesInsideDomain[0].getVoF(flip(a_side));
// a_faceFlux = iside*(9.0*a_phi(vof,a_comp) - a_phi(vofNextInsideDomain,a_comp) - 8.0*value)/(3.0*a_dx[a_idir]);
// }
// else
// {
// a_faceFlux = iside * ihdx * (a_phi(vof,a_comp) - value);
// }
a_faceFlux = iside * ihdx * (a_phi(vof,a_comp) - value);
}
///
/**
This is called by InflowOutflowPoissonDomainBC::getFaceFlux,
which is called by EBAMRPoissonOp::applyOp when EB x domain.
For domain boundary conditions on pressure in projections.
*/
void DirichletPoissonDomainBC::getInhomFaceFlux(Real& a_faceFlux,
const VolIndex& a_vof,
const int& a_comp,
const EBCellFAB& a_phi,
const RealVect& a_probLo,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const DataIndex& a_dit,
const Real& a_time)
{
const EBISBox& ebisBox = a_phi.getEBISBox();
const ProblemDomain& domainBox = ebisBox.getDomain();
a_faceFlux = 0.0;
Vector<FaceIndex> faces = ebisBox.getFaces(a_vof,a_idir,a_side);
if (faces.size() > 0)
{
if (faces.size()==1)
{
IntVectSet cfivs;
FaceStencil faceSten = EBArith::getInterpStencil(faces[0],
cfivs,
ebisBox,
domainBox);
for (int isten=0; isten < faceSten.size(); isten++)
{
const Real& weight = faceSten.weight(isten);
const FaceIndex& face = faceSten.face(isten);
Real thisFaceFlux;
const RealVect centroid = ebisBox.centroid(face);
getInhomFaceGradPhi(thisFaceFlux,face,a_comp,a_phi,a_probLo,a_dx,a_idir,
a_side,a_dit,a_time,false,centroid);
a_faceFlux += thisFaceFlux*weight;
}
a_faceFlux *= ebisBox.areaFrac(faces[0]);
}
else
{
MayDay::Error("DirichletPoissonDomainBC::getInhomFaceFlux has multi-valued faces (or could be 0 faces)");
//could have done something like before by adding a division by number of faces but this does not use faceStencil above
// int numFaces = 0;
// for (int i = 0; i < faces.size(); i++)
// {
// const RealVect centroid = ebisBox.centroid(faces[i]);
// Real thisFaceFlux;
// getFaceGradPhi(thisFaceFlux,faces[i],a_comp,a_phi,a_probLo,
// a_dx,a_idir,a_side,a_dit,a_time,false,centroid,a_useHomogeneous);
// a_faceFlux += thisFaceFlux;
// numFaces ++;
// }
// if (numFaces > 1){a_faceFlux /= Real(numFaces);}
}
}
}
///
/**
This is called by DirichletPoissonDomainBC::getInhomFaceFlux
when enforcing boundary gradients in projection (called by macEnforceGradientBC).
Returns only the inhomogeneous (boundary-value) contribution of the
face gradient; the phi-dependent term is handled by the stencil.
*/
void DirichletPoissonDomainBC::getInhomFaceGradPhi(Real& a_faceFlux,
const FaceIndex& a_face,
const int& a_comp,
const EBCellFAB& a_phi,
const RealVect& a_probLo,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const DataIndex& a_dit,
const Real& a_time,
const bool& a_useAreaFrac,
const RealVect& a_centroid)
{
int iside = -sign(a_side);
const Real ihdx = 2.0 / a_dx[a_idir];
Real value = -1.e99;
if (m_onlyHomogeneous)
{
value = 0.0;
}
else if (m_isFunctional)
{
const RealVect normal = EBArith::getDomainNormal(a_idir,a_side);
RealVect point = EBArith::getFaceLocation(a_face,a_dx,a_probLo);
value = m_func->value(a_face,a_side,a_dit,point,normal,a_time,a_comp);
}
else
{
value = m_value;
}
//higher-order Dirichlet bc
// Vector<FaceIndex> facesInsideDomain = ebisBox.getFaces(vof,a_idir,flip(a_side));
// if (facesInsideDomain.size() == 1)
// {
// const VolIndex& vofNextInsideDomain = facesInsideDomain[0].getVoF(flip(a_side));
// a_faceFlux = iside*(9.0*a_phi(vof,a_comp) - a_phi(vofNextInsideDomain,a_comp) - 8.0*value)/(3.0*a_dx[a_idir]);
// }
// else
// {
// a_faceFlux = iside * ihdx * (a_phi(vof,a_comp) - value);
// }
a_faceFlux = iside * ihdx * (- value);
}
void DirichletPoissonDomainBC::getFluxStencil( VoFStencil& a_stencil,
const VolIndex& a_vof,
const int& a_comp,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const EBISBox& a_ebisBox)
{
const ProblemDomain& domainBox = a_ebisBox.getDomain();
Vector<FaceIndex> faces = a_ebisBox.getFaces(a_vof, a_idir, a_side);
if (faces.size() > 0)
{
if (faces.size()==1)
{
CH_assert(faces[0].isBoundary());
IntVectSet cfivs;
FaceStencil faceSten = EBArith::getInterpStencil(faces[0],
cfivs,
a_ebisBox,
domainBox);
for (int isten=0; isten < faceSten.size(); isten++)
{
const Real& weight = faceSten.weight(isten);
const FaceIndex& face = faceSten.face(isten);
VoFStencil thisStencil;
getFluxStencil(thisStencil, face, a_comp, a_dx, a_idir, a_side, a_ebisBox);
thisStencil *= weight;
a_stencil += thisStencil;
}
a_stencil *= a_ebisBox.areaFrac(faces[0]);
}
else
{
MayDay::Error("DirichletPoissonDomainBC::getFluxStencil has multi-valued faces");
}
}
a_stencil *= 1./a_dx[a_idir];
a_stencil *= Real(sign(a_side));//this sign is for which side is differenced
}
void DirichletPoissonDomainBC::getFluxStencil( VoFStencil& a_stencil,
const FaceIndex& a_face,
const int& a_comp,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const EBISBox& a_ebisBox)
{
if (m_ebOrder == 1)//pressure
{
getFirstOrderFluxStencil(a_stencil, a_face, a_comp,
a_dx, a_idir, a_side,
a_ebisBox);
}
else if (m_ebOrder == 2)//velocity
{
getSecondOrderFluxStencil(a_stencil, a_face, a_comp,
a_dx, a_idir, a_side,
a_ebisBox);
}
else
{
MayDay::Error("DirichletPoissonDomainBC::getFluxStencil -- bad BC order");
}
}
void DirichletPoissonDomainBC::getFirstOrderFluxStencil( VoFStencil& a_stencil,
const FaceIndex& a_face,
const int& a_comp,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const EBISBox& a_ebisBox)
{
const VolIndex& vof = a_face.getVoF(flip(a_side));
const Real isign = Real(sign(a_side));//this sign is for the extrapolation direction
const Real weight = -isign*2.0/a_dx[a_idir];
a_stencil.add(vof, weight, a_comp);
}
void DirichletPoissonDomainBC::getSecondOrderFluxStencil( VoFStencil& a_stencil,
const FaceIndex& a_face,
const int& a_comp,
const RealVect& a_dx,
const int& a_idir,
const Side::LoHiSide& a_side,
const EBISBox& a_ebisBox)
{
const VolIndex& vof = a_face.getVoF(flip(a_side));
const Real isign = Real(sign(a_side));//this sign is for the extrapolation direction
const Real weight = isign*1.0/(3.0*a_dx[a_idir]);
Vector<FaceIndex> facesInsideDomain = a_ebisBox.getFaces(vof,a_idir,flip(a_side));
if (facesInsideDomain.size() == 1)
{
const VolIndex& vofNextInsideDomain = facesInsideDomain[0].getVoF(flip(a_side));
a_stencil.add(vofNextInsideDomain, weight, a_comp);
a_stencil.add( vof, -9.0*weight, a_comp);
}
else
{
a_stencil.add(vof, -6.0*weight, a_comp);
}
}
DirichletPoissonDomainBCFactory::DirichletPoissonDomainBCFactory()
: m_onlyHomogeneous(true),
m_isFunctional(false),
m_value(12345.6789),
m_func(RefCountedPtr<BaseBCValue>()),
m_ebOrder(2)
{
}
DirichletPoissonDomainBCFactory::~DirichletPoissonDomainBCFactory()
{
}
void DirichletPoissonDomainBCFactory::setValue(Real a_value)
{
m_value = a_value;
m_func = RefCountedPtr<BaseBCValue>();
m_onlyHomogeneous = false;
m_isFunctional = false;
}
void DirichletPoissonDomainBCFactory::setFunction(RefCountedPtr<BaseBCValue> a_func)
{
m_value = 12345.6789;
m_func = a_func;
m_onlyHomogeneous = false;
m_isFunctional = true;
}
//use this for order of domain boundary
void DirichletPoissonDomainBCFactory::setEBOrder(int a_ebOrder)
{
CH_assert(a_ebOrder >= 1);
CH_assert(a_ebOrder <= 2);
m_ebOrder = a_ebOrder;
}
DirichletPoissonDomainBC*
DirichletPoissonDomainBCFactory::
create(const ProblemDomain& a_domain,
const EBISLayout& a_layout,
const RealVect& a_dx)
{
DirichletPoissonDomainBC* newBC = new DirichletPoissonDomainBC();
newBC->setEBOrder(m_ebOrder);
if (!m_onlyHomogeneous)
{
if (m_isFunctional)
{
newBC->setFunction(m_func);
}
else
{
newBC->setValue(m_value);
}
}
return newBC;
}
#include "NamespaceFooter.H"
|
Various applications may involve transferring power and data through a galvanic isolation barrier of, for example, several kilovolts. Systems used in the industrial field (e.g., high-side gate drivers), the medical field (e.g., implantable devices), in isolated sensor interfaces and lighting may be exemplary of such applications.
These isolated systems may be designed to provide galvanic isolation in the range of 2-10 kV. Dynamic isolation on the order of several tens of kV/μs may also be a desired feature, for example in order to handle rapid shifts in the ground references. Because of parasitic capacitive couplings between isolated units, such as interfaces, these ground shifts may inject a common-mode current, ICM. The injected current may in turn produce dangerous overvoltages and/or data transmission errors, degrading the Bit Error Rate (BER).
Such a current ICM may include a dc component proportional both to the parasitic capacitance of the isolation barrier, CP, and to the (maximum) voltage slew rate, dVCM/dt, between two isolated units, i.e., ICM=CP dVCM/dt. The current ICM may also convey high-frequency harmonics.
Common-mode transient immunity (CMTI) defines the maximum voltage slew rate (dV/dt) between two isolated interfaces that an isolated system is able to withstand.
There is a need in the art to address the drawbacks outlined in the foregoing, for example by facilitating constant common-mode transient immunity (CMTI) performance at increasing data rates. |
The Installation Support and Programs Management Directorate partners with U.S. Army Corps of Engineers’ Major Subordinate Commands and Districts, and other government organizations, to support Department of Defense installations and other federal agencies worldwide through a wide range of technical and contracting resources.
Our programs are executed in accordance with the Project Management Business Process/Project Management Community of Practice and focus on Sustainment, Restoration, and Modernization and reimbursable projects and programs.
As the Corps' Installation Support Center of Expertise (IS-CX), Installation Support partners with Corps of Engineers' districts and labs to provide support in public works business processes, systems, training and contingency support.
We provide quality engineering and design services in areas such as conforming storage facilities, intrusion detection systems, utility monitoring and control systems, and ranges and training lands management. We also provide innovative services in privatization and competitive sourcing; facility operation, maintenance, repair and rehabilitation; fire protection; furniture and furnishings; and environmental planning.
Read more about our programs on the fact sheets page or click on the program link to the right.
For more information, call 256-895-1230.
A recently renovated service control point at Barksdale Air Force Base, Louisiana, ensures fuel is delivered to the Air Force 2nd Bomb Wing’s stable of B-52 Stratofortresses (seen in background), a long-range heavy bomber serving as a vital component to the Air Force Global Strike Command. Huntsville Center’s Installation and Support and Programs Management DLA-Fuels program manages a maintenance and repair service program sustaining worldwide fueling capability to the Department of Defense and other agencies.
The U.S. Air Force Personnel Center at Joint Base San Antonio-Randolph in Texas has been restored to its historic look. The Facilities Repair and Renewal Project was the first comprehensive renovation of the building since it was built in the 1930s.
Technicians install information technology equipment at the new U.S. Army Corps of Engineers Western Processing Center in Hillsboro, Oregon. Due to weight limitations at the Western Processing Center’s original location, all equipment had to be moved out by March 19 and in its new location and operational by March 22. Huntsville Center’s Information Technology Services ACE-IT branch supported the requirement awarding a single $1.3 million Firm Fixed Price contract to a Portland, Oregon area IT firm on Dec. 24. The work was completed within the allotted timeline and met all operational requirements.
An exterior view of the newly-unveiled Martin Army Community Hospital located at Fort Benning in Georgia. Since opening in November 2014, the hospital’s 745,000 square-foot, state-of-the-art facility improves the area’s medical capacity to provide inpatient, outpatient and ancillary services to a military community of more than 90,000 Soldiers, family members and retirees.
Bhate employees work to deconstruct a chapel - one of three buildings selected for the deconstruction pilot program - on Fort Leonard Wood, Missouri. Materials from the chapel were salvaged for reuse or recycle.
Leslie Yarbrough (left), program manager, and Sara Cook, a project manager, U.S. Army Engineering and Support Center Huntsville Furnishings Program, complete a quality assurance inspection on equipment purchased as part of a $385,000 project within the Gray Eagle Unmanned Aerial System (UAS) cantonment area that included a company operations facility, tactical equipment maintenance facility, operations and storage facility, and control tower.
The Fort Irwin, California, REM developed and coordinated a 1MW concentrated solar array through the Environmental Security Test Certification Program (ESTCP). The concentrated solar array system tracks the sun to optimize exposure, which increases efficiency over normal rooftop or stand-alone photovoltaic systems. This project will save Fort Irwin around 2.4M kWh per year at a value of $310,000 per year, an equivalent savings of 8,392 MBTU.
Gate 1 at Fort Carson, Colorado, was part of a Huntsville Center Access Control Point Program project.
Huntsville Center’s Communication Infrastructure and Systems Support program, or CIS2, coordinated with Norfolk District to provide technical engineering, project management and acquisition support to equip the DoDEA Europe South District’s Sigonella Elementary School and Sigonella Middle High School with Uninterruptible Power Supply systems.
Huntsville Center’s Fuels team had a very short period of time to clean, inspect and repair a fuel storage tank used to hold and dispense Jet Propellant Thermally Stable, a jet fuel created specifically for the U-2 reconnaissance aircraft.
Providing information technology and communications infrastructure for any military facility requires specialized technical expertise, but doing so for military medical treatment facilities requires an even more distinct set of skills.
The Center’s Ordnance and Explosives Directorate, its Installation Support and Programs Management Directorate’s Military Support, Facilities and Electronic Technology divisions and its Engineering Directorate’s Environmental Protection, Utilities and Geosciences offices are remaining behind in leased office suites near the Center’s old main facility at 4820 University Square.
Huntsville Center’s Energy Savings Performance Contracting program managers and contracting specialists coordinated the project with AECOM, the energy service contractor, and the garrison’s directorate of public works.
The U.S. Army Engineering and Support Center, Huntsville does more than assist global stakeholders in accomplishing their project goals. Huntsville Center helps organizations reach their vision.
Inhabiting virtually every Department of Defense facility is a hidden world of processes and operations, but like many vital functions, they go unnoticed until the moment they are interrupted.
The National Association of Ordnance Contractors honored William Sargent, director of Ordnance and Explosives at the U.S. Army Engineering and Support Center, Huntsville, with the NAOC Lifetime Leadership Award at their 2018 Membership Meeting in St. Petersburg, Florida, Dec. 5.
Huntsville Center’s MOT is providing complete turn-key project support for the equipping and transitioning of staff and patients into the Brian Allgood Army Community Hospital and Ambulatory Care Center at Camp Humphreys, Republic of Korea. The new 772,000-square-foot facility is set to open in November 2019.
In fiscal 2018, the U.S. Army Engineering and Support Center, Huntsville executed more than $3 billion in contract actions, a 50 percent increase from what the Center accomplished in the previous fiscal year. |
OBJECTIVE To analyze and compare physicians' characteristics, remuneration, compliance with regulation, and the services offered between clinics adjacent to pharmacies (CAF) and independent medical clinics (CMI). MATERIALS AND METHODS A questionnaire was applied to 239 physicians in 18 Mexican states and the Federal District in 2012. RESULTS Physicians in CAF had less professional experience (5 versus 12 years), fewer postgraduate studies (61.2 versus 81.8%) and lower average monthly salaries (USD 418 versus USD 672) than their peers in CMI. CAF showed lower compliance with medical record keeping and prescribing requirements. CONCLUSIONS The employment situation of physicians in CAF is more precarious than in CMI. It is necessary to strengthen enforcement of existing regulations and to develop policies informed by performance monitoring, particularly, but not exclusively, in CAF. |
extern crate rocket;
extern crate rocket_contrib;
use rocket_contrib::serve::StaticFiles;
fn main() {
    // Serve the example files and slide decks as static asset trees.
    rocket::ignite()
        .mount("/examples", StaticFiles::from("examples/"))
        .mount("/slides", StaticFiles::from("slides/"))
        .launch();
}
|
package com.billow.common.redis;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.data.redis.RedisProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.redis.connection.RedisPassword;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;
/**
* Redis configuration: sets up the Lettuce connection factory and RedisTemplate.
*
* @author LiuYongTao
* @date 2019/7/29 9:13
*/
@Configuration
public class RedisConfig {
private static Logger logger = LoggerFactory.getLogger(RedisConfig.class);
@Autowired
private RedisProperties redisProperties;
@Bean
@Primary
public LettuceConnectionFactory lettuceConnectionFactory() {
RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration();
redisStandaloneConfiguration.setDatabase(redisProperties.getDatabase());
redisStandaloneConfiguration.setHostName(redisProperties.getHost());
redisStandaloneConfiguration.setPort(redisProperties.getPort());
redisStandaloneConfiguration.setPassword(RedisPassword.of(redisProperties.getPassword()));
LettuceClientConfiguration.LettuceClientConfigurationBuilder lettuceClientConfigurationBuilder = LettuceClientConfiguration.builder();
LettuceConnectionFactory factory = new LettuceConnectionFactory(redisStandaloneConfiguration,
lettuceClientConfigurationBuilder.build());
logger.info("LettuceConnectionFactory bean init success.");
return factory;
}
@Bean
@Primary
public RedisTemplate<?, ?> redisTemplate(@Qualifier("lettuceConnectionFactory") LettuceConnectionFactory connectionFactory) {
RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
initDomainRedisTemplate(redisTemplate, connectionFactory);
return redisTemplate;
}
/**
 * Configure how data is serialized when stored in Redis
*
* @param template
* @param factory
*/
private void initDomainRedisTemplate(RedisTemplate<String, Object> template, LettuceConnectionFactory factory) {
// Serialize keys as plain strings
StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
FastJsonRedisSerializer fastJsonRedisSerializer = new FastJsonRedisSerializer(Object.class);
// String serializer for keys
template.setKeySerializer(stringRedisSerializer);
// String serializer for hash keys
template.setHashKeySerializer(stringRedisSerializer);
// JSON (FastJson) serializer for values and hash values
template.setValueSerializer(fastJsonRedisSerializer);
template.setHashValueSerializer(fastJsonRedisSerializer);
// Enable transaction support
template.setEnableTransactionSupport(true);
template.setConnectionFactory(factory);
template.afterPropertiesSet();
}
}
|
package mage.cards.h;
import mage.abilities.Ability;
import mage.abilities.effects.OneShotEffect;
import mage.cards.CardImpl;
import mage.cards.CardSetInfo;
import mage.constants.CardType;
import mage.constants.Outcome;
import mage.game.Game;
import mage.players.Player;
import java.util.UUID;
/**
* @author TheElk801
*/
public final class HeartwarmingRedemption extends CardImpl {
public HeartwarmingRedemption(UUID ownerId, CardSetInfo setInfo) {
super(ownerId, setInfo, new CardType[]{CardType.INSTANT}, "{2}{R}{W}");
// Discard all the cards in your hand, then draw that many cards plus one. You gain life equal to the number of cards in your hand.
this.getSpellAbility().addEffect(new HeartwarmingRedemptionEffect());
}
private HeartwarmingRedemption(final HeartwarmingRedemption card) {
super(card);
}
@Override
public HeartwarmingRedemption copy() {
return new HeartwarmingRedemption(this);
}
}
class HeartwarmingRedemptionEffect extends OneShotEffect {
HeartwarmingRedemptionEffect() {
super(Outcome.Benefit);
staticText = "Discard all the cards in your hand, then draw that many cards plus one. " +
"You gain life equal to the number of cards in your hand.";
}
private HeartwarmingRedemptionEffect(final HeartwarmingRedemptionEffect effect) {
super(effect);
}
@Override
public HeartwarmingRedemptionEffect copy() {
return new HeartwarmingRedemptionEffect(this);
}
@Override
public boolean apply(Game game, Ability source) {
Player player = game.getPlayer(source.getControllerId());
if (player == null) {
return false;
}
int discarded = player.discard(player.getHand().size(), false, source, game).size();
player.drawCards(discarded + 1, game);
player.gainLife(player.getHand().size(), game, source);
return true;
}
}
// Good night, sweet prince :( |
It certainly has been quite some time since my last series of far-fetched predictions, Spidey fans. Maybe some of you are wondering why I never kept track of what I got right and wrong about “Ends of the Earth” like I did “Spider-Island.” Chalk this up to an internet connection problem that wasn’t resolved until the story was already well underway, as well as a growing disinterest in the overall story in general. Besides, it was becoming apparent that virtually everything I had guessed about that story was utterly wrong, save for how Doc Ock’s plan would have something to do with accelerating global warming. In any case, I sincerely apologize.

Still, there had to be some way to make it up to all of you, dear readers. And wouldn’t you know it, the kind folks at Marvel have delivered to us Spidey-fans a potential gold mine of possibilities, theories, and guesses that your humble prognosticator just couldn’t pass up: Amazing Spider-Man #700.

We’ve been told that the aftermath of “Ends of the Earth” will lead into this story, that every story from now to December is leading up to this one issue, and that the repercussions of this issue will forever change Spider-Man as we know him and have “seismic changes” on the entire Marvel Universe. If that weren’t enough hyperbole, both Spider-Man editor Steve Wacker and Marvel’s Editor-In-Chief Axel Alonso have suggested that this year, the 50th Anniversary of Spider-Man, may be Spider-Man’s “final year” and that maybe “Pete has gone as far as he can go as Spider-Man.” And with the announcement of Marvel NOW!, in which several titles are going to be re-launched with new #1 issues in the wake of Avengers vs. X-Men over the course of October 2012 to February 2013, many are presuming that Amazing Spider-Man will be one of those titles--especially since Marvel Director of Communications Arune Singh stated at this year’s San Diego Comic-Con that Amazing Spider-Man would indeed be affected by Marvel NOW! So what do I think is going to happen? Well, to properly answer that question, I first have to talk about what I think is going to take place in the story that apparently comes just before issue 700, something that is titled…

DYING WISH

In an interview with MTV, Dan Slott revealed that this would be the name for the story in Amazing Spider-Man #698 - #699. Naturally, this has caused all kinds of speculation on various message boards that someone, perhaps a major character, is going to die in this story, and guesses have ranged from Aunt May to Mary Jane to Julia Carpenter, aka the new Madame Web. However, there is already a character who is dying and who has already expressed his “dying wish.” That person is none other than Doctor Octopus.
At the conclusion of “Ends of the Earth,” Doc Ock has been apprehended, imprisoned and put on a life-support machine that, supposedly, blocks any telepathic commands he can send to his robotic arms or octobots. However, at the end of issue #8, we can see that one of the octobots is still clearly active. How? Well, remember way back in Amazing Spider-Man #600 when the octobots tried to electronically sabotage Aunt May and J. Jonah Jameson Sr.’s wedding, and later kidnapped Jameson Sr., despite the fact Doc Ock stated that his personal feelings for May had no bearing on his plans? That established not only that Doc Ock’s octobots are controlled by him mentally but that they also act on his subconscious desires. It stands to reason there would be some residual brain patterns from Doc Ock pre-programmed into the octobots that would allow them to keep functioning despite Doc Ock’s higher brainwaves being blocked. Also, if they are acting purely on his subconscious desires, their first order of business would be to free Doc Ock, which, given their ability to possess people, could be easily done.

Now why would Doc Ock want to be free if he’s just mere moments away from death? Because gradually wasting away in Ryker’s hospital ward is not how Doc Ock wants to go out; his ultimate goal, as he stated in “Ends of the Earth,” is to engage in a “battle to the death” with his arch-enemy Spider-Man, either to kill Spidey before he dies, or for Spidey to kill him. That very well could be the explanation behind the titular “Dying Wish.” But surely Spidey wouldn’t go as far as killing Doc Ock, would he?
After all, we know that Spidey has a strict “no killing” policy when it comes to his enemies, and as he told Julia Carpenter/Madame Web back in Amazing Spider-Man #666, the moment he takes a life “he’d stop being Spider-Man.” And that’s exactly the point.

As he told Mary Jane in Amazing Spider-Man #688, Peter adopted his vow of “When I’m around, no one dies” because he was growing sick and tired of people he knew and cared about being killed because of him, and because “believing he could [save everyone all the time] was all that kept him from going crazy.” Now, because he was forced to sacrifice Silver Sable to stop Doc Ock (if, in fact, given what Julia says in Amazing Spider-Man #690, Sable actually is dead), his failure is already starting to take an emotional toll and affects how he fights bad guys. As seen in the latest storyline, “No Turning Back,” Peter has developed a very short fuse, becoming more ruthless and less inhibited, and doesn’t crack as many jokes; and in light of his promising that “When [he’s] done with Morbius, he’s not going to resemble a vampire anymore, living or otherwise,” this appears to show Peter is now willing to cross the line into potentially killing his enemies. So, given the right circumstances, Peter can now be easily pushed into taking Doc Ock’s life by no less than Doc Ock himself.

It wouldn’t take much, either. Not only would Peter still be furious at Doc Ock for what he has done, but imagine if a newly freed Doc Ock is now acting on what once were subconscious desires and thoughts? He could be searching for his lady love, Aunt May, tearing up New York in an effort to find her, which is more than enough for Spidey to go after him. Moreover, considering Spidey was able to mentally control the octobots on two separate occasions (once during Amazing Spider-Man #600 and another time during “Spider-Island”), Doc Ock could learn from them that Peter Parker is Spider-Man.
And because he now knows everything about Spidey, he could have the octobots attack everyone Peter knows as they are fighting, and Doc Ock could say that Mary Jane Watson is about to die and there’s nothing Peter can do to stop it--unless Spidey kills him, thereby shutting down the octobots. This would make Spidey go ballistic, and once he took Doc Ock’s life in pure blind rage, Spidey would realize the horror of what he has done, that Doc Ock did win in the end, and that he has betrayed everything he was supposed to stand for.

Thus, when issue #700 starts, Peter will, of course, be facing the repercussions of his actions and, because he has been driven to take a life, is likely entertaining the possibility of quitting being Spider-Man “for good.” Now I know what you’re thinking: wouldn’t Peter giving up being Spider-Man be an abdication of his responsibility? However, there are actually three answers for this. First is the obvious one: Mayor J. Jonah Jameson would certainly cite this as proof he was right about Spider-Man being a “menace” all along, and considering how anti-Spider-Man the new police chief Pratchett is, he’d be more than happy to carry out Jameson’s will in issuing a warrant for Spidey “dead or alive.” Second, Peter would believe the greater responsibility would be for him to quit, because he would know that he could easily get into a situation again in which he would be forced to take a life to save lives, and Spider-Man has never been about that. And if Peter cannot live up to that ideal, then he shouldn’t be Spider-Man.

That leads to the final answer: Andy Maguire, aka Alpha.

Marvel hasn’t been the least bit shy about talking about the obvious parallels between Peter and Alpha even before Amazing Spider-Man #692 (Alpha’s debut) hits the stands.
His origin story is very similar to Peter’s; he’s a 15-year-old high school student, just like Peter was when he first became Spider-Man; he is, according to Dan Slott, “everything Peter wished he could have been back in high school”; and finally, his name is a mash-up of Andrew Garfield and Tobey Maguire, the two actors who have played Spider-Man in the movies. It’s everything but showing Alpha dressed in the Spider-Man costume sans mask at this point. Perhaps we’ve already seen Alpha as the new Spider-Man in the Marvel NOW! teaser, which would also explain why Spidey’s costume has been slightly altered, and it wouldn’t surprise me if the last page of #700 is Alpha swinging over the Manhattan skyline in that very costume.

It’s also been revealed that, given how Alpha gets his powers, Peter feels responsible for him. Passing the Spider-Man mantle onto Alpha and continuing to train and mentor him would be an extension of that idea, creating a dynamic similar to the one between Bruce Wayne and Terry McGinnis in Batman Beyond. It wouldn’t even matter that Alpha has different powers, which appear to be energy-based in nature; in fact, energy-based powers would make it easier for him to replicate Spidey’s. He presumably would have super-strength and speed, and, if he can fly, that would allow him to still leap great distances and wall-crawl. If he can sense energy signatures, that would give him a “spider-sense.” Plus, he would be able to have new “spider powers,” like energizing the webbing, magnetizing objects, or “stinging” his opponents with a touch. That would also get Alpha out of the apparent jam of having a public identity, as also revealed by Dan Slott and others.

Relinquishing his identity as Spider-Man would also allow Peter to get back together with Mary Jane, something which has been teased as far back as Amazing Spider-Man #648.
“One Moment in Time” also clearly established that, while Peter continued to be Spider-Man, he and MJ could never be more than just friends despite their love for each other. With Peter no longer having to be Spider-Man, that would no longer be a problem, and it would allow both Peter and MJ to confess that they still have feelings for one another and take another chance on love. Thus, we get the answer for what Dan Slott said about us getting “The Monkey’s Paw” in that we’re “getting what we want, just not the way we want it.”

Of course, Peter deciding to quit being Spider-Man and passing the mantle onto Alpha cannot be the only story in the 700th issue, since it will undoubtedly be extra-sized. So what could that other story be? Well, remember how I mentioned Doc Ock looking for Aunt May? What if he’s looking for her because Aunt May has been abducted…by the guy who is Aunt May’s new husband, the man who has been claiming to be J. Jonah Jameson’s dad all this time?

In Madame Web’s vision in Amazing Spider-Man #689, we see someone wearing a Spider-Man-like half-mask who has a white mustache and goatee. The only character who comes to mind with that specific facial hair is J. Jonah Jameson, Sr., but the mask, as others have pointed out, evokes images of a Spider-Man villain long thought to be dead--the Tarantula. Based on the other images in that vision, they all appear to be linked to the events that could happen in the upcoming Hobgoblin/Kingpin story “Danger Zone” (#695 - #697). If the solicits for that particular story are anything to go by, it seems that the Kingpin will acquire the means to create the spider-sense jammers but, instead of shutting off Peter’s spider-sense, they will overload it, making it so he’s not sure where the danger is coming from while also giving him a splitting headache in the process.
But the jammers would also affect anyone who is connected to the “web of life” or who has spider-like powers; thus a consequence would be that the man who thought he was Jameson Sr. would regain the memories and powers of the Tarantula.

Now you might ask, “What would Jonah's dad be doing in New York? Didn’t he and Aunt May move to Boston?” Well, that can easily be explained: given how Aunt May couldn’t get in touch with Peter during “Ends of the Earth” and he still hasn’t gotten back in touch with them, this would make them want to visit Peter for a few days to see what is going on, and the jammers get triggered right at that moment. As for why the Tarantula would look like Jameson’s dad, this could be the result of genetic tampering. And who is capable of genetic tampering? The same person who first hired the Tarantula way back during the original Clone Saga--the Jackal.

Remember, the Jackal was still alive at the end of “Spider-Island” and in possession of the Spider-Queen's remains to harvest her DNA. But what if, because of what the Spider-Queen had mutated into, she cannot be cloned, meaning that the Jackal has to use another host body? So he decides to settle on using Aunt May as a guinea pig. This of course means that, just before he gives up the webs, Peter has to save his Aunt May before the Jackal carries out his plans.

My short answer: probably. Along with the Marvel NOW! teaser, Marvel has admitted that Spider-Man would indeed be affected by the upcoming Marvel Comics re-launch. And we also have been told that there is at least one title not affected by Avengers vs. X-Men that would be getting the Marvel NOW! treatment as a result of the changes that happen in that particular title.
And if the “Alpha becomes the new Spider-Man” theory holds water, then it makes sense for Amazing Spider-Man to be re-launched with a new #1, since it would be a new story about a brand-new Spider-Man as opposed to Peter Parker’s.

This has also led to some wondering whether or not Dan Slott would step down as lead writer of Amazing Spider-Man as well. Interestingly enough, Marvel hasn’t exactly given a straightforward “yes” or “no” answer to this question, only that after the 700th issue “Dan [Slott] will definitely have to hide.” Then there is this tweet from Steve Wacker in the wake of a miniseries getting a last-minute cancellation before the first issue was even out in print: “[Joe Keatinge] and Rich Elson both have projects coming up in the Spidey office that we’ll be announcing soon. They're bloody great!” Hmm…is this the new creative team of Amazing Spider-Man? It could very well be just a story for another book, and Dan Slott will still be on board.

Then again, maybe, as usual, I’m reading too much into all this. Unless there really is something to that dubious Mayan Apocalypse, we are sure to find out what's really going on over the next five months as Spider-Man closes out his 50th year. Until then, I’ll be keeping much better track of upcoming developments this time around. After all, we may be following the adventures of a different webhead this time next year.
|
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from __future__ import unicode_literals
import uuid
import logging
import time
from typing import List
from uamqp import types, errors # type: ignore
from uamqp import ReceiveClient, Source # type: ignore
from azure.eventhub.common import EventData, EventPosition
from azure.eventhub.error import _error_handler
from ._consumer_producer_mixin import ConsumerProducerMixin
log = logging.getLogger(__name__)
class EventHubConsumer(ConsumerProducerMixin): # pylint:disable=too-many-instance-attributes
"""
A consumer responsible for reading EventData from a specific Event Hub
partition and as a member of a specific consumer group.
A consumer may be exclusive, which asserts ownership over the partition for the consumer
group to ensure that only one consumer from that group is reading from the partition.
These exclusive consumers are sometimes referred to as "Epoch Consumers."
A consumer may also be non-exclusive, allowing multiple consumers from the same consumer
group to be actively reading events from the partition. These non-exclusive consumers are
sometimes referred to as "Non-Epoch Consumers."
Please use the method `create_consumer` on `EventHubClient` for creating `EventHubConsumer`.
"""
_timeout = 0
_epoch_symbol = b'com.microsoft:epoch'
_timeout_symbol = b'com.microsoft:timeout'
def __init__(self, client, source, **kwargs):
"""
Instantiate a consumer. EventHubConsumer should be instantiated by calling the `create_consumer` method
in EventHubClient.
:param client: The parent EventHubClient.
:type client: ~azure.eventhub.client.EventHubClient
:param source: The source EventHub from which to receive events.
:type source: str
:param prefetch: The number of events to prefetch from the service
for processing. Default is 300.
:type prefetch: int
:param owner_level: The priority of the exclusive consumer. An exclusive
consumer will be created if owner_level is set.
:type owner_level: int
"""
event_position = kwargs.get("event_position", None)
prefetch = kwargs.get("prefetch", 300)
owner_level = kwargs.get("owner_level", None)
keep_alive = kwargs.get("keep_alive", None)
auto_reconnect = kwargs.get("auto_reconnect", True)
super(EventHubConsumer, self).__init__()
self._running = False
self._client = client
self._source = source
self._offset = event_position
self._messages_iter = None
self._prefetch = prefetch
self._owner_level = owner_level
self._keep_alive = keep_alive
self._auto_reconnect = auto_reconnect
self._retry_policy = errors.ErrorPolicy(max_retries=self._client._config.max_retries, on_error=_error_handler) # pylint:disable=protected-access
self._reconnect_backoff = 1
self._link_properties = {}
self._redirected = None
self._error = None
partition = self._source.split('/')[-1]
self._partition = partition
self._name = "EHConsumer-{}-partition{}".format(uuid.uuid4(), partition)
if owner_level:
self._link_properties[types.AMQPSymbol(self._epoch_symbol)] = types.AMQPLong(int(owner_level))
link_property_timeout_ms = (self._client._config.receive_timeout or self._timeout) * 1000 # pylint:disable=protected-access
self._link_properties[types.AMQPSymbol(self._timeout_symbol)] = types.AMQPLong(int(link_property_timeout_ms))
self._handler = None
def __iter__(self):
return self
def __next__(self):
retried_times = 0
last_exception = None
while retried_times < self._client._config.max_retries: # pylint:disable=protected-access
try:
self._open()
if not self._messages_iter:
self._messages_iter = self._handler.receive_messages_iter()
message = next(self._messages_iter)
event_data = EventData._from_message(message) # pylint:disable=protected-access
self._offset = EventPosition(event_data.offset, inclusive=False)
retried_times = 0
return event_data
except Exception as exception: # pylint:disable=broad-except
last_exception = self._handle_exception(exception)
self._client._try_delay(retried_times=retried_times, last_exception=last_exception, # pylint:disable=protected-access
entity_name=self._name)
retried_times += 1
log.info("%r operation has exhausted retry. Last exception: %r.", self._name, last_exception)
raise last_exception
def _create_handler(self):
alt_creds = {
"username": self._client._auth_config.get("iot_username") if self._redirected else None, # pylint:disable=protected-access
"password": self._client._auth_config.get("iot_password") if self._redirected else None # pylint:disable=protected-access
}
source = Source(self._source)
if self._offset is not None:
source.set_filter(self._offset._selector()) # pylint:disable=protected-access
self._handler = ReceiveClient(
source,
auth=self._client._get_auth(**alt_creds), # pylint:disable=protected-access
debug=self._client._config.network_tracing, # pylint:disable=protected-access
prefetch=self._prefetch,
link_properties=self._link_properties,
timeout=self._timeout,
error_policy=self._retry_policy,
keep_alive_interval=self._keep_alive,
client_name=self._name,
properties=self._client._create_properties( # pylint:disable=protected-access
self._client._config.user_agent)) # pylint:disable=protected-access
self._messages_iter = None
def _redirect(self, redirect):
self._messages_iter = None
super(EventHubConsumer, self)._redirect(redirect)
def _open(self):
"""
Open the EventHubConsumer using the supplied connection.
If the handler has previously been redirected, the redirect
context will be used to create a new handler before opening it.
"""
# pylint: disable=protected-access
self._redirected = self._redirected or self._client._iothub_redirect_info
if not self._running and self._redirected:
self._client._process_redirect_uri(self._redirected)
self._source = self._redirected.address
super(EventHubConsumer, self)._open()
def _open_with_retry(self):
return self._do_retryable_operation(self._open, operation_need_param=False)
def _receive(self, timeout_time=None, max_batch_size=None, **kwargs):
last_exception = kwargs.get("last_exception")
data_batch = []
self._open()
remaining_time = timeout_time - time.time()
if remaining_time <= 0.0:
if last_exception:
log.info("%r receive operation timed out. (%r)", self._name, last_exception)
raise last_exception
return data_batch
remaining_time_ms = 1000 * remaining_time
message_batch = self._handler.receive_message_batch(
max_batch_size=max_batch_size,
timeout=remaining_time_ms)
for message in message_batch:
event_data = EventData._from_message(message) # pylint:disable=protected-access
self._offset = EventPosition(event_data.offset)
data_batch.append(event_data)
return data_batch
def _receive_with_retry(self, timeout=None, max_batch_size=None, **kwargs):
return self._do_retryable_operation(self._receive, timeout=timeout,
max_batch_size=max_batch_size, **kwargs)
@property
def queue_size(self):
# type:() -> int
"""
The current size of the unprocessed Event queue.
:rtype: int
"""
# pylint: disable=protected-access
if self._handler._received_messages:
return self._handler._received_messages.qsize()
return 0
def receive(self, max_batch_size=None, timeout=None):
# type: (int, float) -> List[EventData]
"""
Receive events from the EventHub.
:param max_batch_size: Receive a batch of events. Batch size will
be up to the maximum specified, but will return as soon as service
returns no new events. If combined with a timeout and no events are
retrieved before the timeout, the result will be empty. If no batch
size is supplied, the prefetch size will be the maximum.
:type max_batch_size: int
:param timeout: The maximum wait time to build up the requested message count for the batch.
If not specified, the default wait time specified when the consumer was created will be used.
:type timeout: float
:rtype: list[~azure.eventhub.common.EventData]
:raises: ~azure.eventhub.AuthenticationError, ~azure.eventhub.ConnectError, ~azure.eventhub.ConnectionLostError,
~azure.eventhub.EventHubError
Example:
.. literalinclude:: ../examples/test_examples_eventhub.py
:start-after: [START eventhub_client_sync_receive]
:end-before: [END eventhub_client_sync_receive]
:language: python
:dedent: 4
:caption: Receive events from the EventHub.
"""
self._check_closed()
timeout = timeout or self._client._config.receive_timeout # pylint:disable=protected-access
max_batch_size = max_batch_size or min(self._client._config.max_batch_size, self._prefetch) # pylint:disable=protected-access
return self._receive_with_retry(timeout=timeout, max_batch_size=max_batch_size)
def close(self, exception=None):
# type:(Exception) -> None
"""
Close down the handler. If the handler has already closed,
this will be a no op. An optional exception can be passed in to
indicate that the handler was shutdown due to error.
:param exception: An optional exception if the handler is closing
due to an error.
:type exception: Exception
Example:
.. literalinclude:: ../examples/test_examples_eventhub.py
:start-after: [START eventhub_client_receiver_close]
:end-before: [END eventhub_client_receiver_close]
:language: python
:dedent: 4
:caption: Close down the handler.
"""
if self._messages_iter:
self._messages_iter.close()
self._messages_iter = None
super(EventHubConsumer, self).close(exception)
next = __next__ # for python2.7
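The bounded-retry loop in `__next__` and `_do_retryable_operation` above follows a common retry-with-backoff pattern. A minimal, SDK-independent sketch of the same idea (the `retry_operation` helper and `flaky` operation are illustrative, not part of the Azure SDK):

```python
import time

def retry_operation(operation, max_retries=3, backoff=0.0):
    # Try the operation up to max_retries times, sleeping with exponential
    # backoff between attempts; re-raise the last exception on exhaustion.
    retried_times = 0
    last_exception = None
    while retried_times < max_retries:
        try:
            return operation()
        except Exception as exception:
            last_exception = exception
            time.sleep(backoff * (2 ** retried_times))
            retried_times += 1
    raise last_exception

# A transiently failing operation: fails twice, then succeeds.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "event"

print(retry_operation(flaky))  # prints "event" on the third attempt
```

The consumer above additionally resets its retry counter after each successful receive, so only consecutive failures count against the limit.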
|
package com.tencent.qcloud.costransferpractice.object;
import android.content.Intent;
import android.graphics.Color;
import android.os.Bundle;
import android.support.annotation.Nullable;
import android.text.TextUtils;
import android.view.Gravity;
import android.view.Menu;
import android.view.MenuInflater;
import android.view.MenuItem;
import android.widget.AbsListView;
import android.widget.ListView;
import android.widget.TextView;
import com.tencent.cos.xml.CosXmlService;
import com.tencent.cos.xml.exception.CosXmlClientException;
import com.tencent.cos.xml.exception.CosXmlServiceException;
import com.tencent.cos.xml.listener.CosXmlResultListener;
import com.tencent.cos.xml.model.CosXmlRequest;
import com.tencent.cos.xml.model.CosXmlResult;
import com.tencent.cos.xml.model.bucket.GetBucketRequest;
import com.tencent.cos.xml.model.bucket.GetBucketResult;
import com.tencent.cos.xml.model.object.DeleteObjectRequest;
import com.tencent.qcloud.costransferpractice.BuildConfig;
import com.tencent.qcloud.costransferpractice.CosServiceFactory;
import com.tencent.qcloud.costransferpractice.R;
import com.tencent.qcloud.costransferpractice.common.base.BaseActivity;
import com.tencent.qcloud.costransferpractice.transfer.DownloadActivity;
import com.tencent.qcloud.costransferpractice.transfer.UploadActivity;
/**
* Created by jordanqin on 2020/6/18.
 * Object list page
* <p>
* Copyright (c) 2010-2020 Tencent Cloud. All rights reserved.
*/
public class ObjectActivity extends BaseActivity implements AbsListView.OnScrollListener, ObjectAdapter.OnObjectListener {
public final static String ACTIVITY_EXTRA_BUCKET_NAME = "bucket_name";
public final static String ACTIVITY_EXTRA_FOLDER_NAME = "folder_name";
public final static String ACTIVITY_EXTRA_REGION = "bucket_region";
public final static String ACTIVITY_EXTRA_DOWNLOAD_KEY = "download_key";
private final int REQUEST_UPLOAD = 10001;
private CosXmlService cosXmlService;
private ListView listview;
private ObjectAdapter adapter;
private TextView footerView;
private String bucketName;
private String folderName;
private String bucketRegion;
// Whether the list has scrolled to the bottom
private boolean isBottom;
// Pagination marker
private String marker;
// Whether the results are truncated (used to determine whether all paged data has been loaded)
private boolean isTruncated;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.base_list_activity);
bucketName = getIntent().getStringExtra(ACTIVITY_EXTRA_BUCKET_NAME);
folderName = getIntent().getStringExtra(ACTIVITY_EXTRA_FOLDER_NAME);
bucketRegion = getIntent().getStringExtra(ACTIVITY_EXTRA_REGION);
if (getSupportActionBar() != null) {
if (TextUtils.isEmpty(folderName)) {
getSupportActionBar().setTitle(bucketName);
} else {
getSupportActionBar().setTitle(folderName);
}
}
listview = findViewById(R.id.listview);
listview.setOnScrollListener(this);
footerView = new TextView(this);
AbsListView.LayoutParams params = new AbsListView.LayoutParams(AbsListView.LayoutParams.MATCH_PARENT, AbsListView.LayoutParams.WRAP_CONTENT);
footerView.setPadding(0, 30, 0, 30);
footerView.setLayoutParams(params);
footerView.setGravity(Gravity.CENTER);
footerView.setTextColor(Color.parseColor("#666666"));
footerView.setTextSize(16);
listview.setFooterDividersEnabled(false);
listview.addFooterView(footerView);
if (TextUtils.isEmpty(BuildConfig.COS_SECRET_ID) || TextUtils.isEmpty(BuildConfig.COS_SECRET_KEY) ||
TextUtils.isEmpty(bucketRegion)) {
finish();
} else {
cosXmlService = CosServiceFactory.getCosXmlService(this, bucketRegion, BuildConfig.COS_SECRET_ID, BuildConfig.COS_SECRET_KEY, true);
getObject();
}
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
MenuInflater inflater = getMenuInflater();
inflater.inflate(R.menu.object, menu);
return super.onCreateOptionsMenu(menu);
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
if (item.getItemId() == R.id.upload) {
Intent intent = new Intent(this, UploadActivity.class);
intent.putExtra(ACTIVITY_EXTRA_REGION, bucketRegion);
intent.putExtra(ACTIVITY_EXTRA_BUCKET_NAME, bucketName);
intent.putExtra(ACTIVITY_EXTRA_FOLDER_NAME, folderName);
startActivityForResult(intent, REQUEST_UPLOAD);
return true;
}
return super.onOptionsItemSelected(item);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK && requestCode == REQUEST_UPLOAD) {
// TODO: 2020/6/19 Add the uploaded entity locally or refresh the whole list? Check how the official client implements this
// Temporary full refresh for now
marker = null;
getObject();
}
}
private void getObject() {
String bucketName = this.bucketName;
final GetBucketRequest getBucketRequest = new GetBucketRequest(bucketName);
// Prefix match: restricts the returned objects to keys beginning with the given prefix
if (!TextUtils.isEmpty(folderName)) {
getBucketRequest.setPrefix(folderName);
}
// On the first call there is no need to set the marker parameter; COS lists objects from the beginning
// To list the next page, set marker to the GetBucketResult.listBucket.nextMarker value returned by the previous listing
// If the returned GetBucketResult.listBucket.isTruncated is false, all matching objects have been listed
if (!TextUtils.isEmpty(marker)) {
getBucketRequest.setMarker(marker);
}
// Maximum number of entries returned in a single request; the default is 1000
getBucketRequest.setMaxKeys(100);
// The delimiter is a single character. If a Prefix is set,
// identical paths between the Prefix and the delimiter are grouped into a Common Prefix,
// and all Common Prefixes are listed. If there is no Prefix, grouping starts from the beginning of the key
getBucketRequest.setDelimiter("/");
// First page: show the loading dialog; subsequent pages: show the footer loading view
if (TextUtils.isEmpty(marker)) {
setLoading(true);
} else {
footerView.setText("正在加载数据...");
}
// Send the request asynchronously with a callback
cosXmlService.getBucketAsync(getBucketRequest, new CosXmlResultListener() {
@Override
public void onSuccess(CosXmlRequest request, CosXmlResult result) {
final GetBucketResult getBucketResult = (GetBucketResult) result;
isTruncated = getBucketResult.listBucket.isTruncated;
uiAction(new Runnable() {
@Override
public void run() {
// First page: dismiss the loading dialog; subsequent pages: footer loading.
// Capture whether this response is the first page BEFORE overwriting marker,
// otherwise the refresh branch below would test the NEXT page's marker.
boolean firstPage = TextUtils.isEmpty(marker);
if (firstPage) {
setLoading(false);
}
if (!isTruncated) {
footerView.setText("无更多数据");
}
marker = getBucketResult.listBucket.nextMarker;
if (adapter == null) {
adapter = new ObjectAdapter(ObjectEntity.listBucket2ObjectList(getBucketResult.listBucket, folderName),
ObjectActivity.this, ObjectActivity.this, folderName);
listview.setAdapter(adapter);
} else if (firstPage) {
// A refresh reset the marker, so replace the data set
adapter.setDataList(ObjectEntity.listBucket2ObjectList(getBucketResult.listBucket, folderName));
} else {
// Append the next page
adapter.addDataList(ObjectEntity.listBucket2ObjectList(getBucketResult.listBucket, folderName));
}
}
});
}
@Override
public void onFail(CosXmlRequest cosXmlRequest, CosXmlClientException clientException, CosXmlServiceException serviceException) {
// First page: dismiss the loading dialog; subsequent pages: show the error in the footer
if (TextUtils.isEmpty(marker)) {
setLoading(false);
toastMessage("获取对象列表失败");
} else {
footerView.setText("获取对象列表失败");
}
// Only one of the two exceptions is non-null in onFail; guard against NPE
if (clientException != null) {
clientException.printStackTrace();
}
if (serviceException != null) {
serviceException.printStackTrace();
}
}
});
}
@Override
public void onScrollStateChanged(AbsListView view, int scrollState) {
// Request the next page once scrolling has stopped
if (scrollState == AbsListView.OnScrollListener.SCROLL_STATE_IDLE) {
if (isBottom && isTruncated && !TextUtils.isEmpty(marker)) {
getObject();
}
}
}
@Override
public void onScroll(AbsListView view, int firstVisibleItem, int visibleItemCount, int totalItemCount) {
// Check whether the view has scrolled to the bottom
isBottom = firstVisibleItem + visibleItemCount == totalItemCount;
}
@Override
public void onFolderClick(String prefix) {
Intent intent = new Intent(this, ObjectActivity.class);
intent.putExtra(ObjectActivity.ACTIVITY_EXTRA_BUCKET_NAME, bucketName);
intent.putExtra(ObjectActivity.ACTIVITY_EXTRA_REGION, bucketRegion);
intent.putExtra(ObjectActivity.ACTIVITY_EXTRA_FOLDER_NAME, prefix);
startActivity(intent);
}
@Override
public void onDownload(final ObjectEntity object) {
Intent intent = new Intent(this, DownloadActivity.class);
intent.putExtra(ObjectActivity.ACTIVITY_EXTRA_BUCKET_NAME, bucketName);
intent.putExtra(ObjectActivity.ACTIVITY_EXTRA_REGION, bucketRegion);
intent.putExtra(ObjectActivity.ACTIVITY_EXTRA_DOWNLOAD_KEY, object.getContents().key);
startActivity(intent);
}
@Override
public void onDelete(final ObjectEntity object) {
String bucket = this.bucketName;
DeleteObjectRequest deleteObjectRequest = new DeleteObjectRequest(bucket, object.getContents().key);
setLoading(true);
cosXmlService.deleteObjectAsync(deleteObjectRequest, new CosXmlResultListener() {
@Override
public void onSuccess(CosXmlRequest cosXmlRequest, CosXmlResult result) {
uiAction(new Runnable() {
@Override
public void run() {
setLoading(false);
adapter.delete(object);
}
});
}
@Override
public void onFail(CosXmlRequest cosXmlRequest, CosXmlClientException clientException, CosXmlServiceException serviceException) {
setLoading(false);
toastMessage("删除对象失败");
// Only one of the two exceptions is non-null in onFail; guard against NPE
if (clientException != null) {
clientException.printStackTrace();
}
if (serviceException != null) {
serviceException.printStackTrace();
}
}
});
}
}
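The marker/isTruncated paging protocol that getObject() drives through callbacks can be shown in isolation. Below is a minimal, SDK-free sketch of the same loop; Page, PageSource, and fakeSource are hypothetical stand-ins for the COS GetBucketRequest/GetBucketResult types, not part of the SDK.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal, SDK-free sketch of the marker/isTruncated paging protocol used by
// getObject() above. Page, PageSource, and fakeSource are hypothetical
// stand-ins for GetBucketRequest/GetBucketResult, not part of the COS SDK.
public class PagingDemo {
    static class Page {
        final List<String> keys;
        final String nextMarker;   // pass this as marker on the next request
        final boolean isTruncated; // false once all objects have been listed
        Page(List<String> keys, String nextMarker, boolean isTruncated) {
            this.keys = keys;
            this.nextMarker = nextMarker;
            this.isTruncated = isTruncated;
        }
    }

    interface PageSource {
        Page list(String marker, int maxKeys);
    }

    // Request pages repeatedly, feeding each response's nextMarker into the
    // next request, until the source reports isTruncated == false.
    static List<String> listAll(PageSource source, int maxKeys) {
        List<String> all = new ArrayList<>();
        String marker = null; // first call: no marker, list from the beginning
        while (true) {
            Page page = source.list(marker, maxKeys);
            all.addAll(page.keys);
            if (!page.isTruncated) {
                return all;
            }
            marker = page.nextMarker;
        }
    }

    // In-memory stand-in for the object store: the marker is simply the next
    // start index encoded as a string.
    static PageSource fakeSource(List<String> keys) {
        return (marker, maxKeys) -> {
            int start = marker == null ? 0 : Integer.parseInt(marker);
            int end = Math.min(start + maxKeys, keys.size());
            boolean truncated = end < keys.size();
            return new Page(new ArrayList<>(keys.subList(start, end)),
                            truncated ? String.valueOf(end) : null, truncated);
        };
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList("a", "b", "c", "d", "e");
        System.out.println(listAll(fakeSource(keys), 2)); // [a, b, c, d, e]
    }
}
```

The activity implements the same loop incrementally: each scroll-to-bottom event plays one iteration, with the class fields marker and isTruncated carrying the state between requests.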
Homicide investigators found gun accessories, including a full magazine, as well as syringes in the apartment of the University City man who shot seven people before San Diego police killed him, court documents show.
A search warrant affidavit also reveals that a sister of gunman Peter Selis rushed to the Judicial Drive apartment complex on April 30, fearing that he might be involved.
Eve Selis told officers that her brother lived there and was “extremely distraught and depressed” over a recent break-up with his girlfriend, the document says.
The affidavit by San Diego police homicide Detective David Spitzer set out facts of the case to support the need for a warrant to search Peter Selis’ apartment. A Vista judge authorized the warrant about 1 a.m. on May 1.
Among the items seized that day from a bedroom were a full gun magazine, found in the pocket of a chair, and a gun-cleaning kit in the closet. A gun lock was on a desk.
Lying on the bed were a box for the lock, a bag containing a holster and gloves, and a cardboard box containing syringes.
A computer tablet lay on the floor. A black bag of miscellaneous pills was in the bathroom. The affidavit did not describe the syringes or their possible purpose.
Selis, a 49-year-old auto mechanic, was deeply in debt after two bankruptcies and had been threatened with jail for an unpaid debt. Late on Sunday afternoon, he went to the pool at the La Jolla Crossroads Apartments with a duffel bag and sat in a lounge chair.
The area was crowded, and many of the people were attending a birthday party.
Selis drank from a beer bottle, then pulled a pistol from the bag, the affidavit says. He started shooting shortly after 6 p.m., and after firing two rounds, he called his ex-girlfriend and told her what he was doing and that police were coming.
One of his victims, Monique Clark, 35, a mother of three, died. The search warrant document said she was shot once in the chest.
Victims were sprawled everywhere, bleeding. Others in the crowd scattered, screaming, and called 911 at 6:06 p.m. A police helicopter got there seven minutes later.
The helicopter crew told the first officers to head to the pool immediately. They did, and when the shooter saw them “a gun battle began,” the affidavit says.
Sgt. Michael McEwen and officers Jonathan Ferraro and Luke Hammond fired at Selis and he fired back. He was wounded several times and died there. No officers were wounded.
As news reports of the shooting were broadcast, Eve Selis rushed to the complex and described her brother for police, the court record says. Her description matched the shooter’s. She also gave police his apartment number.
“Fearing Selis may have potentially harmed any roommates or his ex-girlfriend, the decision was made to enter” his apartment, the affidavit says. Seven officers went in, but found no victims and left right away. Officers guarded the unit overnight until the warrant could be obtained.
This 19-acre tract would make the perfect home site or a wonderful mini farm. It is centrally located just 35 minutes north of downtown Greenville and one hour south of Asheville, NC. This tucked-away property holds mature hardwoods, flat open pastures, and a free-flowing creek.
Everyone, meet Ötzi the Iceman. Well, this is a reconstruction of Ötzi the Iceman — you'll find photos of his mummified corpse after the jump. And while Ötzi's remains might look rather run down to you and me, the truth is that any one of us would be lucky to look as good as he does after 5,300 years. That's how long scientists estimate Ötzi went undiscovered in the mountains bordering Italy and Austria before he was discovered there in 1991.
Now, over twenty years later, Europe's oldest natural human mummy has had its genome sequenced, revealing previously unknown details about the Iceman's health, life and death. They've even found clues that point to the location of his closest living relatives.
The findings are the product of an international collaboration led by researcher Albert Zink, head of the institute for Mummies and the Iceman in Bolzano, Italy. In the latest issue of Nature Communications, Zink and his colleagues use Ötzi's genomic data to conclude that the Iceman probably had type-O blood, brown eyes, and suffered from lactose intolerance.
And while researchers think Ötzi probably died from an arrow to the back, their analyses show that the Iceman had his fair share of everyday health problems, as well. Previous investigations had revealed, for example, that the mummy had hardened arteries — an unusual physiological feature for a man as young and active as researchers believe Ötzi was when he died. But the latest data suggest Ötzi's cardiovascular woes may have been the result of a genetic predisposition to heart disease. The researchers also found traces of DNA from the bacteria Borrelia burgdorferi, meaning Ötzi may well be the earliest human case of Lyme disease on record. Lucky, lucky Ötzi.
But the coolest results from this study are the ones linking the Iceman to his modern day descendants. Surprisingly, when Zink and his colleagues compared Ötzi's genome with that of modern day European populations, they found he was most closely related not to people from Northern Italy (where he was discovered), but "present-day inhabitants of the Tyrrhenian Sea," specifically men from the islands of Sardinia and Corsica.
These islands (labeled here in red) are separated from Ötzi's final resting place (marked here with a red circle) by over three hundred miles and a sizable body of water. That's pretty incredible, if you think about it. On one hand, it suggests that Ötzi's descendants may have once inhabited a much larger portion of mainland Europe, only to die out — or become part of a much more diverse genetic pool — save for the inhabitants of these two, isolated islands. It also points to the evolutionarily isolating effects that islands can have on a population's genetic makeup. The fact that these islanders have "remained moored to their genetic past, enough so that a 5,300 year old individual clearly can exhibit affinities with them," as Gene Expression's Razib Khan puts it, is pretty extraordinary.
The researchers' findings are published in Nature Communications
Additional Resources
The European Academy of Bozen/Bolzano has an awesome interactive app that lets you explore the mummified body of Ötzi the Iceman in stunning detail. You can check it out at ICEMAN Photoscan
Those interested in the hardcore genetics linking Ötzi to modern-day Sardinians and Corsicans can read more over on Gene Expression
Top image via South Tyrol Museum of Archaeology/EURAC/Marco Samadelli-Gregor Staschitz; Ötzi's remains via EURAC; map via Wikimedia Commons
Psalms by category, from Verbum Bible Software. The Psalms in blue are the laments.
There are many ways through the dark valleys of depression, and prayer must be one of them. The problem is that the depressive often cannot even stir to perform regular prayer. Simple tasks become a burden. Prayer routines fall by the wayside. Commitments are left undone.
And with each little failure, the walls close in tighter and the sufferer sinks deeper.
Perhaps one way through the dark valley is to follow the trail laid down by our ancestors with the inspiration of the Holy Spirit. The Psalms have within them the entire range of human emotion and experience. If the regular routine of prayer no longer works for us, if the liturgy of the hours or the rosary or whatever discipline we try to follow has become a drudgery and burden, then we have to find a new way to speak to God and, more importantly, let God speak to us.
Fifty-nine psalms—more than one-third of them—have elements of Lament. It would appear, on the surface, to be utter folly for the person in the midst of a soul-crushing sorrow to enter deeper into a world of despair. That appearance is deceiving, however.
The laments are more than that. They are a dialog between man and God. They are a howl of sorrow, yes, but they are also a plea, a thanksgiving, and a sigh of hope and trust.
1 How long, O Lord? Wilt thou forget me for ever?
You can see in the repetition of “How long” the urgency and pain of the psalmist, while parallelism builds gradually sharper accusations against the Lord. A common feature of Hebrew poetry is to say the same thing twice in different ways, which has the effect of both echoing and amplifying points.
At first, the speaker asks how long the Lord will “forget” him, suggesting that his distress is not the direct action of God, but mere neglect.
In the next line, however, God has chosen to actively “hide thy face.” God has not forgotten the speaker: God has deliberately turned away from him. These mounting accusations are a more circumspect way to accuse Yahweh for his present plight.
The pain in our souls and the sorrow in our hearts is as good a depiction of depression as you’re likely to find. This is De profundis clamavi: a cry from the depths.
Finally, we have the characterization of the enemy who is exalted over us. We can read “enemy” in many ways. It could be the world, the flesh, and the devil (mundus, caro, et diabolus): the inverted Trinity that leads us into darkness. Any one of these things may be at the root of our depression: stresses of life present and past (the world), challenges from our body from illness or temptation (the flesh), or the evil one himself, who must not be discounted as the root of some cases of severe depression.
But I think it may be more useful for those who suffer from true clinical depression to see the enemy as the depression itself. It’s not unusual to give personality and even character to depression. That’s why it’s called the Black Dog. This runs the risk of placing the depression outside ourselves rather than something that springs from within, but it can also be useful for understanding and confronting the forces pulling us down. In prayer, confronting our depression as an Other, an enemy, is just one way to reach out to God and articulate our pain. It says that we are not our illness, and it will not define us, and it allows for us to plead for this enemy to be driven away.
The next section is a prayer for help. We are asking for deliverance from the enemy, in this case, depression.
The Hebrew word translated as “lighten my eyes” means to shine, dawn, or give light. It is a powerful way of asking for darkness to be driven away, but it also suggests enlightenment and spiritual wisdom. We are asking God not merely to drive away the pain, but to give wisdom to our souls so we may live better, and to bring the Light Himself to dwell within us.
And to make the stakes clear, God is warned that our very lives are at risk. Major depressive disorders are, in fact, a serious, life-threatening condition. Too many people today fail to understand this.
Again, we have the personification our of our pain as our “enemy” and our “foes.” We are pleading with God not to let them triumph. Depression is the enemy within. Like Satan, it lies. It whispers things that sound like truth and try to strip us of our God-given dignity and even our being.
Our souls are greater than these lies. Our souls are light, one with the Divine Light. The clouds of depression obscure that light, but the important thing to remember is that the light remains there. When night comes, the sun does not die. We merely lose sight of it for a little while. Our light and the light of God continue to shine even when the cataract of depression films over the soul.
The haunting failures, doubts, anxieties, and sorrows are rarely as bad as we feel them to be, but the nature of depression is that they grow outsized and heavy. They feel like a suffocating miasma, like a crushing weight that can never lift.
my heart shall rejoice in thy salvation.
This is key. Trust and rejoice! Whatever bears us down, the Lord bears us up. The Lord’s love is steadfast, and He will save us. We cannot despair.
Photo: SDG&E Part of a 30-megawatt lithium-ion battery energy storage system in Escondido, Calif.
It’s the stuff of an action-hero movie: An accident threatens an unsuspecting metropolis. Electricity supplies face disruptions and millions are at risk of being without electricity as blackouts roll across the city. Faced with the prospect of escalating chaos, officials gather on the steps of government buildings and implore, “Who can help us?”
But let’s leave that cliffhanger for a moment, knowing that reality was not quite so—shall we say—Hollywood.
Even so, this movie-quality crisis is based in fact and has energy storage as its action hero. The increasingly mainstream zero-emission technology helped ease a real-life crisis that had all the makings of a major catastrophe.
Official records say that on 23 October 2015, a significant natural gas leak in well SS25 was detected at the Aliso Canyon natural gas storage facility in the San Fernando Valley north of Los Angeles. Repeated attempts by Southern California Gas Co., the owner, to “kill”—plug up—the well and stop the leak failed.
SoCalGas relies on Aliso Canyon to provide gas for core customers—homes and small businesses—as well as non-core customers, including hospitals, local governments, oil refineries, and 17 natural gas-fired power plants with a combined generating capacity of nearly 10,000 megawatts.
As part of a multi-part response to the crisis, the California Public Utilities Commission in May 2016 fast-tracked approval of 104.5 MW of battery-based energy storage systems within the service areas of Southern California Edison (SCE) and San Diego Gas & Electric (SDG&E).
Those utilities, along with the Los Angeles Department of Water and Power—the nation’s largest municipal utility—provide gas and electric service to most of southern California. By the end of February 2017, seven of eight fast-tracked Aliso Canyon–related energy storage projects were online, helping the region’s energy grid regain stability.
The significance of the Aliso Canyon energy storage deployment is “hard to overstate,” says Alex Morris, director of policy and regulatory affairs for the California Energy Storage Alliance, an advocacy group.
For one thing, the stakes were high. After all, rolling blackouts seemed likely across much of the region. For another, the deployment was rapid: roughly six months elapsed between the time the procurement order was issued and when the bulk of the storage resources came online.
Part of what made that rapid deployment possible was California’s growing familiarity with energy storage technology and procurement. That familiarity stems in part from legislation passed in 2013. Known as AB 2514, the state law set an aggressive goal for California’s three regulated utilities, Pacific Gas & Electric, SCE, and SDG&E: procure energy storage capable of delivering a total of 1,300 MW by 2020 and have it online by 2024.
Much of the learning curve had already been climbed by the time the Aliso Canyon crisis occurred. As a result, energy storage systems could be procured and designed almost simultaneously, with delivery and installation largely a function of the supply chain’s ability to respond.
California’s energy storage goal helped the state move ahead of states in the mid-Atlantic and Midwest to become the nation’s hotbed for energy storage deployment.
Encouraged by the technology’s declining cost and rapid deployment, California regulators in late April ordered the three big utilities to secure an additional 166 MW of storage capacity each, for a total of around 500 MW. Precisely how that capacity is deployed remains to be seen, but the technology may be called on to support a range of services, including frequency regulation and local substation support.
Indeed, energy storage is something of a Swiss Army knife, capable of handling multiple duties depending on how and where it is deployed and how it is compensated.
For example, in a 2016 energy storage cost analysis the financial advisory firm Lazard identifies grid-scale applications that include peaking generator unit replacement, frequency regulation, and distribution system support. So-called “behind the meter” uses include microgrids, commercial and industrial applications, and residential deployment.
Energy storage can provide instantaneous bursts of electricity for grid support or extended periods of power to meet peak load demand. It even can replace some forms of fossil-fired power generation that traditionally have produced electricity at times of high demand.
Among the Aliso Canyon energy storage facilities now in place is the 20-MW/80-MWh Mira Loma Battery Storage Facility, deployed by Tesla in less than three months. The installation includes two 10 MW systems that contain almost 200 Tesla Powerpacks and 24 inverters. The equipment was manufactured at the recently opened Tesla Gigafactory in Nevada.
Meanwhile, SDG&E and AES Energy Storage deployed two Advancion energy storage units, totaling 37.5 MW. At one of the sites—an SDG&E substation in Escondido, Calif.—a 30 MW, four-hour-duration lithium-ion battery array represents what AES says is one of the world’s largest such storage deployments.
AES Energy Storage deployed one of the world’s largest lithium-ion battery systems
The arrays will provide 37.5 MW of power and serve as a 75 MW resource for grid support. Combined, the arrays can provide enough capacity to power roughly 25,000 homes for four hours. Components include batteries by Samsung SDI and power conversion systems by Parker Hannifin.
The four-hour duration means that the system can provide energy during times of peak demand on the system. Gas-fired turbines often fill that role. But the battery storage system means that energy produced by wind and solar facilities can be stored and then released into the grid in response to demand signals issued by the California Independent System Operator. That helps to advance California’s environmental goals, because battery storage is a zero-emission source.
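The power and energy figures quoted above follow simple arithmetic: a battery's discharge duration at rated power is its energy capacity divided by that power. A quick illustrative check (the numbers come from the article; the code itself is hypothetical):

```java
// Relating battery power (MW), energy capacity (MWh), and discharge duration
// (hours) for the systems described above. Illustrative sketch only.
public class StorageDuration {
    // Duration a battery can sustain its rated power: energy / power.
    static double durationHours(double energyMwh, double powerMw) {
        return energyMwh / powerMw;
    }

    public static void main(String[] args) {
        // Tesla Mira Loma: 20 MW / 80 MWh -> 4-hour duration
        System.out.println(durationHours(80, 20)); // 4.0
        // AES Escondido: 30 MW at a four-hour duration implies 120 MWh stored
        System.out.println(30 * 4.0); // 120.0
    }
}
```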
The 2014 SCE solicitation was for all sources. That means that energy storage competed directly against natural gas on price, among other factors, says Kate McGinnis, Market Director for Western U.S. at AES Energy Storage. The storage system was priced “competitively to a gas peaker,” she says.
The Lazard analysis from late 2016 suggests that lithium-ion batteries used in peaking applications can have a capital cost that ranges from around $420/kWh to almost $950/kWh. On an unsubsidized basis, Lazard pegs the levelized cost of storage using lithium-ion batteries as between $285/MWh and $813/MWh.
California environmental advocates have been increasingly frustrated as renewable energy resources have been curtailed during periods of excess generation. “There is a sense of ‘what a waste,’” says Buck Endemann, a San Francisco-based partner in the energy and environmental law practice of K&L Gates. Storage offers an opportunity for excess renewable energy to be captured and used as needed rather than as generated.
As California’s utilities continue to work toward the goals of AB 2514 and the more recent AB 2868, McGinnis says that AES has additional contracts for energy storage projects: a further 40 MW of storage capacity with SDG&E and 100 MW for SCE.
And while no one hopes that a crisis similar to Aliso Canyon happens again, energy storage advocates seem ready to take on a leading role should an action-hero sequel come along.
Preoperative and Postoperative Bone Mineral Density Change and Risk Factor Analysis in Patients with a GH-Secreting Pituitary Adenoma

Purpose
This study analysed changes in bone mineral density (BMD) at different sites in patients with acromegaly and postoperative BMD changes and explored risk factors associated with BMD.

Methods
Clinical data of 39 patients with growth hormone- (GH-) secreting pituitary adenomas and 29 patients with nonfunctioning pituitary adenomas who were newly diagnosed in neurosurgery from January 2016 to December 2018 were retrospectively analysed, including measurements of preoperative and postoperative BMD, serum GH after glucose suppression, random GH and IGF-1, and other anterior pituitary hormones.

Results
The average patient age and disease duration were 43.74 (33.41-54.07) years and 72.15 (22.82-121.48) months, respectively. Compared with patients with nonfunctioning adenomas, patients with GH-secreting pituitary adenomas had significantly higher BMDs at L1, L2, femoral neck, Ward triangle, trochanter, femoral shaft, and total hip sites (p < 0.05). The BMD Z score at L1 and femoral neck sites significantly increased (p < 0.05). Thirteen patients underwent re-examination of BMD 1 year postsurgery, and the BMD Z score was reduced to normal levels at L1, L2, L3, L4, L1-L4, and L2-L4 compared with preoperative levels (p < 0.05). Postoperative BMD Z scores in the femoral neck and total hip were significantly increased (p < 0.05). Disease duration was negatively correlated with the lumbar-spine BMD Z score. IGF-1 burden was negatively correlated with the BMD Z score at L1 and L1-L4. Multiple regression analysis showed that IGF-1 burden was a risk factor for a BMD Z score decrease at L1 and L1-L4.

Conclusion
BMD in patients with GH-secreting pituitary adenomas (compared with nonfunctional adenomas) increased at L1, L2, femoral neck, Ward triangle, trochanter, femoral shaft, and total hip sites.
Lumbar-spine BMD Z score recovered to normal levels postsurgically when GH and IGF-1 levels were controlled. BMD Z score was negatively correlated with disease duration and IGF-1 burden in patients with GH-secreting pituitary adenomas, and IGF-1 burden was an independent risk factor for a reduced lumbar-spine BMD Z score.

Introduction
Acromegaly is an endocrine and metabolic disorder syndrome caused by the oversecretion of growth hormone (GH) and insulin-like growth factor-1 (IGF-1), which results in excessive growth of bones, soft tissues, and internal organs. The structure and function of bones in patients with acromegaly are affected. Among patients with acromegaly, 58% have osteoporosis of the lumbar spine, 74% have osteoporosis of the femoral neck, and the incidence of vertebral fracture is approximately 39%. More than 95% of acromegaly cases are caused by a GH-secreting pituitary adenoma. GH and IGF-1 have anabolic effects on bone metabolism that can lead to increased bone formation and resorption. However, the changes in bone mineral density (BMD) in patients with acromegaly have been controversial in the past few decades. Most studies have shown that patients have increased BMD, and some studies have revealed no significant changes; however, a few studies have reported decreased bone mineral density. We suppose that, after surgery, the BMD should change, as the elevated GH and IGF-1 levels are controlled. To date, there is a lack of conclusive evidence that the BMD in patients with acromegaly decreases after surgery. Therefore, we aimed to investigate the preoperative BMD and to determine the postoperative BMD changes in patients with acromegaly in the Chinese population. In addition, GH mediates bone metabolism through IGF-1, which is produced in the liver. The IGF-1 level is considered positively correlated with BMD in patients with acromegaly. The effect of IGF-1 on cortical bone seems to be more dominant than that on trabecular bone in mouse models.
Therefore, we aimed to examine the correlation between IGF-1 levels and BMD in humans and to explore other factors that affect BMD in patients with acromegaly. The purpose of this study was to investigate the preoperative BMD, to analyse the effect of acromegaly cure on BMD through successful surgery, and to explore risk factors associated with BMD in patients with acromegaly.

Study Population
Patients first diagnosed with a GH-secreting pituitary adenoma or nonfunctioning pituitary adenoma at Peking Union Medical College Hospital (PUMCH) from January 2016 to December 2018 were studied. The inclusion criteria were as follows: endocrine diagnostic criteria: elevated serum IGF-1 level and lack of suppression of GH to <1 μg/L following documented hyperglycaemia during an oral glucose load; pituitary enhanced MRI confirming a space-occupying lesion in the sellar region; typical clinical manifestations of acromegaly; pathologically confirmed GH-secreting adenoma; performance of a BMD examination through DXA; and no history of medical treatment or radiation. Patients with a nonfunctioning pituitary adenoma were selected as the control group. The inclusion criteria of the control group were as follows: endocrine diagnostic criteria: no hormone oversecretion for each specific pituitary tumour type; clinical diagnosis of a nonfunctioning adenoma; and performance of a BMD examination through DXA before surgery. The exclusion criteria were diagnosis of neoplastic disease; diagnosis of osteoporosis-related disease, such as PCOS, hyperthyroidism, and hyperparathyroidism; use of drugs that affect BMD or bone metabolism; or chronic liver or kidney diseases. Disease control was defined as random GH <1 μg/L or nadir GH after OGTT <0.4 μg/L and age- and sex-normalized IGF-1. Informed consent was obtained from each patient prior to enrolment. This study was approved by the Ethics Committee of PUMCH at the Chinese Academy of Medical Sciences and Peking Union Medical College.
Biochemical Measurement. The methods for hormone evaluation are described in our previously published article. The T-score is defined as the difference between the measured BMD and the peak bone mass of young people of the same sex, expressed in standard deviations. The Z score is defined as the difference between the measured BMD and the average BMD of people of the same age, sex, and race, expressed in standard deviations. The BMD examination was performed after hospital admission but before surgery, or at least 3 months after surgery during clinical re-examination. Statistical Analysis. All data are expressed as the mean ± SD. Student's t test or the Mann-Whitney U test was used to compare the GH-secreting pituitary adenoma group and the nonfunctioning group, depending on whether the data were normally distributed. A paired t test was used to compare the preoperative and postoperative BMD values. Pearson's correlation coefficient or Spearman's rank-order correlation was used to determine the correlations between BMD and other parameters. Stepwise multiple linear regression was conducted to identify potential predictive factors for BMD at each site. Statistical significance was accepted at a p value <0.05. Patient Characteristics. Thirty-nine patients with GH-secreting pituitary adenomas and 29 patients with nonfunctioning pituitary adenomas were included. Thirteen patients with GH-secreting pituitary adenomas had controlled disease and underwent re-examination of BMD 1 year after surgery. Characteristics of the study population are shown in Table 1. The age and body mass index (BMI) of the two groups were matched. The average disease duration of patients with GH-secreting pituitary adenomas was 72.15 (22.82-121.48) months, and the average adenoma size was 15.97 (9.1-22.84) mm. GH and IGF-1 levels in patients with GH-secreting pituitary adenomas were significantly higher than those in the control group (both p values <0.001). In addition, FT3 (free triiodothyronine), FT4 (free thyroxine), and phosphorus levels were significantly increased (p < 0.05 for each).
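The T- and Z-score definitions above amount to a simple standardization of the measured BMD against a reference mean and SD. A minimal Python sketch illustrates the calculation; the reference values below are invented placeholders for illustration, not real normative DXA data:

```python
def bmd_score(measured_bmd, ref_mean, ref_sd):
    """Standardized BMD score: (measured - reference mean) / reference SD."""
    return (measured_bmd - ref_mean) / ref_sd

# T-score: reference population = peak bone mass of young adults of the same sex
t_score = bmd_score(measured_bmd=0.95, ref_mean=1.10, ref_sd=0.12)

# Z-score: reference population = age-, sex-, and race-matched mean
z_score = bmd_score(measured_bmd=0.95, ref_mean=1.00, ref_sd=0.12)

print(round(t_score, 2))  # -1.25
print(round(z_score, 2))  # -0.42
```

Because the acromegaly patients and controls span a range of ages, the Z score (age-matched) rather than the T-score is the more informative comparison in this study.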
BMD in the GH-Secreting Pituitary Adenoma Group and Nonfunctioning Adenoma Group. The BMD comparison between the GH-secreting pituitary adenoma group and controls is shown in Table 2. In the GH-secreting pituitary adenoma group, BMD was significantly elevated at the L1, L2, femoral neck, Ward triangle, trochanter, femoral shaft, and total hip sites compared with the corresponding values in patients with nonfunctioning adenomas (p = 0.006, 0.032, 0.001, 0.036, 0.005, 0.037, and 0.008, respectively). There were no significant differences at other sites. In terms of the Z score, the BMD at L1 and the femoral neck was significantly elevated in patients with a GH-secreting pituitary adenoma compared with controls (p = 0.048 and 0.022, respectively). There were no significant differences at other sites. Preoperative and Postoperative BMD in Patients with GH-Secreting Pituitary Adenomas. Thirteen patients with GH-secreting pituitary adenomas underwent re-examination of BMD 1 year after surgery, and the comparison of the preoperative and postoperative BMD is shown in Table 3. Compared with the preoperative values, the Z score of the postoperative BMD decreased significantly at the lumbar sites L1, L2, and L3 (Table 3). Analysis of the Potential Determinants of BMD. Pearson's correlation analysis was performed, and the results are shown in Table 4. A negative correlation was found between the disease duration and the Z score of the lumbar spine, including the L1, L2, L3, and L4 sites (all p < 0.05). In addition, the disease duration was also negatively correlated with the Z score at the trochanter and total hip (p = 0.048 and 0.044, respectively). IGF-1 burden was negatively correlated with the Z score at L1 and L1-L4 (p = 0.042 and 0.048, respectively). The PRL level was negatively correlated with the Z score of the femoral neck and trochanter (all p < 0.05). GH nadir, GH burden, T3, T4, TSH, PTH, ACTH, TC, TG, HDL, and LDL were also included in the correlation analysis but are not listed in Table 4.
There were no significant correlations found between these parameters and the BMD Z score at any site (all p > 0.05). Discussion Acromegalic osteopathy is a condition that seriously affects patients' quality of life, and there has been no consistent conclusion regarding the effects of excess GH and IGF-1 on bone structure and bone metabolism, especially in terms of BMD changes. Our study revealed elevated BMD in patients with acromegaly, and the elevated lumbar-spine BMD recovered to a normal level after surgery. IGF-1 burden was found to be an independent risk factor for BMD reduction at L1. We adopted more sites for BMD measurement to obtain more comprehensive information than that yielded by previous studies. Patients with nonfunctioning pituitary adenomas were used as the control group, which could effectively reduce the interference of other potential factors. The BMD in patients with active acromegaly was significantly elevated at the L1, L2, femoral neck, Ward triangle, trochanter, femoral shaft, and total hip sites compared with that in patients with nonfunctioning pituitary adenomas. Conflicting results concerning BMD changes in patients with acromegaly have persisted over the past several decades. Previous studies by Zgliczynski et al. and Kaji et al. showed that BMD in the middle tibia, lumbar spine, femoral neck, and trochanter was significantly elevated in patients with acromegaly. Tuzcu et al. reported that the BMD values in the lumbar spine and femoral neck of acromegaly patients were not significantly different from those in normal controls. Nevertheless, a few researchers have reported decreased BMD in patients with acromegaly. On the one hand, patients with acromegaly are more likely to exhibit osteoarthritis of the spine and hip, resulting in structural changes. These changes may affect the measurement of BMD in the spine and hip.
On the other hand, the increase in bone area in patients with acromegaly may also lead to a decreased BMD, especially when the DXA method is used. The results of this study are consistent with those of most previous studies. We found that BMD in patients with acromegaly was elevated, and this change is likely the result of the long-term chronic effects of high levels of GH and IGF-1. In previous investigations, the lumbar spine was regarded as a single site, and the average BMD was considered. In our study, the BMD of the four lumbar vertebrae from L1 to L4 was evaluated in detail. In addition, the BMD at L1 and L2 in patients with acromegaly was significantly elevated. By comparing the Z scores, we were further convinced that the BMD at L1 was significantly elevated, while the BMD values at L3 and L4 were unchanged. Our study also found that 1 year after surgery, the BMD Z scores of the lumbar vertebrae in patients with acromegaly were significantly lower than the preoperative values, with no significant difference compared with the control group. It can therefore be considered that the lumbar-spine BMD recovers to a normal level after surgery. However, the BMD Z score at the femoral neck continued to increase. Tamada et al. showed that there was no significant difference in BMD between the timepoint of 3 years after surgery and the preoperative period in patients with acromegaly. Mazziotti et al. showed that there was no significant difference in lumbar-spine BMD between the timepoint of 3 years after surgery and the preoperative period, while the femoral-neck BMD was significantly reduced. In both of the above studies, hormone replacement therapy was used in some patients to treat hypopituitarism before or after TSS; because GH replacement therapy may promote bone formation and increase BMD, it could have masked a decreasing trend in BMD.
In our study, no patients underwent hormone replacement therapy; thus, we can exclude the effect of hormone replacement therapy on BMD, which allows the postoperative BMD changes to be reflected more accurately. Since we do not have data from longer follow-up, the BMD changes over the subsequent few years are unclear. Further study will continue to focus on the postoperative BMD changes. Therefore, it can be concluded that the elevated lumbar-spine BMD in patients with acromegaly recovered to normal levels after surgery once the GH and IGF-1 levels were controlled. In addition, the results of our study differ from those of previous studies: the postoperative BMD Z score decreased at the lumbar vertebrae but was elevated at the femoral neck. Differences in structure and mechanics can account for this discrepancy. In terms of structure, the lumbar spine is rich in trabecular bone, while cortical bone predominates at the femoral neck. BMD at trabecular-bone sites has been reported to decrease in patients with acromegaly. Other studies have shown that excessive GH and IGF-1 have a deleterious effect on trabecular bone, while GH-induced periosteal ossification increases the BMD of cortical bone. Therefore, we believe that the different effects of excessive GH and IGF-1 on cortical and trabecular bone result in the differing BMD trends between the lumbar spine and the femoral neck. We found that the disease duration and IGF-1 burden were negatively correlated with the BMD Z score. The multiple linear regression also showed that IGF-1 burden is an independent risk factor for a reduced preoperative L1 BMD Z score. Previous studies have shown a negative correlation between BMD and disease duration, and that lumbar-spine BMD is positively correlated with IGF-1 levels. Some other studies have shown that lumbar-spine BMD is negatively correlated with the duration of hypogonadism, positively correlated with the duration of acromegaly, but not correlated with the IGF-1 level.
Femoral-neck BMD has been reported to be positively correlated with the IGF-1 level and not correlated with the duration of hypogonadism or the duration of acromegaly. By analysing previous studies, we believe that the increase in BMD may be due to long-term changes in bone structure caused by excess GH and IGF-1. Our study showed that IGF-1 burden (IGF-1 level × disease duration) was negatively correlated with the BMD Z score at the lumbar spine. Therefore, we concluded that IGF-1 burden is a useful indicator of BMD. However, we only reached this conclusion for the BMD Z score at L1, and this finding awaits verification in larger samples and at additional measurement sites. The serum phosphorus level was significantly increased in patients with acromegaly in our study. Hyperphosphatemia is frequently observed in patients with acromegaly. The elevated phosphorus level was slightly above the upper normal range. The correlation analysis showed that the phosphorus level was not significantly correlated with preoperative BMD in patients with GH-secreting pituitary adenoma. Therefore, the effect of a higher phosphorus level on the elevated BMD in patients with GH-secreting pituitary adenoma may not be dominant in our study. In addition, the FT3 and FT4 levels were significantly higher in patients with acromegaly than in patients with nonfunctioning pituitary adenomas. Excessive GH has been reported to affect thyroid function, resulting in changes in thyroid hormone levels. Pearson's correlation analysis showed that thyroid hormone levels were not significantly correlated with BMD in patients with GH-secreting pituitary adenoma. The elevated FT3 and FT4 levels were still within the upper normal range, and the T3 and T4 levels were also within the normal range. A previous study in postmenopausal women showed that elevated thyroid hormone levels within the upper normal range were associated with decreased BMD.
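The IGF-1 burden index is a simple product of hormone level and exposure time. A minimal sketch of the arithmetic follows; the patient values and the unit convention (ng/mL × months) are illustrative assumptions, not data from the study:

```python
def igf1_burden(igf1_level_ng_ml, disease_duration_months):
    """IGF-1 burden = IGF-1 level x disease duration (definition from the text)."""
    return igf1_level_ng_ml * disease_duration_months

# Hypothetical patient: IGF-1 of 650 ng/mL, disease duration of 72 months
burden = igf1_burden(650.0, 72)
print(burden)  # 46800.0
```

By weighting the IGF-1 level by exposure time, the index captures cumulative hormone exposure rather than a single snapshot, which is why it may track long-term skeletal effects better than the IGF-1 level alone.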
Therefore, we believe that the variation in thyroid hormone levels contributed little to the increase in BMD in patients with GH-secreting pituitary adenoma. Our investigation has its limitations. First, this is a retrospective study, and randomness was not fully satisfied; therefore, the conclusions drawn from the study population do not fully represent the overall situation of patients with GH-secreting pituitary adenomas. Second, the DXA method for BMD measurement has limitations. This method cannot clearly distinguish cortical bone from trabecular bone. As mentioned above, excessive GH and IGF-1 affect cortical bone and trabecular bone differently; therefore, the results do not fully reflect the detailed changes in BMD. In addition, the DXA method cannot detect changes in bone microarchitecture and is easily confounded by factors such as periosteal ossification and osteoarthritis. In contrast, qCT can measure BMD in three dimensions and can also distinguish between cortical and trabecular bone. However, considering radiation exposure and medical expenses, the DXA method is still a good tool for observing the general trend of BMD change. Finally, the size of the study population was very limited. Owing to medical expenses, traffic problems, and concerns about radiation exposure, many patients did not have regular follow-up and BMD re-examinations. Despite the above limitations, our study can help us better understand the pathophysiological influence of excessive GH and IGF-1 on bone metabolism. As the disease duration and IGF-1 burden were negatively correlated with the preoperative BMD in patients with GH-secreting pituitary adenoma, clinical intervention for BMD should be considered in acromegalic patients who have higher IGF-1 levels or a longer disease duration. It is hypothesized that BMD may be a biomarker of disease activity in patients with acromegalic osteopathy, but further studies are needed to confirm this hypothesis.
Furthermore, BMD may be considered a routine examination in acromegaly patients. Conclusions In conclusion, the preoperative BMD in patients with acromegaly is significantly elevated at the L1, L2, femoral neck, Ward triangle, trochanter, femoral shaft, and total hip sites. The lumbar-spine BMD recovered to normal levels after successful surgery once GH and IGF-1 levels were controlled. The preoperative lumbar-spine BMD Z score was negatively correlated with disease duration and IGF-1 burden, and IGF-1 burden was an independent risk factor for a decreased preoperative BMD Z score at L1 in patients with acromegaly. Data Availability All data included in this study are available upon request by contact with the corresponding author. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
Long-term field studies in bat research: importance for basic and applied research questions in animal behavior Animal species differ considerably in longevity. Among mammals, short-lived species such as shrews have a maximum lifespan of about a year, whereas long-lived species such as whales can live for more than two centuries. Because of their slow pace of life, long-lived species are typically of high conservation concern and of special scientific interest. This applies not only to large mammals such as whales, but also to small-sized bats and mole-rats. To understand the typically complex social behavior of long-lived mammals and protect their threatened populations, field studies that cover substantial parts of a species' maximum lifespan are required. However, long-term field studies on mammals are an exception because the collection of individualized data requires considerable resources over long time periods in species whose individuals can live for decades. Field studies that span decades do not fit well into the current career and funding regime in science. This is unfortunate, as the existing long-term studies on mammals have yielded exciting insights into animal behavior and contributed data important for protecting their populations. Here, I present results of long-term field studies on the behavior, demography, and life history of bats, with a particular focus on my long-term studies on wild Bechstein's bats. I show that long-term studies on individually marked populations are invaluable for understanding the social system of bats, investigating the causes and consequences of their extraordinary longevity, and assessing their responses to changing environments, with the aim of efficiently protecting these unique mammals in the face of anthropogenic global change. General introduction In this review, I will address the significance of long-term field studies on bats for basic and applied research questions in animal behavior.
For the general importance of long-term field studies in animals, I refer to the reviews by Clutton-Brock and Sheldon and Reinke et al. Notably, in a recent series of papers on long-term field studies in mammals (summarized in Hayes and Schradin 2017), bats have not been covered. The reason for this lack of coverage may be that long-term field studies on bats are rare, even more so than long-term field studies on other mammalian taxa. Being small, nocturnal, and able to fly, bats present particular challenges to long-term research. At the same time, bats offer multiple reasons why long-term field studies investigating their behavior, demography, and population structure are highly rewarding. Bats are of high interest for research on aging and have diverse and sometimes complex social systems (McCracken and Wilkinson 2000; Kerth 2008). They also provide multiple ecosystem services, including plant pollination, seed dispersal, and consumption of pest insects. Finally, bats can carry viruses of high zoonotic potential such as Ebola, Marburg, or corona viruses (Mollentze and Streicker 2020), but little is known about virus transmission pathways within and between bat colonies. Bats have a life history that is quite different from that of other mammals: the vast majority of the more than 1400 bat species are group living, they are the only mammals capable of active flight, and most bat species use sophisticated echo-orientation systems, which allow them to navigate in complete darkness (Norberg and Rayner 1987; Kunz and Fenton 2007; Kerth 2008; Simmons and Cirranello 2020). Moreover, given their small size (ca. 3-1500 g), bats are extraordinarily long-lived, with many species reaching a maximum age of 20 years and more (Barclay and Harder 2003; Munshi-South and Wilkinson 2010). Their peculiar life history makes bats very interesting for behavioral ecologists.

Communicated by F. Trillmich
At the same time, many bat species are of high conservation concern, with populations declining in most parts of the world (Racey and Entwistle 2003). In the face of the ongoing global biodiversity crisis, studying the behavior of bats can help to protect these extraordinary animals by addressing applied research questions and collecting conservation-relevant data. The general relevance of studies on animal behavior for the protection of threatened species gave rise to the field of "Conservation Behavior" about 15 years ago (Buchholz 2007; Blumstein and Fernández-Juricic 2010). One focus of this invited review will be on our long-term studies on Bechstein's bats (Myotis bechsteinii), which we have performed since 1993 with RFID-tagged and DNA-genotyped populations that live in deciduous forests in Southern Germany (Kerth and König 1996; Kerth and van Schaik 2012). But I will also report and discuss the results of other field studies on bats at the interface between conservation and behavior (Table 1). This includes two long-term studies that, like ours, have been running for several decades: one study has investigated an English population of greater horseshoe bats (Rhinolophus ferrumequinum) for more than 60 years, which makes it the longest continuous study on a population of marked bats (Ransome 1989). The other study is on the critically endangered New Zealand long-tailed bat (Chalinolobus tuberculatus), living in temperate rain forests in Fiordland, New Zealand, and has been running for three decades (Sedgeley and O'Donnell 1999; O'Donnell 2000). What do I mean when I speak of a long-term field study in bats? The definition of a long-term field study depends on the study organism and the research question tackled. Reinke and co-workers state: "a key aspect of what makes a study long-term is the ability to observe and understand temporal variation."
For example, if occasional disease outbreaks or rare catastrophic weather conditions shape the population dynamics of a long-lived species, even a study covering 20 years may not provide enough replicates to identify the factors causing population crashes. Thus, in the context of responses to climate change or rare catastrophic events, a study should probably cover more than two decades in order to be truly termed "long-term." In contrast, a 10- to 15-year study on the social behavior of a long-lived species will typically allow researchers to unravel the dynamics of the social structure and to document even rare behaviors, such as occasional dispersal events in a highly philopatric species. For the remainder of this paper, I will refer to "long-term" field studies if they cover at least three generations or twice the average lifespan (= life expectancy) of the population under study. For most bats, this means that a field study needs to cover at least 10-15 years to justify being called "long-term" (for comparison: the life expectancy of Bechstein's bats is about 5-6 years, and their generation time is about 4-5 years). My rationale for this definition is that it only becomes possible to measure the lifetime reproductive success of individuals and the temporal variation in population dynamics and social structure if a study covers at least such a time frame. Moreover, I will only address long-term field studies in which bats have been individually marked and most marked individuals have been observed or recaptured many times during the study period. Thus, I will not discuss the results of large-scale banding studies on bat migration, which sometimes span several decades but where recapture rates are typically very low (<5%) and where the bats often cannot be assigned to a specific population. In large-scale banding studies of this kind, individuals are recaptured on too few occasions to study the causes and consequences of behaviors at the individual level.
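The working definition of "long-term" given above (at least three generations or twice the life expectancy) can be expressed as a simple rule. The function below is my illustration of that rule, not the author's; the Bechstein's bat values are taken from the text:

```python
def is_long_term(study_years, generation_time, life_expectancy):
    """Rule from the text: a field study is 'long-term' if it spans at least
    three generations OR twice the average lifespan of the study population."""
    return study_years >= 3 * generation_time or study_years >= 2 * life_expectancy

# Bechstein's bat: life expectancy ~5-6 years, generation time ~4-5 years
print(is_long_term(12, generation_time=4.5, life_expectancy=5.5))  # True
print(is_long_term(8, generation_time=4.5, life_expectancy=5.5))   # False
```

The 12-year example satisfies the lifespan criterion (12 ≥ 2 × 5.5) even though it falls short of three generations, which is consistent with the text's 10-15-year threshold for most bats.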
Table 1 provides an overview of the long-term studies on bats I am aware of. Why do we need long-term field studies on bats? Due to the longevity of bats, certain research topics, such as changes in phenology, life history, or social systems in response to environmental change, can only be addressed with individualized long-term field data (compare Clutton-Brock and Sheldon 2010). However, it is important to note that many other interesting research topics do not require long-term field studies. Examples include studies on the echo-orientation of wild bats, which can be carried out during one or two field trips (e.g., von Helversen and von Helversen 1999; Siemers and Schnitzler 2004). Studies using population genetic tools to investigate cryptic behavior such as dispersal or mate choice typically require field work of at most a few years to obtain the required DNA samples (e.g., Petit and Mayer 1999). Moreover, experimental field studies that cover a few seasons can reveal fascinating insights into the behavior of wild bats. This includes studies on communication and cognition in bats (e.g., Page and Ryan 2005). Finally, even field studies addressing the effect of anthropogenic changes of the environment on bats, such as the impact of light pollution on habitat use, do not always require individualized long-term data.

Table 1: Overview of the long-term studies covered in this review. In these studies, bats have been individually marked, and their populations have been continuously monitored for at least 10 years (see text for more details). Colony/population sizes give an approximate range of the number of adult bats per colony/hibernation site. N of marked bats gives the range of bats marked in each study, based on the given references or on personal communication with the respective researchers.
Long-term individualized field studies are, however, essential whenever measuring the survival and lifetime reproductive success of individual bats is required to address a research question. Lifetime reproductive success (the total progeny produced by an individual) is a well-established but difficult-to-obtain measure of individual fitness in wild mammals (Clutton-Brock 1988). Information on individual fitness is required for evaluating the ultimate causes of behavior as well as for understanding the drivers of population dynamics, which is crucial for conservation (Clutton-Brock and Sheldon 2010). Because of their longevity and low annual reproductive output, measuring lifetime reproductive success in a number of bats large enough to allow for statistical analyses will often require the monitoring of dozens of individuals for a decade or more (e.g., Ransome 1995). Finally, to understand the factors shaping the dynamics of wild bat populations in response to changing environments, such as global warming or predation by introduced predators, long-term data are typically needed (Macdonald 2018, 2020; Mundinger et al. 2022). What are the challenges of long-term field studies on bats? Researchers working on free-ranging bats face numerous challenges, some of which are multiplied if one attempts to work on the same population for a long time. The nocturnal activity of bats, in combination with the fact that many species spend the day hidden in often inaccessible roosts (Kunz and Fenton 2007), often requires the use of technical devices, such as radio- and RFID-tags or infrared video recording, that allow for monitoring the behavior of bats in roosts without disturbing the animals (Kunz and Parsons 2009). A further obstacle to the study of bats is the fact that their vocalizations are typically in the ultrasound range, which again requires special devices to study their orientation and communication systems (Kunz and Parsons 2009).
Finally, because bats can fly, their home ranges, dispersal distances, or seasonal migrations often exceed the detection range of small bio-logging devices such as radio-telemetry transmitters. At the same time, most bat species are too small to carry GPS-tags, which, where applicable, are a fantastic tool for monitoring the movements of bats over large distances. Current recommendations are that tag weight should not exceed 10% of the bats' body mass and should be less than 5% if bats are tagged for longer than just a few days. This means that for most bat species, tag weight should not exceed 1 g. In this context, it is important to emphasize that in Europe and many other countries, bats are strictly protected by law (Barova and Streit 2018). In addition to the permits for working with protected species, animal welfare permits are required in many countries for marking bats with RFID-tags or other bio-logging tags. The same applies to taking wing-tissue samples for assessing the genetic relatedness among the study animals. The need to obtain permits for working with animals is of course not restricted to bats, but their small size limits which techniques are applicable without interfering with their well-being. To overcome the difficulties of directly observing the behavior of bats, population genetic tools can be used to address research questions dealing with dispersal, gene flow, kin selection, mating behavior, reproductive success, and seasonal migrations (e.g., Petit and Mayer 1999; Rossiter et al. 2005). For example, as direct observations of mating and lactation events of marked individuals are difficult to obtain in wild bats, population genetic tools are typically needed for constructing pedigrees, which are relevant for quantifying fitness-related parameters and individual traits.
As a consequence of the difficulties of obtaining both detailed demographic data and comprehensive DNA samples from wild bat populations, multi-generational pedigree data are available for only a few bat species. While scientifically highly rewarding, the combination of technology-driven monitoring and population genetic tools makes field studies on bats cost-intensive. For long-term field studies this is even more relevant, as these costs must be covered over long time periods. Clearly, this does not go well with the early phase of an academic career and the typically short-term funding regime in science. Consequently, most long-term studies were not intended to become "long-term" when they were initiated by early-career researchers. Another inherent challenge to long-term field studies is that it is often logistically difficult to intensively monitor more than one study population, which makes it very hard to achieve replication (compare Table 1). The fact that long-term studies typically develop from studies that were originally designed to address a specific research question causes problems of its own. Most importantly, the consistency of data collection and long-term data storage is not always guaranteed over the course of an "emerging" long-term study, e.g., due to the almost inevitable personnel turnover. Maintaining standardized data collection is difficult when it is passed on every few years, for example, to a new cohort of doctoral students. Protocols that describe how to collect and store field data can certainly facilitate standardization, but some variance in data collection is unavoidable in long-term field studies. For example, field experiments carried out to address certain research questions can complicate the analyses of long-term data on other research questions if the experiments interfere with the standard data collection or the setup of field sites.
Some of these issues may be resolved by careful data cleaning and the use of modern statistical tools, such as generalized linear mixed models (GLMMs), that can deal with noisy and heterogeneous data. Nevertheless, data heterogeneity remains a major challenge for long-term studies. How do current methodological advances facilitate long-term field studies on bats? The study of bat behavior has profited enormously from the breathtaking range and speed of technological advancement over the last decades. One striking example is the automatic monitoring of bat roosts with RFID-tag loggers. Implanted RFID-tags were first used to mark bats in the 1990s (Kerth and König 1996; Brooke 1997) and are now a well-established method for the automatic monitoring of bat roosts (Kerth and Reckardt 2003; Burns and Broders 2015). The biggest advantage of this method, compared with classical banding studies that rely on the capture-mark-recapture of animals, is that it allows an individual's arrival and emergence to be documented without interfering with the animals at the roost. However, this works only if the roost entrance is small enough for placing an antenna (typically in the range of less than 0.5 m²). Moreover, the automatic identification of individuals passing through the antenna is most accurate if bats crawl instead of flying through. This is probably the reason why many long-term field studies on bats that use RFID technology work with species that roost in tree cavities and bat boxes (Table 1). In more recent years, novel bio-logging technologies, such as proximity tags, have enabled bat researchers to monitor the social behavior of bats even outside of their roosts. The miniaturization of bio-logging tags nowadays allows behavior to be documented even in small bat species.
However, while passive induced RFID-tags allow monitoring of the behavior of bats over their entire lifespan, radio-transmitters, GPS-tags, and proximity tags require batteries and thus can only be fitted to bats for a limited time. At the same time, more traditional behavioral observation methods for bats, such as infrared video monitoring and thermography, continue to be important (Kunz and Parsons 2009). Overall, no single method will allow all research questions arising during a long-term study to be addressed. Thus, a careful evaluation of the advantages and disadvantages of each method for the particular research question and study species is always required. What did we learn from three decades of studying wild Bechstein's bats? Information on whether or not individuals disperse from their natal area or their social group is essential for understanding the social systems of animals (e.g., Clutton-Brock 2016) and for protecting their populations. In bats of the temperate zone, population genetic studies as well as capture-mark-recapture studies have revealed that females are typically philopatric to their natal colony (e.g., Petit and Mayer 1999). Female Bechstein's bats take this to the extreme: during 25 years of intense roost monitoring with RFID-loggers, in combination with genetic mother-offspring assignment in four colonies (Kerth and van Schaik 2012), only two female immigration events were documented. Population genetic studies of a large number of colonies living in Germany and Bulgaria confirmed that females are highly philopatric, independently of the region where their colonies live (Kerth et al. 2002b; Kerth and van Schaik 2012). Why are Bechstein's bat colonies closed societies? In our study sites, no ecological barriers exist that would restrict females from moving between the monitored colonies.
Adult males, who live solitarily during summer, always disperse from their natal colony and often settle in the area of another maternity colony (a;Kerth and Morf 2004). Moreover, using confrontation tests, we could show that females can discriminate between members of their own colony and females that belong to foreign colonies. While females showed no aggression toward colony mates, they attacked foreign females that entered their roost during our confrontation tests (b). This suggests that colony members prevent the immigration of foreign females at times. The recognition of foreign females is probably based on olfactory cues and possibly also on vocal cues (Safi and Kerth 2003;Siemers and Kerth 2006). However, in multiple years of nightly infra-red video-recording in day roosts (e.g., a), we never observed aggressive behavior indicative of an immigration attempt. Thus, even unsuccessful immigration attempts seem to be rare. Why should colony members prevent foreign females from entering their day roost? In many mammal societies, group members show some degree of territoriality and xenophobic behavior (Clutton-Brock 2016). The reason for aggression toward foreigners is typically the defense of crucial resources, such as feeding areas or mates, as found in meerkats (Suricata suricatta; ). In Bechstein's bats, however, it is unclear which critical resources a colony could defend. Unlike in many other bat species (McCracken and Wilkinson 2000), mate defense plays no role, as there are no adult males present in the colonies and mating takes place away from the summer habitat (b;Kerth and Morf 2004). It is also unlikely that colony members are able to defend their foraging sites by defending communal day roosts. Colony members forage in individual areas that are spread widely across the forest and are typically located several hundred meters from the colony's day roosts (a;Kerth and Melber 2009;).
Do colonies defend their roosts because suitable roosts themselves are a limited resource? In one of our study sites, Natterer's bats (Myotis nattereri) preferentially occupy roosts recently used by brown long-eared bats (Plecotus auritus) but not by Bechstein's bats. This suggests there is some competition over day roosts between forest-living bat species. However, female Bechstein's bats use up to 50 communal roosts (bat boxes and tree cavities) during their 5-month breeding season (Kerth and König 1999;). Moreover, during their almost daily roost-switching, colonies often split into subgroups that use separate day roosts before they mix or fuse again (Kerth and König 1999;). Fission-fusion behavior and frequent switching of roosts are widespread in forest-living bats (Kerth and König 1999;O'Donnell 2000;Willis and Brigham 2004;;). For colonies that switch roosts every other day, a single roost may be of limited value and probably can only be defended while the colony occupies it. Are there costs to accepting foreigners into colonies? Energetic benefits from social thermoregulation are often used to explain why female bats form colonies to communally raise their offspring (Willis and Brigham 2007;Kerth 2008). However, social thermoregulation is unlikely to be the driver of xenophobic behavior in Bechstein's bats. For social thermoregulation to be most efficient, the number of bats that cluster in a roost should be more important than the individual composition of a group (;). Thus, it seems unlikely that by accepting a foreign female, colonies would impair their social thermoregulation capabilities. A similar argument may apply to two other social behaviors of female Bechstein's bats that ensure the functioning of their colonies: information transfer and collective decision-making about communal day roosts (Kerth and Reckardt 2003;;).
It remains unclear whether the immigration of foreign females would compromise the integrity of the fission-fusion society of female Bechstein's bats, which strongly depends on coordinated roost-switching. It seems likely that immigrants are at first unfamiliar with the local area. Thus, they may initially contribute less to the information transfer about communal roosts than the original colony members. But the same applies to the young females born in the colony, which also first need to learn about the roosts in the colony's home range but which are nevertheless allowed to stay in the natal colony. Female Bechstein's bats are egalitarian breeders with low reproductive skew (a). Thus, eviction of females in order to avoid reproductive competition, a behavior which occurs in mammal species with a strong female reproductive skew (Clutton-Brock 2016), also does not apply to Bechstein's bats. Finally, there is mixed evidence for kin selection in Bechstein's bats (Kerth and Reckardt 2003;). Because the females mate promiscuously with males born in foreign colonies, colonies consist of closely related as well as genetically largely unrelated females. This leads to an overall low average relatedness in colonies despite strong female philopatry (b). In summary, neither reproductive competition among females nor resource defense nor kin-based cooperative behavior can explain why female Bechstein's bats live in closed societies. Could parasite avoidance be the reason for living in closed societies? By avoiding contact with foreign females, Bechstein's bats could limit the introduction of parasites into their colonies. Bats carry a variety of parasites and viruses, many of which are contact-transmitted and thus could possibly be avoided by living in closed societies (Kerth and van Schaik 2012).
Overall, surprisingly little is known about the fitness costs that the different parasites and pathogens inflict on their bat hosts under natural conditions, with the most notable exception of the fungus Pseudogymnoascus destructans, which causes mass mortality in North American bats (e.g., ). For Bechstein's bats, we have no direct information on the costs of carrying blood-sucking wing mites (Spinturnix bechsteini) and bat flies (Basilia nana), both of which are largely host-specific. However, we assume such costs exist, as it is known that wing mites can cause energetic costs in mouse-eared bats (Myotis myotis; ) and because female Bechstein's bats show several behaviors that help them reduce parasite load. For example, females allo-groom each other (a), a behavior also observed in other bat species (Carter and Leffer 2015). Moreover, by regularly switching day roosts and by avoiding previously occupied roosts, Bechstein's bats can reduce infestation with bat flies, whose contagious puparia remain in the roosts after the bats have moved on (Reckardt and Kerth 2007). Population genetic studies showed that Bechstein's bat colonies harbor wing mites that are genetically differentiated from those occurring in neighboring colonies (;van ). This suggests that the purely contact-transmitted mites are not exchanged between colonies during summer, when the bats live in closed societies. However, this does not prevent parasite transmission during other periods of the bats' yearly life cycle. By comparing the genetic composition of the mite populations of the same Bechstein's bat colonies between years, we could show that there is a large temporal genetic turnover in the mite population of a given bat colony (). Bechstein's bats, like many other bats of the temperate zone, mate in autumn at "swarming sites," where males and females from many different colonies meet (b;).
There, the bats cannot avoid contact with non-colony members, e.g., males that may function as vectors, and this apparently leads to the transfer of parasites between colonies that have no contact during summer. Thus, living in a closed society during summer does not prevent the spread of contact-transmitted parasites during the autumn mating season. Moreover, although the peculiar social system of Bechstein's bats has an impact on their wing mites and bat flies, parasite life history, such as transmission mode and the mortality toll during winter, is also a major player in the system (van ). Thus, to make a 30-year research story short: we still do not fully understand why Bechstein's bat colonies are closed societies. Although we cannot explain why female Bechstein's bats stay in their natal colony throughout their lives, the implications for conservation management are clear: each colony needs to be protected as a demographically independent unit (). At the same time, gene flow must be maintained by protecting central mating sites (b). Moreover, the limited dispersal and colony-foundation capacities of the species (Kerth and Petit 2005; Kerth and van Schaik 2012) require that conservation management enable populations to cope with climate change in situ. This first requires that we know which factors shape the local population dynamics of Bechstein's bats. In a recent paper, Mundinger et al. used long-term data to show that juvenile Bechstein's bats born in warmer summers grew to larger body sizes. With an increasing number of warmer summers over the last two decades, juveniles thus grew larger on average. While this is interesting per se, as many species shrink in response to global warming (Sheridan and Bickford 2011), growing larger also has negative consequences for the survival of females. In our study population, larger females have a higher mortality risk (;).
Thus, with ongoing climate change, populations may decline if females continue to grow to large sizes and their higher mortality is not offset by higher reproduction. Interestingly, there is evidence that larger females indeed show a faster pace of life, reproducing at an earlier age than smaller females (). For long-term population persistence, however, mitigating the negative effects of global warming may require improved local habitat quality with high food availability and a large number of tree cavities that offer a wide range of micro-climatic conditions (compare b). Future perspectives of long-term field studies on bats With the ongoing anthropogenic global change, long-term studies that investigate the responses of bats to changing environments are more important than ever. In birds, long-term monitoring data revealed shifts in the timing of reproduction and the phenology of seasonal migrations, raising concerns about an increasing mismatch between prey availability and the species' breeding seasons (Cotton 2003;). In contrast, relatively little is known about shifts in the breeding or hibernation phenology of bats in response to global warming (but see Macdonald 2018, 2020;;;). Moreover, a few studies have used climate-envelope models to predict the future distribution of bat species (e.g., ). However, such studies often suffer from the lack of data on the capability of bats to cope with changing environments in situ, e.g., by adjusting roost selection or by switching to alternative prey. Clearly, more long-term studies that measure the flexibility of individual behavior are needed to assess whether bats can successfully adapt their hibernation and breeding phenology to changing climatic conditions. In a recent study in which we analyzed our long-term monitoring data, we were able to show that rare population crashes drive the population dynamics of Bechstein's bats ().
However, it remained unclear what exactly caused the observed increase in mortality in such catastrophic years. We assume that cold weather conditions in autumn, when the bats need to accumulate fat for hibernation, and in spring, when the bats emerge from their hibernacula, caused the observed high mortality. At the same time, there is evidence from other long-term studies on bats that weather conditions have species-, age-, and sex-specific effects on bats (;Linton and Macdonald 2018;;). All this makes predictions on population persistence in bats even more difficult and warrants more long-term field studies on additional bat species and in different regions of the world. Overall, basic questions relating to the demography of bats are still largely unanswered: for example, to what degree do intrinsic and extrinsic factors influence mortality risk and reproductive success in bats, and how do both factors interact with each other? The available long-term data suggest that demographic (age), genetic (heterozygosity), morphological (size), and environmental conditions (weather; predation) all affect mortality in bats (;;;;Culina et al., 2019;). However, without individualized long-term data on further species, including bats of the tropics, for which almost no such data exist, efficient conservation management of bat populations in times of a rapidly changing world is difficult. The long-term study of the New Zealand long-tailed bat by O'Donnell and co-workers (Sedgeley and O'Donnell 1999;O'Donnell 2000;) is a prime example of how long-term capture-mark-recapture data, combined with behavioral and environmental data, can help to protect endangered bat populations: in the New Zealand long-tailed bat, increased predation by introduced predators in masting years of southern beeches (Nothofagus spec.) reduces the bats' survival probability and leads to re-occurring population crashes that will ultimately drive the species to extinction ().
The authors showed that only a significant reduction of the predation pressure through predator control can save the New Zealand long-tailed bat in the long term. Being exceptionally long-lived, bats are of high interest to researchers studying aging. But even studies that focus entirely on the molecular mechanisms underlying the extraordinary longevity of bats require long-term field data from marked individuals to collect the relevant samples from bats of known (old) ages (;). Moreover, in order to understand the ultimate and proximate causes of bat longevity and the reasons for their negligible senescence (;), long-term field studies are needed that investigate how reproduction decisions influence individual mortality risks and ultimately individual fitness (Ransome 1995;;). The current SARS-CoV-2 pandemic is a fresh reminder that outbreaks of zoonotic viruses are sometimes connected to bats, even though bats are not exceptional in how many zoonotic viruses they carry compared to other mammals after accounting for species richness (Mollentze and Streicker 2020). However, if we want to go beyond a mere description of the viruses occurring in bat populations, we need long-term studies that monitor virus dynamics in individually marked bat populations and relate the detected virus load to the individual behavior of bats, such as the formation of close roosting associations (e.g., ). One novel and highly rewarding approach in field studies on bats is the establishment of captive colonies that are later re-introduced into the wild, where they can be investigated for a long time. Such an approach has been highly successful in the case of the "in-house" colony of wild Egyptian fruit bats (Rousettus aegyptiacus) established by Yovel and co-workers (e.g., ). A similar approach has been carried out in vampire bats (Desmodus rotundus) in Panama, again leading to fascinating insights into bat social behavior.
Unfortunately, the establishment of captive colonies is only possible in species, such as fruit-eating bats, in which feeding bats in captivity is feasible. In most insectivorous bats, long-term studies will have to continue to rely on access to wild colonies, with all the inherent logistic problems outlined previously in this review. The emergence of the disastrous white-nose disease in North America, which wiped out many bat populations, is a dramatic reminder that long-term studies on bats can become endangered themselves if the study species disappears, and not only because funding may cease at some point. One example of a long-term study that ended after 15 years of continuous monitoring is a Canadian study on forest-living big brown bats (Eptesicus fuscus). This study investigated research topics at the interface of physiology, fission-fusion behavior, colony foundation, and applied conservation (Willis and Brigham 2004;;). Another interesting example is the neotropical greater sac-winged bat, Saccopteryx bilineata. The greater sac-winged bat has been studied for 50 years (M. Knörnschild et al., unpublished data), which makes it one of the best-studied tropical bat species. This iconic species is well known for its fascinating mating behavior as well as its complex acoustic and olfactory communication (e.g., Nagy et al., 2012). However, because of population crashes in the respective study populations in Costa Rica and Panama (;M. Knörnschild et al., unpubl. data), no single marked population could be continuously monitored for more than 8-10 years, and individualized long-term data that cover the whole lives of individual bats are therefore largely lacking. Fortunately, population crashes do not always mean that a long-term study needs to be terminated; instead, they may provide interesting insights into the drivers of population dynamics (e.g., ;).
There is also hope for new long-term data becoming accessible to science: at least in Europe, many citizen scientists and conservation groups run their own banding studies on local bat populations, sometimes already for decades. By combining their long-term data on local bat populations with the statistical, population genetic, and writing skills of professional scientists, those studies can gain more international visibility and will lead to new insights into bat behavior and conservation (e.g., ;;Linton and Macdonald 2020;; see also Table 1). The exceptionally long-term study on greater horseshoe bats initiated by Roger Ransome 60 years ago is a prime example of how a long-term collaboration between citizen scientists and professional scientists can lead to fantastic insights into the life of bats, ranging from demography, mating behavior, and aging to applied conservation aspects (Ransome 1989;;;;). Conclusions Currently, only a few long-term field studies on individually marked bats exist that span more than 10 years, as summarized in Table 1. The existing studies have a clear regional (mostly Europe) and taxonomic bias (mostly Vespertilionid species). However, many more field studies on bats exist that have the potential to develop into long-term studies, if the respective researchers can secure the necessary long-term funding and achieve academic tenure. I am convinced that, with the further advance of miniaturized monitoring techniques, those studies will lead to fascinating new insights into the biology of bats. One notable aspect of long-term studies on bats is that they often depend on collaborations between citizen scientists, conservationists, and professional scientists. This is by no means a caveat; instead, it enables a fruitful exchange of knowledge among bat researchers with different training backgrounds and facilitates the transfer of the obtained scientific results into conservation practice. support at different stages of my career.
Moreover, I thank the local conservation authorities and forest departments for their administrative and practical support. My research would not have been possible without the many helpers in the field and the dedicated co-workers that I had the pleasure to work with. I am grateful to Marcus Fritze, Carolin Mundinger, Jaap van Schaik, Alexander Scheuerlein, and two anonymous referees for comments on earlier versions of this manuscript. Funding Open Access funding enabled and organized by Projekt DEAL. Conflict of interest The author declares no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
After the flirtation came the fatwa.
With some overly friendly comments to Gov. Sarah Palin at the United Nations, Asif Ali Zardari has succeeded in uniting one of Pakistan's hard-line mosques and its feminists after a few weeks in office.
A radical Muslim prayer leader said the president shamed the nation for "indecent gestures, filthy remarks, and repeated praise of a non-Muslim lady wearing a short skirt."
Feminists charged that once again a male Pakistani leader has embarrassed the country with sexist remarks. And across the board, the Pakistani press has shown disapproval.
What did President Zardari do to draw such scorn? It might have been the "gorgeous" compliment he gave Ms. Palin when the two met at the UN last week during her meet-and-greet with foreign leaders ahead of Thursday's vice presidential debate with opponent Sen. Joe Biden, the Democratic vice presidential nominee.
But the comments from Zardari didn't end there. He went on to tell Palin: "Now I know why the whole of America is crazy about you."
"You are so nice," replied the Republican vice presidential hopeful, smiling. "Thank you."
But what may have really caused Pakistan's radical religious leaders to stew was his comment that he might "hug" Palin if his handler insisted.
Though the fatwa, issued days after the Sept. 24 exchange, carries little weight among most Pakistanis, it's indicative of the anger felt by Pakistan's increasingly assertive conservatives who consider physical contact and flattery between a man and woman who aren't married to each other distasteful. Though fatwas, or religious edicts, can range from advice on daily life to death sentences, this one does not call for any action or violence.
Last year, the mosque that issued the fatwa, Lal Masjid (Red Mosque) in Islamabad, condemned the former tourism minister, Nilofar Bahktiar, after she was photographed being hugged by a male parachuting coach in France.
Clerics declared the act a "great sin" and, though less vocal about it, similar sentiments were shared by many among Pakistani's middle classes. The Red Mosque gained international infamy in July 2007 after becoming the focal point of a Pakistan Army operation.
For the feminists it's less about cozying up to a non-Muslim woman and more about the sexist remarks by Zardari.
"As a Pakistani and as a woman, it was shameful and unacceptable. He was looking upon her merely as a woman and not as a politician in her own right," says Tahira Abdullah, a member of the Women's Action Forum.
Dismissing the mosque's concerns as "ranting," she, however, adds: "He should show some decorum – if he loved his wife so much as to press for a United Nations investigation into her death, he should behave like a mourning widower," in reference to former Pakistani premier Benazir Bhutto, a feminist icon for millions of Pakistani women.
The theme of decorum was picked up by English daily Dawn, whose editorial asked: "Why do our presidents always end up embarrassing us internationally by making sexist remarks?"
The incident bears some resemblance to yet another charm offensive by a senior Pakistani politician. Marcus Mabry's biography of Condoleezza Rice includes a passage in which he relates a meeting between former Pakistani Prime Minister Shaukat Aziz and Ms. Rice, in which Mr. Aziz was said to have stared deeply into the secretary of State's eyes and to have told her he could "conquer any woman in two minutes."
There are some, however, who see things as having been blown out of proportion.
"It was a sweet and innocuous exchange played as an international incident on Pakistani and rascally Indian front-pages with one English daily [writing] it in a scarlet box, half-implying Mrs. Palin would ditch Alaska's First Dude and become Pakistan's First Babe. As if," wrote columnist Fasih Ahmed in the Daily Times.
For most, it will soon be forgotten in a country dealing with terrorism, rising food prices, and a struggling economy. "We don't care that much how they [politicians] behave – what really matters is keeping prices down," says Nazeera Bibi, a maid in Lahore.
package demo1;
import java.util.concurrent.atomic.AtomicInteger;
/**
 * A ticket spin lock, used mainly to enforce access ordering on multi-core CPUs.
 * The ticket number only ever increases: a thread may proceed only when the
 * service number matches the ticket it drew; otherwise it keeps looping in
 * the while loop, i.e., it busy-spins.
 * @author <EMAIL>
 * @date 2018/3/20
 */
public class SpinLock_TicketLock {

    private AtomicInteger serviceNum = new AtomicInteger();
    private AtomicInteger ticketNum = new AtomicInteger();
    private static final ThreadLocal<Integer> LOCAL = new ThreadLocal<>();

    public void lock() {
        int myTicket = ticketNum.getAndIncrement();
        LOCAL.set(myTicket);
        // Every iteration reads serviceNum back from main memory, preventing
        // other CPUs from caching it locally, which hurts performance.
        while (myTicket != serviceNum.get()) {
        }
    }

    public void unlock() {
        int myTicket = LOCAL.get();
        serviceNum.compareAndSet(myTicket, myTicket + 1);
    }
}
package main
import (
	"net/http"
	"os"
	"testing"

	"github.com/fnproject/cli/client"
)
func TestEnvAsHeader(t *testing.T) {
	const expectedValue = "v=v"
	os.Setenv("k", expectedValue)

	cases := [][]string{
		nil,
		{},
		{"k"},
	}
	for _, selectedEnv := range cases {
		req, _ := http.NewRequest("GET", "http://www.example.com", nil)
		client.EnvAsHeader(req, selectedEnv)
		if found := req.Header.Get("k"); found != expectedValue {
			t.Errorf("expected header not found, got: %v", found)
		}
	}
}
An adaptive antenna selection scheme for transmit diversity in OFDM systems In wireless communication systems, as one of the effective techniques for combating fading, transmit diversity has attracted much attention, especially when receive diversity is expensive or impractical due to the constraint of terminal size. In this study we consider an adaptive, carrier-by-carrier antenna selection scheme for transmit diversity in OFDM systems under the assumption that the transmitter has a priori knowledge of the channel state information (CSI). In the proposed scheme, transmit antennas are selected adaptively for each subcarrier according to the CSI.
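The per-subcarrier selection idea can be sketched as follows. This is an illustrative toy model, not the paper's actual algorithm: the two-antenna setup, the Gaussian channel draws, and the max-gain selection criterion are all assumptions made for the example.

```python
import random

def select_antennas(channel):
    """Per-subcarrier transmit antenna selection from full CSI.

    `channel[tx][k]` is the complex frequency response of transmit
    antenna `tx` on subcarrier `k`; for each subcarrier we pick the
    antenna with the largest channel gain |H|.
    """
    num_subcarriers = len(channel[0])
    selection = []
    for k in range(num_subcarriers):
        gains = [abs(channel[tx][k]) for tx in range(len(channel))]
        selection.append(gains.index(max(gains)))
    return selection

# Toy example: 2 transmit antennas, 8 subcarriers, Rayleigh-like fading.
random.seed(7)
H = [[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(8)]
     for _ in range(2)]
chosen = select_antennas(H)
for k, tx in enumerate(chosen):
    # The chosen antenna always has the stronger channel on subcarrier k.
    assert abs(H[tx][k]) == max(abs(H[0][k]), abs(H[1][k]))
```

In a real system the selection would be recomputed whenever the CSI is updated, and the receiver would need to know (or be able to infer) which antenna carries each subcarrier.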
Taking students on a local field trip just got a lot easier for local teachers with the new field trip city bus program.
In partnership with Kingston Transit, the Teachers Field Trip Pass pilot project allows teachers to sign out a field trip pass from their school’s office and use it to ride city buses for free, getting their class to and from their destination.
With children under the age of 14 riding the bus for free, it’s just one swipe of the pass to cover the teachers and all helpers.
Each school has three teachers field trip bus passes available.
“This is about normalizing and training and making transit the mode of transportation, and right now there are some barriers,” Dan Hendry, sustainable initiative co-ordinator with the Limestone District School Board, said.
Who do you talk to?
Which bus do you take?
Can you eat on the bus?
How do you stop the bus when you want to get off?
Hendry sees using public transit as an important skill to learn and includes proper etiquette and a connection to the community.
As an example, this past Friday, Melissa Buchan took 18 students from her kindergarten class at Centennial Public School to the Miller Museum at Queen’s University, with a Whig-Standard reporter in tow.
For this particular field trip, the kindergarten students, along with three adults — Buchan, an early childhood educator and a student teacher from Queen’s University — walked from their school to the nearest bus stop to catch the No. 2 Kingston Transit bus to the museum.
The bus stop was about a five-to-10-minute walk from the school.
Walking single file on one side of the sidewalk, the young students happily explained that it wasn’t the regular school bus they were taking on their field trip, but the blue-and-white bus.
In this case, the bus was fairly full, but that didn’t stop other riders from offering their seats to the youngsters.
Most of the students spent the ride with their eyes glued out the windows, as the bus drove to St. Lawrence College and then along Union Street to Queen’s University.
Students also answered questions from the other bus riders about their destination.
There are 22 city bus routes that cover the city.
Since the program’s debut at the beginning of September, Hendry has seen a number of teachers use the pass to get students around the city, including getting a small group to a track meet.
Teachers can contact Kingston Transit in advance of the trip for advice and help in planning the trip.
The pilot field trip city bus program is being used in both Limestone District School Board and Algonquin and Lakeshore Catholic District School Board schools in the Kingston area.
<filename>examples/firdespm_callback_example.c
//
// firdespm_callback_example.c
//
// This example demonstrates finite impulse response filter design
// using the Parks-McClellan algorithm with callback function for
// arbitrary response and weighting function.
//
// SEE ALSO: firdes_kaiser_example.c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "liquid.h"
#define OUTPUT_FILENAME "firdespm_callback_example.m"
// user-defined callback function defining response and weights
int callback(double   _frequency,
             void   * _userdata,
             double * _desired,
             double * _weight)
{
    // de-reference user-data pointer as the sinc filter length
    unsigned int n  = *((unsigned int*)_userdata);
    double       v  = sincf(n*_frequency);
    double       fc = 1.0f / (float)n;

    // inverse sinc
    if (_frequency < fc) {
        *_desired = 1.0f / v;   // inverse of sinc
        *_weight  = 4.0f;
    } else {
        *_desired = 0.0f;       // stop-band
        *_weight  = 10*fabs(v) * exp(4.0*_frequency);
    }
    return 0;
}
int main(int argc, char*argv[])
{
    // filter design parameters
    unsigned int n     = 8;     // sinc filter length
    unsigned int h_len = 81;    // inverse sinc filter length
    liquid_firdespm_btype btype = LIQUID_FIRDESPM_BANDPASS;
    unsigned int num_bands = 2;
    float bands[4] = {0.00f, 0.75f/(float)n,    // pass-band
                      1.05f/(float)n, 0.5f};    // stop-band

    // design filter
    float h[h_len];
    firdespm q = firdespm_create_callback(h_len,num_bands,bands,btype,callback,&n);
    firdespm_execute(q,h);
    firdespm_destroy(q);

    // print coefficients
    unsigned int i;
    for (i=0; i<h_len; i++)
        printf("h(%4u) = %16.12f;\n", i+1, h[i]);

    // open output file
    FILE*fid = fopen(OUTPUT_FILENAME,"w");
    fprintf(fid,"%% %s : auto-generated file\n", OUTPUT_FILENAME);
    fprintf(fid,"clear all;\n");
    fprintf(fid,"close all;\n\n");
    fprintf(fid,"n=%u;\n", n);
    fprintf(fid,"h_len=%u;\n", h_len);
    for (i=0; i<h_len; i++)
        fprintf(fid,"h(%4u) = %20.8e;\n", i+1, h[i]);
    fprintf(fid,"nfft=1024;\n");
    fprintf(fid,"H0=20*log10(abs(fftshift(fft(ones(1,n)/n,nfft))));\n");
    fprintf(fid,"H1=20*log10(abs(fftshift(fft(h, nfft))));\n");
    fprintf(fid,"Hc=H0+H1;\n");
    fprintf(fid,"f=[0:(nfft-1)]/nfft-0.5;\n");
    fprintf(fid,"figure;\n");
    fprintf(fid,"hold on;\n");
    fprintf(fid,"plot(f,H0,'Color',[0.5 0.5 0.5],'LineWidth',1);\n");
    fprintf(fid,"plot(f,H1,'Color',[0.0 0.2 0.5],'LineWidth',1);\n");
    fprintf(fid,"plot(f,Hc,'Color',[0.0 0.5 0.2],'LineWidth',2);\n");
    fprintf(fid,"hold off;\n");
    fprintf(fid,"grid on;\n");
    fprintf(fid,"xlabel('normalized frequency');\n");
    fprintf(fid,"ylabel('PSD [dB]');\n");
    fprintf(fid,"legend('sinc','inverse sinc','composite');\n");
    fprintf(fid,"title('Filter design (firdespm), inverse sinc');\n");
    fprintf(fid,"axis([-0.5 0.5 -80 20]);\n");
    fclose(fid);
    printf("results written to %s.\n", OUTPUT_FILENAME);
    printf("done.\n");
    return 0;
}
/*****************************************************************************
* ------------------------------------------------------------------------- *
* Licensed under the Apache License, Version 2.0 (the "License"); *
* you may not use this file except in compliance with the License. *
* You may obtain a copy of the License at *
* *
* http://www.apache.org/licenses/LICENSE-2.0 *
* *
* Unless required by applicable law or agreed to in writing, software *
* distributed under the License is distributed on an "AS IS" BASIS, *
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. *
* See the License for the specific language governing permissions and *
* limitations under the License. *
*****************************************************************************/
package com.google.mu.util.stream;
import static java.util.Objects.requireNonNull;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.Queue;
import java.util.Spliterator;
import java.util.Spliterators.AbstractSpliterator;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;
import java.util.stream.Collector;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;
import com.google.mu.function.CheckedConsumer;
/**
* Static utilities pertaining to {@link Stream} in addition to relevant utilities in Jdk and Guava.
*
* @since 1.1
*/
public final class MoreStreams {
/**
* Returns a Stream produced by iterative application of {@code step} to the initial
* {@code seed}, producing a Stream consisting of seed, elements of step(seed),
* elements of step(x) for each x in step(seed), etc.
* (If the result stream returned by the {@code step} function is null, an empty stream is used
* instead.)
*
* <p>While {@code Stream.generate(supplier)} can be used to generate infinite streams,
* it's not as easy to generate a <em>finite</em> stream unless the size can be pre-determined.
* This method can be used to generate finite streams: just return an empty stream when the
* {@code step} determines that there's no more elements to be generated.
*
* <p>A typical group of use cases are BFS traversal algorithms.
* For example, to stream the tree nodes in BFS order: <pre>{@code
* Stream<Node> bfs(Node root) {
* return generate(root, node -> node.children().stream());
* }
* }</pre>
*
* It's functionally equivalent to the following common imperative code: <pre>{@code
* List<Node> bfs(Node root) {
* List<Node> result = new ArrayList<>();
* Queue<Node> queue = new ArrayDeque<>();
* queue.add(root);
* while (!queue.isEmpty()) {
* Node node = queue.remove();
* result.add(node);
* queue.addAll(node.children());
* }
* return result;
* }
* }</pre>
*
* A BFS 2-D grid traversal algorithm: <pre>{@code
* Stream<Cell> bfs(Cell startingCell) {
* Set<Cell> visited = new HashSet<>();
* visited.add(startingCell);
* return generate(startingCell, c -> c.neighbors().filter(visited::add));
* }
* }</pre>
*
* <p>At every step, 0, 1 or more elements can be generated into the resulting stream.
* As discussed above, returning an empty stream leads to eventual termination of the stream;
* returning 1-element stream is equivalent to {@code Stream.generate(supplier)};
* while returning more than one element allows a single element to fan out to multiple
* elements.
*
* @since 1.9
*/
public static <T> Stream<T> generate(
T seed, Function<? super T, ? extends Stream<? extends T>> step) {
requireNonNull(step);
Queue<Stream<? extends T>> queue = new ArrayDeque<>();
queue.add(Stream.of(seed));
return whileNotNull(queue::poll)
.flatMap(seeds -> seeds.peek(
v -> {
Stream<? extends T> fanout = step.apply(v);
if (fanout != null) {
queue.add(fanout);
}
}));
}
/**
* Flattens {@code streamOfStream} and returns an unordered sequential stream of the nested
* elements.
*
* <p>Logically, {@code stream.flatMap(fanOut)} is equivalent to
* {@code MoreStreams.flatten(stream.map(fanOut))}.
* Due to this <a href="https://bugs.openjdk.java.net/browse/JDK-8075939">JDK bug</a>,
* {@code flatMap()} uses {@code forEach()} internally and doesn't support short-circuiting for
* the passed-in stream. {@code flatten()} supports short-circuiting and can be used to
* flatten infinite streams.
*
* @since 1.9
*/
public static <T> Stream<T> flatten(Stream<? extends Stream<? extends T>> streamOfStream) {
return mapBySpliterator(streamOfStream.sequential(), 0, FlattenedSpliterator<T>::new);
}
/**
* Iterates through {@code stream} <em>only once</em>. It's strongly recommended
* to avoid assigning the return value to a variable or passing it to any other method because
* the returned {@code Iterable}'s {@link Iterable#iterator iterator()} method can only be called
* once. Instead, always use it together with a for-each loop, as in:
*
* <pre>{@code
* for (Foo foo : iterateOnce(stream)) {
* ...
* if (...) continue;
* if (...) break;
* ...
* }
* }</pre>
*
* The above is equivalent to manually doing:
*
* <pre>{@code
* Iterable<Foo> foos = stream::iterator;
* for (Foo foo : foos) {
* ...
* }
* }</pre>
* except using this API eliminates the need for a named variable that escapes the scope of the
* for-each loop. And code is more readable too.
*
* <p>Note that {@link #iterateThrough iterateThrough()} should be preferred whenever possible
* due to the caveats mentioned above. This method is still useful when the loop body needs to
* use control flows such as {@code break} or {@code return}.
*/
public static <T> Iterable<T> iterateOnce(Stream<T> stream) {
return stream::iterator;
}
/**
* Iterates through {@code stream} sequentially and passes each element to {@code consumer}
* with exceptions propagated. For example:
*
* <pre>{@code
* void writeAll(Stream<?> stream, ObjectOutput out) throws IOException {
* iterateThrough(stream, out::writeObject);
* }
* }</pre>
*/
public static <T, E extends Throwable> void iterateThrough(
Stream<? extends T> stream, CheckedConsumer<? super T, E> consumer) throws E {
requireNonNull(consumer);
for (T element : iterateOnce(stream)) {
consumer.accept(element);
}
}
/**
* Dices {@code stream} into smaller chunks each with up to {@code maxSize} elements.
*
* <p>For a sequential stream, the first N-1 chunks will contain exactly {@code maxSize}
* elements and the last chunk may contain less (but never 0).
* However for parallel streams, it's possible that the stream is split in roughly equal-sized
* sub streams before being diced into smaller chunks, which then will result in more than one
* chunks with less than {@code maxSize} elements.
*
* <p>This is an <a href="https://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html#StreamOps">
* intermediate operation</a>.
*
* @param stream the source stream to be diced
* @param maxSize the maximum size for each chunk
* @return Stream of diced chunks each being a list of size up to {@code maxSize}
* @throws IllegalArgumentException if {@code maxSize <= 0}
*/
public static <T> Stream<List<T>> dice(Stream<? extends T> stream, int maxSize) {
requireNonNull(stream);
if (maxSize <= 0) throw new IllegalArgumentException();
return mapBySpliterator(stream, Spliterator.NONNULL, it -> dice(it, maxSize));
}
/**
* Dices {@code spliterator} into smaller chunks each with up to {@code maxSize} elements.
*
* @param spliterator the source spliterator to be diced
* @param maxSize the maximum size for each chunk
* @return Spliterator of diced chunks each being a list of size up to {@code maxSize}
* @throws IllegalArgumentException if {@code maxSize <= 0}
*/
public static <T> Spliterator<List<T>> dice(Spliterator<? extends T> spliterator, int maxSize) {
requireNonNull(spliterator);
if (maxSize <= 0) throw new IllegalArgumentException();
return new DicedSpliterator<T>(spliterator, maxSize);
}
/**
* Returns a collector that collects {@link Map} entries into a combined map. Duplicate keys cause {@link
* IllegalStateException}. For example:
*
* <pre>{@code
* Map<FacultyId, Account> allFaculties = departments.stream()
* .map(Department::getFacultyMap)
* .collect(uniqueKeys());
* }</pre>
*
* <p>Use {@link BiStream#groupingValuesFrom} if there are duplicate keys.
*
* @since 1.13
*/
public static <K, V> Collector<Map<K, V>, ?, Map<K, V>> uniqueKeys() {
return Collectors.collectingAndThen(
BiStream.groupingValuesFrom(Map::entrySet, (a, b) -> {
throw new IllegalStateException("Duplicate keys not allowed: " + a);
}),
BiStream::toMap);
}
/**
* Returns a collector that collects input elements into a list, which is then arranged by the
* {@code arranger} function before being wrapped as <em>immutable</em> list result.
* List elements are not allowed to be null.
*
* <p>Example usages: <ul>
* <li>{@code stream.collect(toListAndThen(Collections::reverse))} to collect to reverse order.
* <li>{@code stream.collect(toListAndThen(Collections::shuffle))} to collect and shuffle.
* <li>{@code stream.collect(toListAndThen(Collections::sort))} to collect and sort.
* </ul>
*
* @since 4.2
*/
public static <T> Collector<T, ?, List<T>> toListAndThen(Consumer<? super List<T>> arranger) {
requireNonNull(arranger);
Collector<T, ?, List<T>> rejectingNulls =
Collectors.mapping(Objects::requireNonNull, Collectors.toCollection(ArrayList::new));
return Collectors.collectingAndThen(rejectingNulls, list -> {
arranger.accept(list);
return Collections.unmodifiableList(list);
});
}
/**
* Returns an infinite {@link Stream} starting from {@code firstIndex}.
* Can be used together with {@link BiStream#zip} to iterate over a stream with index.
* For example: {@code zip(indexesFrom(0), values)}.
*
* <p>To get a finite stream, use {@code indexesFrom(...).limit(size)}.
*
* <p>Note that while {@code indexesFrom(0)} will eventually incur boxing cost for every integer,
* the JVM typically pre-caches small {@code Integer} instances (by default up to 127).
*
* @since 3.7
*/
public static Stream<Integer> indexesFrom(int firstIndex) {
return IntStream.iterate(firstIndex, i -> i + 1).boxed();
}
/**
* Returns a (potentially infinite) stream of {@code collection} until {@code collection} becomes
* empty.
*
* <p>The returned stream can be terminated by removing elements from the underlying collection
* while the stream is being iterated.
*
* @since 3.8
*/
public static <C extends Collection<?>> Stream<C> whileNotEmpty(C collection) {
requireNonNull(collection);
return whileNotNull(() -> collection.isEmpty() ? null : collection);
}
/**
* Similar to {@link Stream#generate}, returns an infinite, sequential, unordered, and non-null
* stream where each element is generated by the provided Supplier. The stream however will
* terminate as soon as the Supplier returns null, in which case the null is treated as the
* terminal condition and doesn't constitute a stream element.
*
* <p>For sequential iterations, {@code whileNotNull()} is usually more concise than implementing
* {@link AbstractSpliterator} directly. The latter requires boilerplate that looks like this:
*
* <pre>{@code
* return StreamSupport.stream(
* new AbstractSpliterator<T>(MAX_VALUE, NONNULL) {
* public boolean tryAdvance(Consumer<? super T> action) {
* if (hasData) {
* action.accept(data);
* return true;
* }
* return false;
* }
* }, false);
* }</pre>
*
* Which is equivalent to the following one-liner using {@code whileNotNull()}:
*
* <pre>{@code
* return whileNotNull(() -> hasData ? data : null);
* }</pre>
*
* <p>Why null? Why not {@code Optional}? Wrapping every generated element of a stream in an
* {@link Optional} carries considerable allocation cost. Also, while nulls are in general
* discouraged, they are mainly a problem for users who have to remember to deal with them.
* The stream returned by {@code whileNotNull()} on the other hand is guaranteed to never include
* nulls that users have to worry about.
*
* <p>If you already have an {@code Optional} from a method return value, you can use {@code
* whileNotNull(() -> optionalReturningMethod().orElse(null))}.
*
* <p>One may still need to implement {@code AbstractSpliterator} or {@link java.util.Iterator}
* directly if null is a valid element (usually discouraged though).
*
* <p>If you have an imperative loop over a mutable queue or stack:
*
* <pre>{@code
* while (!queue.isEmpty()) {
* int num = queue.poll();
* if (someCondition) {
* ...
* }
* }
* }</pre>
*
* it can be turned into a stream using {@code whileNotNull()}:
*
* <pre>{@code
* whileNotNull(queue::poll).filter(someCondition)...
* }</pre>
*
* @since 4.1
*/
public static <T> Stream<T> whileNotNull(Supplier<? extends T> supplier) {
requireNonNull(supplier);
return StreamSupport.stream(
new AbstractSpliterator<T>(Long.MAX_VALUE, Spliterator.NONNULL) {
@Override public boolean tryAdvance(Consumer<? super T> action) {
T element = supplier.get();
if (element == null) return false;
action.accept(element);
return true;
}
}, false);
}
/**
* Returns a collector that first copies all input elements into a new {@code Stream} and then
* passes the stream to {@code toSink} function, which translates it to the final result.
*
* @since 3.6
*/
static <T, R> Collector<T, ?, R> copying(Function<Stream<T>, R> toSink) {
return Collectors.collectingAndThen(toStream(), toSink);
}
static <F, T> Stream<T> mapBySpliterator(
Stream<F> stream, int characteristics,
Function<? super Spliterator<F>, ? extends Spliterator<T>> mapper) {
requireNonNull(mapper);
Stream<T> mapped = StreamSupport.stream(
() -> mapper.apply(stream.spliterator()), characteristics, stream.isParallel());
mapped.onClose(stream::close);
return mapped;
}
/** Copying input elements into another stream. */
private static <T> Collector<T, ?, Stream<T>> toStream() {
return Collector.of(
Stream::<T>builder,
Stream.Builder::add,
(b1, b2) -> {
b2.build().forEachOrdered(b1::add);
return b1;
},
Stream.Builder::build);
}
private static <F, T> T splitThenWrap(
Spliterator<F> from, Function<? super Spliterator<F>, ? extends T> wrapper) {
Spliterator<F> it = from.trySplit();
return it == null ? null : wrapper.apply(it);
}
private static final class DicedSpliterator<T> implements Spliterator<List<T>> {
private final Spliterator<? extends T> underlying;
private final int maxSize;
DicedSpliterator(Spliterator<? extends T> underlying, int maxSize) {
this.underlying = requireNonNull(underlying);
this.maxSize = maxSize;
}
@Override public boolean tryAdvance(Consumer<? super List<T>> action) {
requireNonNull(action);
List<T> chunk = new ArrayList<>(chunkSize());
for (int i = 0; i < maxSize && underlying.tryAdvance(chunk::add); i++) {}
if (chunk.isEmpty()) return false;
action.accept(chunk);
return true;
}
@Override public Spliterator<List<T>> trySplit() {
return splitThenWrap(underlying, it -> new DicedSpliterator<>(it, maxSize));
}
@Override public long estimateSize() {
long size = underlying.estimateSize();
return size == Long.MAX_VALUE ? Long.MAX_VALUE : estimateChunks(size);
}
@Override public long getExactSizeIfKnown() {
return -1;
}
@Override public int characteristics() {
return Spliterator.NONNULL;
}
private int chunkSize() {
long estimate = underlying.estimateSize();
if (estimate <= maxSize) return (int) estimate;
// The user could set a large chunk size for an unknown-size stream, don't blow up memory.
return estimate == Long.MAX_VALUE ? Math.min(maxSize, 8192) : maxSize;
}
private long estimateChunks(long size) {
long lower = size / maxSize;
return lower + ((size % maxSize == 0) ? 0 : 1);
}
}
private static final class FlattenedSpliterator<T> implements Spliterator<T> {
private final Spliterator<? extends Stream<? extends T>> blocks;
private Spliterator<? extends T> currentBlock;
private final Consumer<Stream<? extends T>> nextBlock = block -> {
currentBlock = block.spliterator();
};
FlattenedSpliterator(Spliterator<? extends Stream<? extends T>> blocks) {
this.blocks = requireNonNull(blocks);
}
private FlattenedSpliterator(
Spliterator<? extends T> currentBlock, Spliterator<? extends Stream<? extends T>> blocks) {
this.blocks = requireNonNull(blocks);
this.currentBlock = currentBlock;
}
@Override public boolean tryAdvance(Consumer<? super T> action) {
requireNonNull(action);
if (currentBlock == null && !tryAdvanceBlock()) {
return false;
}
boolean advanced = false;
while ((!(advanced = currentBlock.tryAdvance(action))) && tryAdvanceBlock()) {}
return advanced;
}
@Override public Spliterator<T> trySplit() {
return splitThenWrap(blocks, it -> {
Spliterator<T> result = new FlattenedSpliterator<>(currentBlock, it);
currentBlock = null;
return result;
});
}
@Override public long estimateSize() {
return Long.MAX_VALUE;
}
@Override public long getExactSizeIfKnown() {
return -1;
}
@Override public int characteristics() {
// While we maintain encounter order as long as 'blocks' does, returning an ordered stream
// (which can be infinite) could surprise users when the user does things like
// "parallel().limit(n)". It's sufficient for normal use cases to respect encounter order
// without reporting order-ness.
return 0;
}
private boolean tryAdvanceBlock() {
return blocks.tryAdvance(nextBlock);
}
}
private MoreStreams() {}
}
|
Dylan Hockley, 6, died in the shooting at Sandy Hook Elementary School in Newtown on Friday.
Stories of beauty and selflessness in the face of horror continue to trickle out in the wake of last week's mass shooting at Sandy Hook Elementary. The latest is the story of Anne Marie Murphy, an aide assigned to work with the special needs students in Victoria Soto's class.
One of those students was Dylan Hockley, who had moved to Newtown, Conn. just two years ago from England.
He was learning to read, his parents said. He loved bouncing on his trampoline, he loved seeing the moon. He loved chocolate and computer games and adored his big brother Jake.
"Dylan had dimples and blue eyes," his grandmother Theresa Moretti told The Boston Herald. "He had the most mischievous little grin."
The 6 year old died in the attack on his elementary school.
The story of the massacre has been told. There were morning announcements. The gunman came in. There was screaming and gunshots. Twenty-six people were fatally shot.
Dylan and his aide were among the least fortunate, but they died together, Murphy cradling her student in her arms, according to a statement released by Hockley's family.
"We take great comfort in knowing that Dylan was not alone when he died, but was wrapped in the arms of his amazing aide, Anne Marie Murphy," his family wrote in a statement. "Dylan loved Mrs. Murphy so much and pointed at her picture on our refrigerator every day."
Murphy, 52, was a mother herself. She had four children and was part of a big, warm family, her parents told Newsday.
"She was a happy soul," her mother Alice McGowen told the newspaper. "She was a very good daughter, a good mother, a good wife."
She was the sixth of seven children and her parents were planning on hosting a house full of children and grandchildren for Christmas Eve. They still will, though there will be one less guest.
"I've done my crying. Haven't we all?" McGowen told Newsday. "I'll miss her presence. She died doing what she loved. She was serving children and serving God."
Authorities told Murphy's father that she died shielding students from the bullets.
Dylan's teacher, Victoria Soto—a 27-year-old third-year teacher—also lost her life trying to stand between her first-graders and the attacker, her uncle said on ABC News.
The Hockley family said Soto "was warm and funny and Dylan loved her dearly," and added that they chose Sandy Hook specifically for its elementary school and that they have no regrets about their choice of location.
"Our boys have flourished here and our family's happiness has been limitless."
They credited the school's principal and psychologist with helping them navigate Dylan's special education needs.
Principal Dawn Hochsprung—whose Twitter account left a heartbreaking record of her devotion to her school, her community and its youngest students—also died in the attack. So did Mary Sherlach, the school psychologist, a mother and wife, nearly ready to retire.
Dylan’s family said that while their hearts break for their youngest son, they "are also filled with love for these and the other beautiful women who all selflessly died trying to save our children."
We want to give sincere thanks and appreciation to the emergency services and first responders who helped everyone on Friday, December 14. It was an impossible day for us, but even in our grief we cannot comprehend what other people may have experienced.
The support of our beautiful community and from family, friends and people around the world has been overwhelming and we are humbled. We feel the love and comfort that people are sending and this gives our family strength. We thank everyone for their support, which we will continue to need as we begin this long journey of healing. Our thoughts and prayers are with the other families who have also been affected by this tragedy. We are forever bound together and hope we can support and find solace with each other.
Sandy Hook and Newtown have warmly welcomed us since we moved here two years ago from England. We specifically chose Sandy Hook for the community and the elementary school. We do not and shall never regret this choice. Our boys have flourished here and our family's happiness has been limitless.
We cannot speak highly enough of Dawn Hochsprung and Mary Sherlach, exceptional women who knew both our children and who specifically helped us navigate Dylan's special education needs. Dylan's teacher, Vicki Soto, was warm and funny and Dylan loved her dearly.
We take great comfort in knowing that Dylan was not alone when he died, but was wrapped in the arms of his amazing aide, Anne Marie Murphy. Dylan loved Mrs. Murphy so much and pointed at her picture on our refrigerator every day. Though our hearts break for Dylan, they are also filled with love for these and the other beautiful women who all selflessly died trying to save our children.
Everyone who met Dylan fell in love with him. His beaming smile would light up any room and his laugh was the sweetest music. He loved to cuddle, play tag every morning at the bus stop with our neighbors, bounce on the trampoline, play computer games, watch movies, the color purple, seeing the moon and eating his favorite foods, especially chocolate. He was learning to read and was so proud when he read us a new book every day. He adored his big brother Jake, his best friend and role model.
There are no words that can express our feeling of loss. We will always be a family of four, as though Dylan is no longer physically with us, he is forever in our hearts and minds. We love you Mister D, our special gorgeous angel. |
Saskatoon city hall will take a look at getting out of the impound lot business.
City council’s transportation committee endorsed a proposal on Monday to reconsider whether the city should continue running its own vehicle impound lot in light of complaints from the towing industry.
A City of Saskatoon report recommends raising rates at the impound lot and expanding the city’s reach to include vehicles impounded by SGI to address a deficit in running the lot.
Representatives of the towing industry and the North Saskatoon Business Association appeared at Monday’s committee meeting to criticize the plan.
“It sounds like a great idea, except that I can’t compete with the City of Saskatoon because I have to follow different rules than you,” said Harvey Britton, who owns a towing company in Rosthern and is vice-president of the Roadside Responders Association of Saskatchewan Inc.
The city has used property tax revenue to make up an average annual deficit of $48,000 for the last four years as the number of impounded vehicles dropped from 2,759 in 2015 to 2,411 last year.
Conversely, the number of vehicles impounded each year by SGI has skyrocketed from 463 in 2016 to 1,637 in 2018.
A city report that will be considered by council proposes increasing the rates for a three-day impoundment to $116 from $95.
The same report says adding vehicles impounded by SGI would provide $300,000 to $400,000 in extra annual surplus for the city.
The city contracts out its towing services for about $200,000 a year, the committee heard. Privatizing the impound lot would still require a position to oversee the program for about $90,000 a year, which would need to be covered by property tax, the report says.
The city-owned land on which the lot is located is valued at $2.2 million. |
// parser/keys.go
package parser
import (
docopt "github.com/docopt/docopt-go"
"github.com/teamhephy/workflow-cli/cmd"
"github.com/teamhephy/workflow-cli/executable"
)
// Keys routes key commands to the specific function.
func Keys(argv []string, cmdr cmd.Commander) error {
usage := executable.Render(`
Valid commands for SSH keys:
keys:list list SSH keys for the logged in user
keys:add add an SSH key
keys:remove remove an SSH key
Use '{{.Name}} help [command]' to learn more.
`)
switch argv[0] {
case "keys:list":
return keysList(argv, cmdr)
case "keys:add":
return keyAdd(argv, cmdr)
case "keys:remove":
return keyRemove(argv, cmdr)
default:
if printHelp(argv, usage) {
return nil
}
if argv[0] == "keys" {
argv[0] = "keys:list"
return keysList(argv, cmdr)
}
PrintUsage(cmdr)
return nil
}
}
func keysList(argv []string, cmdr cmd.Commander) error {
usage := executable.Render(`
Lists SSH keys for the logged in user.
Usage: {{.Name}} keys:list [options]
Options:
-l --limit=<num>
the maximum number of results to display, defaults to config setting
`)
args, err := docopt.Parse(usage, argv, true, "", false, true)
if err != nil {
return err
}
results, err := responseLimit(safeGetValue(args, "--limit"))
if err != nil {
return err
}
return cmdr.KeysList(results)
}
func keyAdd(argv []string, cmdr cmd.Commander) error {
usage := executable.Render(`
Adds SSH keys for the logged in user.
Usage: {{.Name}} keys:add [<name>] [<key>]
<name> and <key> can be used in either order and are both optional
Arguments:
<name>
name of the SSH key
<key>
a local file path to an SSH public key used to push application code.
`)
args, err := docopt.Parse(usage, argv, true, "", false, true)
if err != nil {
return err
}
return cmdr.KeyAdd(safeGetValue(args, "<name>"), safeGetValue(args, "<key>"))
}
func keyRemove(argv []string, cmdr cmd.Commander) error {
usage := executable.Render(`
Removes an SSH key for the logged in user.
Usage: {{.Name}} keys:remove <key>
Arguments:
<key>
the SSH public key to revoke source code push access.
`)
args, err := docopt.Parse(usage, argv, true, "", false, true)
if err != nil {
return err
}
return cmdr.KeyRemove(safeGetValue(args, "<key>"))
}
|
import axios from "axios";
import type { Coin } from "./types";
function tickerFromAssetHash(assetHash: string) {
return assetHash.slice(0, 4).toUpperCase();
}
export function assetHashToCoin(assetHash: string): Coin {
const ticker = tickerFromAssetHash(assetHash);
const precision = 8;
const name = 'unknown';
return {
name,
precision,
ticker,
assetHash
};
}
/**
* a non-blocking function that returns the coin data for a given asset hash
* @param assetHash asset hash to get coin data for
 * @param explorerLiquidAPI the API supporting the /asset/{assetHash} endpoint
*/
export async function getAssetData(assetHash: string, explorerLiquidAPI: string): Promise<Coin> {
try {
const { precision, ticker, name } = (await axios.get(`${explorerLiquidAPI}/asset/${assetHash}`)).data;
return {
precision: precision ?? 8,
ticker: ticker || tickerFromAssetHash(assetHash),
name: name || '',
assetHash
};
} catch (e) {
console.error(e);
return assetHashToCoin(assetHash);
}
} |
The nonprofit affiliate of the Cystic Fibrosis Foundation (CFF), Cystic Fibrosis Foundation Therapeutics, Inc., will expand its research partnership with Genzyme in a collaboration that will be supported by $14 million in additional funding. The funding is expected to be invested in developing novel therapy options for patients who suffer from the most common mutation of cystic fibrosis (CF), F508del.
The funding from CFF Therapeutics will be used specifically in Genzyme’s R&D programs and was granted in order to support studies to identify new compounds able to repair the defective CFTR protein present in CF patients. The research is focused on the F508del mutation since about 90% of the 30,000 people who live with the disease in the U.S. have at least one copy of it.
“The Foundation is focused on supporting the discovery and development of powerful new therapies that attack the underlying cause of this deadly disease,” stated the president and CEO of the CFF, Robert J. Beall, PhD in a press release. “We are pleased to continue CFFT’s agreement with Genzyme and are excited by the possibilities of what our pooled knowledge, expertise and resources can bring.”
The CFF and Genzyme have been working together since 2011, when the two entities established a collaborative research program, which has already resulted in the study of numerous chemical compounds able to address abnormalities in the CFTR protein with the F508del mutation, as well as assist in the relocation of the defective protein for it to move into the cell’s surface.
Together with the new funding, Genzyme will focus on developing selected compounds and accelerating the process in order to eventually initiate new CF clinical trials, while the CFFT will continue its work with several ongoing research programs being conducted in collaboration with biotechnology and pharmaceutical companies focused on finding treatment for the basic genetic defect in CF.
The CFFT has also recently established a partnership with the specialty biopharmaceutical company Shire for the development of technology that can maintain and/or restore lung function, as well as protect against respiratory infections, which are two primary health concerns in CF patients. Shire has an extensive pipeline of promising therapeutics that the company believes will generate sales of about $3 billion by the year 2020, and the CFFT agreed to invest up to $15 million in their messenger RNA (mRNA) platform for cystic fibrosis.
|
// mockClient creates a goff.Client that returns the given content and error
// whenever client.GetFantasyContent is called.
func mockClient(f *FantasyContent, e error) *Client {
return &Client{
Provider: &mockedContentProvider{content: f, err: e, count: 0},
}
} |
An Application of Random Projection to Parameter Estimation in Partial Differential Equations In this article, we use a dimension reduction technique called random projection to reduce the computational cost of estimating unknown parameters in models based on partial differential equations (PDEs). In this setting, the repeated numerical solution of the discrete PDE model dominates the cost of parameter estimation. In turn, the size of the discretized PDE corresponds directly to the number of physical experiments. As the number of experiments grows, parameter estimation becomes prohibitively expensive. In order to reduce this cost, we develop an algorithmic technique based on random projection that solves the parameter estimation problem using a much smaller number of so-called encoded experiments. Encoded experiments amount to nothing more than random sums of physical experiments. Using this construction, we provide a lower bound for the required number of encoded experiments. This bound holds in a probabilistic sense and is independent of the number of physical experiments. Finally, we demonstrat... |
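The encoding step described in the abstract above — forming a small number of "encoded experiments" as random sums of the physical experiments — can be sketched in a few lines. This is an illustrative sketch only: the function name, the Gaussian weighting, and the list-of-vectors representation are assumptions for demonstration, and the paper's actual construction and bounds may differ.

```python
import random

def encode_experiments(experiments, num_encoded, seed=0):
    """Form 'encoded experiments' as random (Gaussian-weighted) sums of the
    physical experiment vectors. The weighting scheme here is an assumption
    chosen for illustration, not necessarily the paper's construction."""
    rng = random.Random(seed)
    m = len(experiments)       # number of physical experiments
    n = len(experiments[0])    # measurements per experiment
    encoded = []
    for _ in range(num_encoded):
        weights = [rng.gauss(0.0, 1.0) for _ in range(m)]
        # Each encoded experiment is a random linear combination of all
        # physical experiments, so PDE solves scale with num_encoded, not m.
        combo = [sum(w * exp[i] for w, exp in zip(weights, experiments))
                 for i in range(n)]
        encoded.append(combo)
    return encoded

# 100 physical experiments, each with 5 measurements, reduced to 8
experiments = [[float(i + j) for j in range(5)] for i in range(100)]
encoded = encode_experiments(experiments, num_encoded=8)
print(len(encoded), len(encoded[0]))  # prints: 8 5
```

Parameter estimation would then be run against the 8 encoded experiments instead of all 100, which is where the claimed cost reduction comes from.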
A Database of Non-Manual Signs in Turkish Sign Language Sign languages are visual languages. The message is not only transferred via hand gestures (manual signs) but also head/body motion and facial expressions (non-manual signs). In this article, we present a database of non-manual signs in Turkish sign language (TSL). There are eight non-manual signs in the database, which are frequently used in TSL. The database contains the videos of these signs as well as a ground truth data of 60 manually landmarked points of the face. |
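A consumer of such a database needs to pair each video with its per-frame landmark ground truth. The sketch below shows one way to parse 60 (x, y) landmark points per frame; the CSV-style layout (frame id followed by interleaved coordinates) is purely a hypothetical assumption, since the abstract does not specify the file format.

```python
import csv
import io

def load_landmarks(csv_text, num_points=60):
    """Parse rows of the form: frame_id, x0, y0, x1, y1, ..., x59, y59.
    This layout is a hypothetical assumption, not the database's documented
    format. Returns {frame_id: [(x, y), ...]} with num_points pairs each."""
    frames = {}
    for row in csv.reader(io.StringIO(csv_text)):
        frame_id = int(row[0])
        coords = [float(v) for v in row[1:]]
        assert len(coords) == 2 * num_points, "expected an x,y per landmark"
        # De-interleave into (x, y) tuples
        frames[frame_id] = list(zip(coords[0::2], coords[1::2]))
    return frames

# One frame with 60 dummy landmarks (120 coordinate values)
sample = "0," + ",".join(str(i) for i in range(120))
frames = load_landmarks(sample)
print(len(frames[0]))  # prints: 60
```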
#!/usr/bin/env python3
import unittest
from unittest.mock import mock_open
from text_to_image import encode_file
class EncodeFileTestCase(unittest.TestCase):
def test_nonexistent_file_input(self):
self.assertRaises(FileExistsError, encode_file, "nonexistent_file.txt", "file_output_image.png", limit=256)
self.assertRaises(FileExistsError, encode_file, "nonexistent_file.TXT", "file_output_image.png", limit=256)
def test_input_file_wrong_file_extension(self):
# self.assertRaises(TypeError, encode_file, "test_file.png", "file_output_image.png", limit=256)
# self.assertRaises(TypeError, encode_file, "test_file.pdf", "file_output_image.png", limit=256)
pass # TODO: needs mocking with text file
if __name__ == "__main__":
unittest.main()
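The TODO above, mocking a text file so the wrong-extension cases can be exercised, could look roughly like this (a sketch only: `check_extension` is a hypothetical stand-in for the extension check `encode_file` is assumed to perform, since its implementation is not shown here):

```python
import unittest
from unittest.mock import patch, mock_open


def check_extension(path):
    # Hypothetical stand-in for encode_file's assumed input validation:
    # reject anything that is not a .txt file, then read the contents.
    if not path.lower().endswith(".txt"):
        raise TypeError("input must be a .txt file")
    with open(path) as f:
        return f.read()


class WrongExtensionTestCase(unittest.TestCase):
    def test_png_input_rejected(self):
        # No real file needed: open() is mocked, and the extension check
        # fires before the file would be touched anyway.
        with patch("builtins.open", mock_open(read_data="hello")):
            self.assertRaises(TypeError, check_extension, "test_file.png")

    def test_txt_input_accepted(self):
        with patch("builtins.open", mock_open(read_data="hello")):
            self.assertEqual(check_extension("test_file.txt"), "hello")
```

The same `patch("builtins.open", mock_open(...))` pattern would let the commented-out tests above run against `encode_file` without creating files on disk.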
|
#ifndef _MYFLOCK_H_INCLUDED_
#define _MYFLOCK_H_INCLUDED_
/*++
/* NAME
/* myflock 3h
/* SUMMARY
/* lock open file
/* SYNOPSIS
/* #include <myflock.h>
/* DESCRIPTION
/* .nf
/*
* External interface.
*/
extern int myflock(int, int, int);
/*
* Lock styles.
*/
#define MYFLOCK_STYLE_FLOCK 1
#define MYFLOCK_STYLE_FCNTL 2
/*
* Lock request types.
*/
#define MYFLOCK_OP_NONE 0
#define MYFLOCK_OP_SHARED 1
#define MYFLOCK_OP_EXCLUSIVE 2
#define MYFLOCK_OP_NOWAIT 4
#define MYFLOCK_OP_BITS \
(MYFLOCK_OP_SHARED | MYFLOCK_OP_EXCLUSIVE | MYFLOCK_OP_NOWAIT)
/* LICENSE
/* .ad
/* .fi
/* The Secure Mailer license must be distributed with this software.
/* AUTHOR(S)
/* <NAME>
/* <NAME> Research
/* P.O. Box 704
/* Yorktown Heights, NY 10598, USA
/*--*/
#endif
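For illustration, a rough Python analogue of this interface using `fcntl.flock` (the mapping of the MYFLOCK_OP_* constants onto flock(2) flags is an assumption for the sketch, not taken from the Postfix implementation):

```python
import fcntl
import os
import tempfile

# Illustrative analogue of myflock() assuming the MYFLOCK_STYLE_FLOCK style.
MYFLOCK_OP_NONE = 0
MYFLOCK_OP_SHARED = fcntl.LOCK_SH
MYFLOCK_OP_EXCLUSIVE = fcntl.LOCK_EX
MYFLOCK_OP_NOWAIT = fcntl.LOCK_NB


def myflock(fd, operation):
    """Return 0 on success, -1 if a MYFLOCK_OP_NOWAIT request would block."""
    try:
        if operation == MYFLOCK_OP_NONE:
            fcntl.flock(fd, fcntl.LOCK_UN)   # release any held lock
        else:
            fcntl.flock(fd, operation)       # shared/exclusive, optionally non-blocking
        return 0
    except BlockingIOError:
        return -1


# Usage: take and release an exclusive lock on a scratch file.
fd, path = tempfile.mkstemp()
locked = myflock(fd, MYFLOCK_OP_EXCLUSIVE)
released = myflock(fd, MYFLOCK_OP_NONE)
os.close(fd)
os.unlink(path)
```

Note that flock-style locks are advisory: they only coordinate processes that also call the locking primitive.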
|
/**
* An adapter that allows a Reactor {@link Dispatcher} to be used as a Spring {@link
* TaskExecutor}.
*
* @author Jon Brisbin
*/
public class DispatcherTaskExecutor implements TaskExecutor {
private final Registry<Consumer<? extends Event<?>>> consumerRegistry = new EmptyConsumerRegistry();
private final EventRouter eventRouter = new RunnableEventRouter();
private final Dispatcher dispatcher;
/**
* Creates a new DispatcherTaskExecutor that will use the given {@code dispatcher} to execute
* tasks.
*
* @param dispatcher
* The dispatcher to use
*/
@Autowired
public DispatcherTaskExecutor(Dispatcher dispatcher) {
this.dispatcher = dispatcher;
}
@Override
public void execute(Runnable task) {
dispatcher.dispatch(null, Event.wrap(task), consumerRegistry, null, eventRouter, null);
}
private static final class RunnableEventRouter implements EventRouter {
@Override
public void route(Object key,
Event<?> event,
List<Registration<? extends Consumer<? extends Event<?>>>> consumers,
Consumer<?> completionConsumer,
Consumer<Throwable> errorConsumer) {
((Runnable)event.getData()).run();
}
}
private static final class EmptyConsumerRegistry implements Registry<Consumer<? extends Event<?>>> {
@Override
public Iterator<Registration<? extends Consumer<? extends Event<?>>>> iterator() {
throw new UnsupportedOperationException();
}
@Override
public <V extends Consumer<? extends Event<?>>> Registration<V> register(Selector sel, V obj) {
throw new UnsupportedOperationException();
}
@Override
public boolean unregister(Object key) {
throw new UnsupportedOperationException();
}
@Override
public List<Registration<? extends Consumer<? extends Event<?>>>> select(Object key) {
return Collections.emptyList();
}
}
} |
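The adapter idea above, wrapping a dispatcher so it satisfies a one-method executor interface, can be sketched in Python (hypothetical names; any object exposing `submit(fn)` stands in for the Reactor `Dispatcher`):

```python
from concurrent.futures import ThreadPoolExecutor


# Hypothetical Python analogue of the Java adapter above: anything exposing
# submit(fn) is adapted to a simple execute(task) interface, the way
# DispatcherTaskExecutor adapts a Reactor Dispatcher to Spring's TaskExecutor.
class DispatcherTaskExecutor:
    def __init__(self, dispatcher):
        self._dispatcher = dispatcher  # assumed to expose submit(fn)

    def execute(self, task):
        # Fire-and-forget dispatch, mirroring TaskExecutor.execute(Runnable).
        self._dispatcher.submit(task)


# Usage: run a task through the adapter and wait for it to finish.
results = []
executor = DispatcherTaskExecutor(ThreadPoolExecutor(max_workers=1))
executor.execute(lambda: results.append("ran"))
executor._dispatcher.shutdown(wait=True)
```

The design point is the same in both languages: callers program against the small executor interface while the actual scheduling policy lives in the injected dispatcher.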
// Copyright (c) 2020 <NAME>.
// See LICENSE.txt for details.
#include "records/end_marker_generator.h"
namespace tc_records {
EndMarkerGenerator::EndMarkerGenerator() {}
tc_schema::EndMarker
EndMarkerGenerator::getRandomEndMarker() {
tc_schema::EndMarker em;
em.sessionNumber = sessionNumber++;
em.recordCounts = pg.getRandomMap(3);
return em;
}
} // namespace tc_records
|
def cancel_job(job_id: str):
_LOGGER.info("cancelling job {}".format(job_id))
try:
JobWorkerManager().cancel_job(job_id)
JobWorkerManager().refresh()
except JobNotFoundError:
raise HTTPNotFoundError("could not find job with job_id {}".format(job_id))
except JobCancelationFailureError:
raise ValidationError(
"job with job_id {} is not in a cancelable state".format(job_id)
)
return get_job(job_id) |
S7Mouse lung adenocarcinoma cell lines reveal PRL2C2 as a novel lung tumour promoter Background Carcinogen-inflicted human cancers, including lung tumours harbour thousands of mutations per genome, most of which are unknown (Garraway, LA et al, Cell 2013;153:1737). Aim To develop a faithful mouse model of human tobacco carcinogen-induced lung adenocarcinoma suitable for the identification of novel oncogenic genes and pathways. Methods We repeatedly managed to obtain several murine lung adenocarcinoma cell lines (MLA) by chronically exposing various mouse strains to different tobacco carcinogens. MLA were characterised for cancer stemness and oncogenes, as well as global gene expression. Results To date, 12 MLA cell lines have been derived from Wt and transgenic mice on the FVB, Balb/c, and C57BL/6 strains by means of urethane or diethylnitrosamine exposure. All MLA were immortal, phenotypically stable, and indefinitely passaged in vitro over a period of over 18 months and/or 60 passages. In addition, all cell lines were oncogenic, transplantable, metastatic, and uniformly lethal in vivo. Interestingly, MLA displayed Kras mutations in codon 61, mono- or bi-allelic Trp53 loss, and expression of lung cancer stemness factors Itgb3 and Lgr6, in amazing similarity to human lung cancers. Microarray revealed that all MLA cell lines heavily overexpressed Prl2c2, encoding proliferin, in comparison to the native lungs. Prl2c2 silencing diminished MLA proliferation and stemness, to a degree comparable with Itgb3 interference. Conclusions MLA are faithful models of human lung adenocarcinoma that led to the discovery of Prl2c2 as a candidate lung tumour promoter. Funding European Research Council Starting Independent Investigator Grant #260524. Respire 2 European Respiratory Society Fellowship, European Respiratory Society Short Term Research Fellowship. |
/*=============================================================================
Copyright (c) 2017 <NAME>
http://spirit.sourceforge.net/
Use, modification and distribution is subject to the Boost Software
License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt)
=============================================================================*/
#if !defined(BOOST_SPIRIT_QUICKBOOK_STRING_VIEW_HPP)
#define BOOST_SPIRIT_QUICKBOOK_STRING_VIEW_HPP
#include <boost/functional/hash/hash_fwd.hpp>
#include <boost/utility/string_view.hpp>
namespace quickbook {
// boost::string_view now can't be constructed from an rvalue std::string,
// which is something that quickbook does in several places, so this wraps
// it to allow that.
struct string_view : boost::string_view {
typedef boost::string_view base;
string_view() : base() {}
string_view(string_view const &x) : base(x) {}
string_view(std::string const &x) : base(x) {}
string_view(const char *x) : base(x) {}
string_view(const char *x, base::size_type len) : base(x, len) {}
std::string to_s() const { return std::string(begin(), end()); }
};
typedef quickbook::string_view::const_iterator string_iterator;
inline std::size_t hash_value(string_view const &x) {
return boost::hash_range(x.begin(), x.end());
}
inline std::string &operator+=(std::string &x, string_view const &y) {
return x.append(y.begin(), y.end());
}
} // namespace quickbook
#endif
|
Intellectual capital in churches: Matching solution complexity with problem complexity The problems organizations face have varying degrees of complexity. What is not often understood, however, is that the knowledge needed to solve these problems also varies in complexity, and should match the complexity of the problem itself. The current study provides grounded theory for how leaders in churches should approach problems relating to Intellectual Capital (IC) assets. These intangible assets are crucial to the ability of churches to create value that enriches the lives of individuals in their communities. In two, 90-minute focus groups, the leadership team of a United Methodist Church in South Carolina, USA was asked about their IC and their past, present, and future solutions to increasing IC value. Qualitative coding of these transcripts found that leadership often provided knowledge-based solutions that did not match the assumed complexity of the IC problem. This caused numerous failures in the maintenance of IC. It is suggested that church leadership view all problems as knowledge problems to uncover these hidden assumptions of complexity, and use these assumptions to seek out knowledge-based solutions that match that complexity. These findings can be extended to non-religious contexts. |
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
#include <stdio.h>
#include <xcb/xcb.h>
#include <xcb/randr.h>
int main() {
const char display_name[] = ":0";
/* Connect to X server */
xcb_connection_t* connection = xcb_connect(display_name, NULL);
/* Bail on failure */
if (xcb_connection_has_error(connection)) {
fprintf(stderr, "Connection failed.\n");
xcb_disconnect(connection);
return 1;
}
/* Get screen */
const xcb_setup_t* setup = xcb_get_setup(connection);
xcb_screen_t* screen = xcb_setup_roots_iterator(setup).data;
/* Create dummy window */
xcb_window_t dummy_id = xcb_generate_id(connection);
xcb_create_window(connection, 0, dummy_id, screen->root, 0, 0, 1, 1, 0,
XCB_WINDOW_CLASS_COPY_FROM_PARENT, XCB_COPY_FROM_PARENT, 0, NULL);
/* Flush pending requests */
xcb_flush(connection);
/* Disconnect */
xcb_disconnect(connection);
fprintf(stderr, "Success.\n");
return 0;
}
|
Risk assessment in the plateau pika (Ochotona curzoniae): intensity of behavioral response differs with predator species Background The ability of a prey species to assess the risk that a predator poses can have important fitness advantages for the prey species. To better understand predatorprey interactions, more species need to be observed to determine how prey behavioral responses differ in intensity when approached by different types of predators. The plateau pika (Ochotona curzoniae) is preyed upon by all predators occurring in its distribution area. Therefore, it is an ideal species to study anti-predator behavior. In this study, we investigated the intensity of anti-predator behavior of pikas in response to visual cues by using four predator species models in Maqu County on the eastern Qinghai-Tibetan Plateau. Results The behavioral response metrics, such as Flight Initiation Distance (FID), the hiding time and the percentage of vigilance were significantly different when exposed to a Tibetan fox, a wolf, a Saker falcon and a large-billed crow, respectively. Pikas showed a stronger response to Saker falcons compared to any of the other predators. Conclusions Our results showed that pikas alter their behavioral (such as FID, the hiding time and the vigilance) response intensity to optimally balance the benefits when exposed to different taxidermy predator species models. We conclude that pikas are able to assess their actual risk of predation and show a threat-sensitive behavioral response. Background It is crucial for prey species to evaluate and respond adaptively to risks posed by their predators, as predators have strong direct and indirect risk effects on prey species. Prey species can be exposed to a wide range of predator species that differ in size, density, habitat use, diel activity and hunting styles in natural systems. 
Studying the behavioral response intensity of prey to risks posed by different predator species is therefore an important component of improving our understanding of predator-prey interactions. Predation is a pervasive selection force that influences physiological, morphological, and behavioral adaptations in prey species in order to increase the chances of a successful escape. Generally, the assessment of predation risk is translated into the display of an antipredator behavior. Antipredator behavioral responses to predation risks include a reduction in foraging activity, increased vigilance, reduced movement, reduced use of conspicuous behavioral displays, increased hiding time in a refuge or shelter, and increased alarm calls. However, these behavioral strategies have associated costs, as they can provoke a reduction in factors such as energy intake, energetic investment in defensive structures, or lower mating success. As risk assessment is difficult to quantify, most studies use Flight Initiation Distance (FID), the hiding time in a refuge and vigilance as the metrics to study the risk levels associated with antipredator behaviors of prey species [7,14,. FID is the distance at which a prey starts to flee upon approach of a predator. Prey approached by predators often flee into refuges and emerge after a brief stay. The hiding time is the time from the moment that prey hides in a refuge to the moment it re-emerges. Vigilance is the time that prey spend gathering information used to observe predators and assess the potential predation risk. In general, a longer FID, a longer hiding time in a refuge and higher vigilance mean that the prey is experiencing a higher risk of predation [22,. 
A growing number of studies demonstrated that prey can assess their actual risk of predation and shape their antipredator effort accordingly in response to different degrees of predation threat, which supports the threat-sensitive predator avoidance hypothesis. The threat-sensitive predator avoidance hypothesis has been verified in many animals, including insects, crabs, fish, amphibians, reptiles, mammals and birds [23,28,. These studies have shown that prey usually exhibit different antipredator behavioral response intensities when attacked by predator species that pose different levels of predation risk. However, to our knowledge, this hypothesis has rarely been tested in small, burrowing, grassland herbivores in the wild. The plateau pika (Ochotona curzoniae) is a small, diurnal, social burrowing herbivorous lagomorph, which occurs in most areas above an altitude of 3300 m on the Tibetan Plateau. The pika is an ideal species to study the assessment of predation risk because it is preyed upon by nearly all of the predators occurring on the plateau. These predators include wolves (Canis lupus), Tibetan foxes (Vulpes ferrilata), snow leopards (Uncia uncia), brown bears (Ursus arctos), steppe polecats (Mustela eversmanni), Alpine weasels (Mustela altaica pallas), golden eagles (Aquila chrysaetos), upland buzzards (Buteo hemilasius), saker falcons (Falco cherrug), goshawks (Accipiter gentilis), black kites (Milvus migrans), little owls (Athene noctua) and large-billed crows (Corvus macrorhynchos tibetosinensis). Previous studies demonstrated that the Tibetan fox and the Saker falcon are regarded as the most threatening predators for pikas since the Tibetan fox is a pika specialist and pikas are a main food source of the Saker falcon (90% of pellets under the nest of a Saker falcon contained pika remains). Wolves and crows hunt pikas opportunistically or when other food is scarce, but generally do not pose a serious risk to pikas. 
In addition, a previous study found that pikas responded differently when they were presented with the calls of different predators. Therefore, it is believed that different types of predators represent different risk levels to pikas. Encounters between predator and prey are rarely observed in nature. For this reason, predator models have been evaluated using indirect studies. In this study, we conducted a field experiment to test the 'threat-sensitive predator avoidance hypothesis' using burrowing plateau pikas. We exposed the pikas to four of their common predators, the Tibetan fox, wolf (Canis lupus), Saker falcon and large-billed crow, representing different levels of predation risk to the pikas. We assumed that the Tibetan fox and Saker falcon are more threatening predator species than the wolf and large-billed crow based on whether pikas are the main food source for these predators. We hypothesized that the pika would have the ability to assess the level of predation risk and display different behavioral response intensities when exposed to different predator species models. Specifically, we predicted that: pikas would show a longer FID when exposed to a more threatening predator species model; the hiding time in a refuge would be longer after an unsuccessful 'attack' by a more threatening predator species model; and pikas would allocate more time to vigilance (vigilance is defined as the total duration of time that a pika has its head lifted above its back) when they re-emerge from a refuge after an unsuccessful 'attack' by a more threatening predator species model. Results When approached by a saker falcon, crow, fox or wolf, pikas showed FIDs of 16.8 m, 7.1 m, 8.8 m and 5.1 m, respectively (Fig. 1a, b; Fig. 2). Pikas spent 898 s, 263 s, 299 s and 248 s in the refuge, respectively, following an unsuccessful predation attempt by a saker falcon, crow, fox or wolf (Fig. 1a, b; Fig. 2). 
In addition, when re-emerging from the refuge, pikas spent about 74%, 57%, 61% and 56% of their time during the first 10 min on vigilance after an unsuccessful predation attempt by a saker falcon, crow, fox or wolf, respectively (Fig. 1a, b; Fig. 2). A mixed linear model analysis showed that SM (F = 7.492, p = 0.001) and GS (F = 34.864, p < 0.001) had significant effects on FID, while P (F = 0.058, p = 0.944) and EO (F = 0.907, p = 0.533) did not, and the interaction effect between SM and GS was significant (F = 6.187, p = 0.002). However, for the hiding time in the refuge, Kruskal-Wallis tests showed a significant difference across different predator species model treatments (p < 0.05). After the p-values were adjusted, we found no significant difference between wolf and crow (p = 1; Fig. 2), fox and crow (p = 0.163; Fig. 2) or between saker falcon and fox (p = 0.120; Fig. 2). However, there was a significant difference between wolf and fox (p = 0.004; Fig. 2), between wolf and saker falcon (p < 0.001; Fig. 2) and between crow and saker falcon (p < 0.001; Fig. 2). A mixed linear model analysis showed that SM (F = 6.329, p = 0.002) and GS (F = 16.684, p < 0.001) had significant effects on vigilance, while P (F = 0.780, p = 0.468) and EO (F = 1.288, p = 0.285) did not. The interaction effect of SM and GS (F = 3.573, p = 0.026) was also significant for vigilance. Discussion The results from our study provide evidence that pikas display different behavioral response intensities when exposed to different predator species models. The saker falcon is perceived as the greatest threat by pikas as it elicited the strongest anti-predator behavioral response, with the longest FID and hiding time in the refuge, and the highest vigilance percentage. 
Our results support the 'threat-sensitive predation avoidance hypothesis' that pikas have the ability to assess the degree of risk posed by a predator, and that responses are graded in intensity depending on the threat level perceived. Compared to previous studies, this is the first report to assess pika anti-predator behavior in response to the presence of different predator species. These results provide valuable information for biological control, in which one species can be suppressed by exploiting its interrelationships with another species. Prey minimize the cost of escape by remaining where they are until the potential cost of staying outweighs the benefits. This suggests that when a prey detects a predator early, it should delay escape attempts until some criterion relating escape costs to the cost of not fleeing is met. According to the escape theory, predators with a higher risk are associated with greater FID, while FID is expected to be shorter when predation risk is lower. Our results showed that the FID was strongly influenced by the SM. GS is known to affect the ability of prey animals to detect predators, which in turn alters the FID. We also found GS has a significant influence on FID. Prey often respond to predator attacks by hiding in their refuges or safe microhabitats. However, remaining in refuges can also incur fitness costs, and the decision of when to come out from a refuge after an unsuccessful attack by a predator is an important component of anti-predator behavior. There is a trade-off between a diminishing risk of predation over time spent in the refuge and an increasing risk of starvation while in the refuge. Cooper and Frederick demonstrated that the hiding time in a refuge should be longer when the perceived risk is higher. Our results are similar to previous studies, and support the view that the hiding time in a refuge changes with exposure to different predators that present different levels of risk. 
The level of vigilance is associated with predation risk and vigilance can increase the ability of prey to gather information about the current predation risk. In addition, the vigilance level of prey depends on the level of previous predation risk. In general, prey reduced foraging time and engaged in anti-predator behavior when the previous predation risk was high. Our results indicate that the vigilance level was significantly higher in response to a saker falcon compared to the other predators, which indicates pikas perceive the saker falcon as the greatest risk of our four test predator species. Aerobic movements of animals are energetically costly, especially on the QTP. The reduction of unnecessary aerobic movements lowers energetic costs and can increase the survival rate of pikas. Pikas have adapted to display varying anti-predator behavioral response intensities depending on the level of risk posed by different predators. The results of the present study indicate that the saker falcon is regarded as the most dangerous predator because pikas elicited the strongest anti-predator response (for example, the longest FID, the longest hiding time in a refuge and the highest vigilance percentage) when exposed to it. A possible explanation for the difference in responses elicited by the different predators is the difference in the approach speed of the different predator species. Zhang et al. suggested that raptors (eagle and falcon) are more threatening than beasts (fox and wolf) because raptors approach faster. In contrast, our results indicate that the threat of a fox is greater than that of a crow. Thus, a more likely explanation for the difference in behavioral response intensities is whether the pika is the main food resource for the specific predator. 
In addition, our results also indicate that the saker falcon poses a greater threat to pikas than the fox, implying that pikas are able to evaluate risk levels by assessing the predator visually and displaying stronger antipredator behavioral responses when facing a more threatening predator. The ability to discriminate between more and less dangerous predators can have significant advantages for pika survival. Many other animals also vary their behavioral response intensity depending on the predator species [23,28,, and this adaptation is a result of co-evolution with predators over millions of years. However, it is not known whether the ability of pikas to discriminate between predators is innate or based on experience; further studies are required to elucidate this. Predators play an important role in the control of pikas as the direct and indirect predation risk effects can significantly affect the fertility and survival of pikas. Over the past decades, plateau pikas have been targeted for control by poisoning primarily because they are believed to have a negative impact on rangeland and compete with livestock for food. An unfortunate consequence of these poisoning campaigns to kill pikas is that the predator species may inadvertently be poisoned. Besides that, many predators of pikas are being killed for profit. The situation is further exacerbated by the fact that pika fertility is far greater than that of its predators, and the pika population can recover rapidly to its original state in a short time, whereas predator numbers remain low due to the killing and poisoning campaigns. Essentially the natural mechanism of pika population control is eliminated from the system, and the pika populations continue to increase unchecked. Therefore, it is imperative that the poisoning campaigns and the killing of carnivores should be halted. 
Conclusions Our results show that pikas are able to discriminate between predator species which present different levels of risk and alter their response according to what is likely to be a larger threat. In addition, our results also provide support to previous studies suggesting that the saker falcon is the most threatening predator of pikas in the alpine meadow of the Qinghai-Tibet Plateau. Finally, given the critically important role of predators of pikas in controlling their population densities, we urge that the campaigns to poison pikas and the killing of their carnivore predators should be terminated. Study site The study site is located in a natural alpine meadow in Luqu County, Gansu province, northwestern China. Geographically, the study site is located on the eastern part of the Qinghai-Tibetan Plateau (lat. 34° 14 N; long. 102° 13 E; alt. 3650 m). The site has a typical alpine continental climate, with mean annual temperatures of 2.3 °C. The average annual precipitation is 543.6 mm, and occurs predominantly between June and September. The main soil type is subalpine meadow soil. The vegetation type is alpine meadow, and dominant species are Kobresia humilis, Elymus nutans, Festuca ovina L., Polygonum viviparum L., Anemone obtusiloba D. The inclination of the study site (plateau pika habitat) is about 13° on a western slope. In this area, the distribution of pika families is patchy and each family contains 4-7 individuals. In our study area, the range of the active area of a pika family is about 470-680 m2. Experiment design The experiments were conducted 15-29 June, 2016, after the breeding season. We randomly selected three different pika populations (P) which were spatially non-adjacent within our study site. 
Ten days before the start of the experiment, we placed two iron pillars (50 cm diameter, 3 m high) in each area, where one pillar was situated in the pika colony, the other was situated on the slope above the pika habitat, and the distance between the two pillars was 50 m (Fig. 3). The two pillars were connected by a rope that was strong enough to hold and slide the predator models. The height of the rope was adjusted depending on the predator species. We fixed an infrared high-definition camera (Huian WL-1008T, LED, 2-megapixel, 12.8, Progressive Scan CMOS, 1920 × 1080) that can rotate 360° on the pillar that was in the colony, and used a cable to connect it to a computer (Lenovo, G5050) in a tent that was 400 m away from the pika colony. During the experiments, the anti-predator behavior of the pikas was observed and recorded. We tested four different conditions: a wolf (length: 135 cm, width: 25 cm, height: 30 cm), a Tibetan fox (length: 50 cm, width: 15 cm, height: 35 cm), a large-billed crow (length: 10 cm, width: 5 cm, height: 15 cm) and a saker falcon (length: 45 cm, width: 150 cm, height: 25 cm). The four predator models served as the predator species models (SM) (Fig. 4). Each population was tested for 4 cycles (each cycle was 2 days long) and the interval between cycles was at least 2 days. A cycle consisted of presenting each of the four predators to a colony of pikas. The order (EO) of the predators was randomized to avoid habituation of the pikas to the experimental procedure, while the interval between different predators in a cycle was at least 3 h. In addition, we recorded the survey dates (SD) of SM in different P. During the experimental procedures, the predator models were placed on the rope and a person walking 80 m away, parallel to the model, dragged it from the upper pillar to the lower pillar inside the pika colony at a speed of 5 m/s (human activities affect the activity of pikas at distances closer than 30 m). 
When pikas hid in their burrows, the predator model was moved back up to the upper pillar. Tests were conducted in the morning during peak hours of pika activity (8:00-9:00) on a sunny day. Taking into account the height of the animal and its hunting style, we adjusted the height to 40 cm, 90 cm, 120 cm and 130 cm for the Tibetan fox, wolf, large-billed crow and Saker falcon, respectively. Trials were stopped if there were predators in the surrounding environment. We analyzed the videos at one quarter speed and scored the hiding time and vigilance using JWatcher 1.5.0. In our experiments, we only observed adult pikas whose vigilance direction was opposite to that of the approaching predator model to determine the FID, because vigilance direction can influence the FID. In addition, group size (GS) was quantified as it can also influence FID. When all experiments were analyzed, we measured the FID and the refuge distance (RD) for each observed pika to the nearest 0.1 m. The hiding time was defined as the period from the first adult retreating to the first adult pika re-emerging from its burrow. Finally, we measured the vigilance percentage during the first ten minutes after a pika left the burrow entrance. Vigilance is the total duration of time that a pika has its head lifted above its back, regardless of whether it was quadrupedal or bipedal. 
The effect of SM on the FID was analyzed using a mixed linear model with GS and RD as covariates, P, SD and EO as random variables and SM as a fixed variable. RD and SD were not included as predictors in the LMMs because GS and RD, and SD and EO, were highly collinear. We then fit a model without RD and SD to test for the main effects. The effect of SM on vigilance was analyzed using a mixed linear model with GS as a covariate, P, SD and EO as random variables and SM as a fixed variable. SD was not included as a predictor in the LMMs because SD and EO were highly collinear. We then fit a model without SD to test for the main effects. All interactions among these variables were included in the model and removed if not significant. However, hiding time was not normally distributed despite multiple transformations; we therefore used nonparametric tests (Kruskal-Wallis) followed by all pairwise multiple comparisons. |
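The Kruskal-Wallis step used above for hiding time can be illustrated with a small hand-rolled computation (the hiding times below are made up for the example, not the paper's data; the H statistic is compared against the chi-square critical value for df = 2, alpha = 0.05):

```python
# Kruskal-Wallis H test from scratch: pool all observations, assign average
# ranks (ties share the mean of their rank positions), then compute
# H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1).
def kruskal_wallis_h(*groups):
    pooled = sorted(v for g in groups for v in g)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of rank positions i+1 .. j
        i = j
    n = len(pooled)
    return 12 / (n * (n + 1)) * sum(
        sum(ranks[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)


# Illustrative hiding times (seconds) per predator treatment, made up so that
# the falcon group is clearly separated, as in the reported pattern.
hiding_wolf = [240, 246, 248, 251, 255]
hiding_crow = [258, 260, 262, 265, 270]
hiding_falcon = [880, 890, 898, 905, 910]
H = kruskal_wallis_h(hiding_wolf, hiding_crow, hiding_falcon)
significant = H > 5.991  # chi-square critical value, df = 2, alpha = 0.05
```

A significant H would then be followed by pairwise comparisons with adjusted p-values, as in the paper.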
package com.biniek.sylwia.product;
import javax.persistence.Entity;
import javax.persistence.Id;
import com.fasterxml.jackson.annotation.JsonInclude;
@Entity
@JsonInclude(JsonInclude.Include.NON_NULL)
public class Product {
private @Id Long creditId;
private String productName;
private int value;
public Product() {
}
public Product(String productName, int value) {
this.productName = productName;
this.value = value;
}
public Product(Long creditId, String productName, int value) {
this.creditId = creditId;
this.productName = productName;
this.value = value;
}
public void setCreditId(Long creditId){
this.creditId = creditId;
}
public Long getCreditId() {
return creditId;
}
public String getProductName() {
return productName;
}
public int getValue() {
return value;
}
}
|
Russia's official opposition parties stand accused of cutting a Kremlin deal to secure Vladimir Putin's third term.
Moscow, Russia - When Russians head to the polls this Sunday to elect a new parliament, they will have a lot on their minds. Well, at least those who have decided not to vote for the ruling party, United Russia. And, judging by the latest opinion polls, that could be at least 60 per cent of voters, as only about 30 to 40 per cent of people are ready to cast their ballots in favour of the party in power.
But other than these unusually low ratings (United Russia won 64.3 per cent of the vote in 2007's election), you would be forgiven for thinking time has stood still over the past four years of Russia's political life. All seven parties running this time contested the 2007 elections (one under a different name). And the same four parties now represented in the state Duma are again expected to pass the seven per cent threshold and secure seats.
Opposing the ruling United Russia party are the Communist Party of Russia, the ultranationalist Liberal Democratic Party, the social-democratic A Just Russia, the liberal Yabloko and Right Cause parties, and the left-leaning Patriots of Russia.
So on the surface, Russian voters face a full-spectrum of political parties to choose from to represent them in the country's top legislative body. But critics say this picture of a functioning multi-party democracy does not match the reality. Real opposition, they say, does not exist, claiming that Prime Minister Vladimir Putin's notion of a "managed democracy" has also extended to "managing" the opposition.
Critics also say this seeming plurality does not actually translate into a real debate in parliament or diversity of legislative initiatives. And that's because, even combined, the three parties that made it to the previous Duma have been no match for the dominant United Russia. In the previous Duma, the party in power held 315 of 450 seats, a two-thirds majority, which allowed it to pass any bill proposed by the Kremlin. In fact, the previous Duma was so obedient that it passed more bills that came from the executive branch of the government than those the lawmakers proposed themselves.
This dominance of United Russia and its readiness to toe the Kremlin line has also allowed the Duma to shoot down any initiatives that came from the opposition and did not sit well with Russia's rulers, for example, a bill on progressive taxation, pushed by the social-democratic A Just Russia. The country's oligarchs, the bill said, must pay more than the flat 13 per cent rate currently required of all income-earners in Russia. "The bill failed," says party member and Duma deputy Aleksandr Burkov, because "the 'trade union' of oligarchs and bureaucrats - United Russia - voted against it".
But United Russia says this dominance and ability to push through laws quickly is what saved Russia from experiencing the full blow of the ongoing financial crisis. Last week, Putin told the United Russia faction that if "leading political forces" could not come to an agreement, the country would risk a Greek-style debt crisis.
"If we fragment our parliament, it will not be able to make the necessary decisions in the necessary time ... At some point this will drag us to the line behind which our friends and partners in Europe now find themselves," he said.
This argument, however, does not hold up with the opposition - who accuse the Kremlin and United Russia of protecting the rich. Gennadiy Zyuganov, the leader of the Communists, Russia's largest and oldest party, said: "The financial-economic course of the country is designed not to protect the interests of workers, but to serve the oligarchs, whose party is United Russia."
And the flamboyant leader of the nationalist LDPR, Vladimir Zhirinovsky, does not mince his words either. "Your faction in the Duma is made up half of criminal businessmen and half of former KGB agents," he yelled at a United Russia member during a recent televised debate, dangling a pair of handcuffs. "You've been fleecing this country for 12 years. You've been lying for 12 years and breaking all the promises you've given to improve the people's lives."
But for other opposition voices, such criticism and debates simply amount to a political show. They say the opposition has been tamed by the Kremlin and it is only allowed to exist with its approval. These critics have even come up with a special term for the parliamentary parties (those in the Duma). They call them "systemic opposition" or part of the system created by Putin. So it is, they say, only logical to call themselves, "non-systemic" or "the real opposition" in Russia.
These critical voices include Boris Nemtsov, a former Russian prime minister and a co-leader of the Party of People's Freedom (Parnas). In June this year, Russian election officials rejected the party's registration documents, effectively barring it from contesting the poll.
Nemtsov says the decision was politically motivated, because the Kremlin could not bear to give the go-ahead to a party that accuses Putin of creating a deeply corrupt system that benefits him personally as well as a tight circle of his close associates. "All the parties that are allowed to participate in these elections are not independent," writes Nemtsov in his blog.
His fellow opposition leader, former world chess champion Garry Kasparov, agrees: "All these parties are 100 per cent under Kremlin control. We know these people, they are not newcomers, they have been around for 20 years. It is clear they are no longer in the position to challenge the Kremlin. Voting for them is to vote for puppets in this theatre of the absurd."
And as if to support these assertions, just three days before the vote, a high-ranking election official told the English-language daily The Moscow Times that United Russia had struck a deal under which the other three parliamentary parties (the Communists, the LDPR, and A Just Russia) are "pretending to play the opposition" in exchange for guarantees that they will secure seats in the next Duma. "They want to preserve the status quo," he said. "And to achieve that they have agreed to play the roles the Kremlin has given them in this farce." The official asked for anonymity for fear of reprisal.
And this is why, in the run-up to these elections, the nation's non-systemic opposition parties have been calling on Russians not to vote for any political party. But in Russia, the "None of the Above" option on a ballot paper was abolished in 2006, creating somewhat of a dilemma for opposition politicians. For months, they argued about what election strategy to adopt, crossing their virtual swords on the pages of their blogs.
On one hand, there is Alexei Navalny, a Yale-educated lawyer, famous blogger, whistleblower and anti-corruption activist who coined the phrase "The Party of Swindlers and Thieves" to describe United Russia. His motto is "Anyone But United Russia."
"The correct strategy," he said - back in September during the opposition's debates on election strategy - "would be to create a united political space and to act against United Russia together, all of us, systemic and non-systemic opposition."
Just three days before the vote, Navalny wrote a detailed blog entry urging civic-minded Russians to call on at least five people they know to vote against United Russia. "Talk to your grandma and talk to your auntie," he writes. "Print out flyers, send out text messages," even providing links to a collection of anti-United Russia posters. "Don't campaign for any specific party. Only urge to vote against the Party of Swindlers and Thieves and its political monopoly."
But Nemtsov and others say this strategy is naïve, and that the only way to protest against Putin's United Russia is by spoiling ballot papers - actually putting a cross in the boxes next to the titles of all seven parties. "To vote for any party - those who will get into the Duma and those who won't - is to give legitimacy to the disgusting farce that these elections have become," writes Nemtsov.
He has even created a group, with other fellow opposition-minded liberals - such as political satirist Viktor Shenderovich - named Nakh-Nakh. It references a popular fairy tale about three little piglets, one of whom was called "Naf-Naf". It is also a play on a strong Russian expletive for refusal. The Cyrillic "KH" corresponds to the Latin "X" - a symbol with which they want voters to void their ballots.
They have even come up with a series of online cartoons, dubbed "Common Adventures of the Piglet Nakh-Nakh in Putin's Russia". One shows the piglet going to a polling station and seeing Putin's face everywhere, even on the ballot paper. Enraged, the piglet puts a cross in all the boxes next to all parties. "Put a cross on the crooks in power," urges a voice at the end of the clip. "Vote against all! Vote for Russia!"
Another shows the piglet trying to get on a merry-go-round - with the faces of the same Russian politicians, leaders of the parliamentary parties, simply changing places as Putin sits atop the fairground ride.
Those cartoons attempt to portray what Russia's opposition says is wrong with the kind of "managed democracy" that Russia seems to have become: a fixed set of political parties appearing likely to again secure Duma seats, but with little real ability to challenge the ruling party; opposition forces that have no way of contesting the elections, fragmented by internal arguments and prevented from presenting a unified front; and, finally, satire and the internet seemingly the only space where genuine political debate is still allowed to take place in Putin's Russia. |
Turning for Ulcer Reduction (TURN) Study: An Economic Analysis. BACKGROUND The Turning for Ulcer Reduction (TURN) study was a multisite, randomized controlled trial that aimed to determine the optimal frequency of turning nursing facility residents with mobility limitations who are at moderate and high risk for pressure ulcer (PrU) development. Here we present data from the economic analysis. OBJECTIVES This economic analysis aims to estimate the economic consequences for Ontario of switching from a repositioning schedule of 2-hour intervals to a schedule of 3-hour or 4-hour intervals. DATA SOURCES Costs considered in the analysis included those associated with nursing staff time spent repositioning residents and with incontinent care supplies, which included briefs, barrier cream, and washcloths. RESULTS The total economic benefit of switching to 3-hour or 4-hour repositioning is estimated to be $11.05 or $16.74 per day, respectively, for every resident at moderate or high risk of developing PrUs. For a typical facility with 123 residents, 41 (33%) of whom are at moderate or high risk of developing PrUs, the total economic benefit is estimated to be $453 daily for 3-hour or $686 daily for 4-hour repositioning. For Ontario as a whole, assuming that there are 77,933 residents at 634 LTC facilities, 25,927 (33%) of whom are at moderate or high risk of developing PrUs, the total economic benefits of switching to 3-hour or 4-hour repositioning are estimated to be $286,420 or $433,913 daily, respectively, equivalent to $104.5 million or $158.4 million per year. LIMITATIONS We did not consider the savings the Ontario Ministry of Health and Long-Term Care might incur should less frequent repositioning reduce the incidence of work-related injury among nursing staff, so our findings are potentially conservative. 
CONCLUSIONS A switch to 3-hour or 4-hour repositioning appears likely to yield substantial economic benefits to Ontario without placing residents at greater risk of developing PrUs. |
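The per-facility figures above follow directly from the per-resident daily estimates; a quick arithmetic check (the published province-wide daily totals differ slightly from the simple products, which suggests unrounded per-resident inputs were used in the original analysis):

```python
# Per-resident daily benefit of switching repositioning intervals,
# as reported in the TURN economic analysis.
BENEFIT_3H = 11.05   # $/resident/day for 3-hour repositioning
BENEFIT_4H = 16.74   # $/resident/day for 4-hour repositioning

def daily_benefit(at_risk_residents, per_resident_benefit):
    """Total daily economic benefit for a group of at-risk residents."""
    return at_risk_residents * per_resident_benefit

# Typical facility: 123 residents, 41 (33%) at moderate/high PrU risk.
facility_3h = daily_benefit(41, BENEFIT_3H)   # about $453/day
facility_4h = daily_benefit(41, BENEFIT_4H)   # about $686/day

# Annualizing the reported province-wide daily total for 3-hour
# repositioning recovers the stated ~$104.5 million per year.
annual_3h = 286_420 * 365
```

Running the check confirms the facility-level and annualized figures reported in the Results section.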
Behavioral Healthcare Providers' Experiences Related to Use of Telehealth as a Result of the COVID-19 Pandemic: An Exploratory Study Background: Due to the COVID-19 pandemic, healthcare providers were forced to shift many services quickly from in-person to virtual, including substance use disorder (SUD) and mental health (MH) treatment services. This led to a sharp increase in the use of telehealth services, with health systems seeing patients virtually at hundreds of times the pre-pandemic rate. By analyzing qualitative data about SUD and MH care providers' experiences using telehealth, this study aims to elucidate emergent themes related to telehealth use by the front-line behavioral health workforce. Methods: This study uses qualitative data from large-scale web surveys distributed to SUD and MH providers between May and August 2020. At the end of these surveys, the following question was posed in free-response form: "Is there anything else you would like to say about use of telehealth during or after the COVID-19 pandemic?" The 391 responses to this question were analyzed for emergent themes using a conventional approach to content analysis. Results: Three major themes emerged in the data: COVID-specific experiences with telehealth, general experiences with telehealth, and recommendations to continue telehealth delivery. Convenience, access to new populations, and lack of commute were frequently cited advantages, while perceived ineffectiveness of and limited access to technology were frequently cited disadvantages. Also commonly mentioned was the relaxation of reimbursement regulations. Providers supported continuation of relaxed regulations, increased institutional support, and using a combination of telehealth and in-person care in their practices. Conclusions: This study advanced our knowledge of how the behavioral health workforce experiences telehealth delivery.
Further longitudinal research comparing treatment outcomes of those receiving in-person and virtual services will be necessary to undergird organizations' financial support, and perhaps also legislative support, of virtual SUD and MH services. |
# coding: utf8
# Copyright (c) 2010-2011, Found IT A/S and Piped Project Contributors.
# See LICENSE for details.
import datetime
from StringIO import StringIO
import mock
from twisted.trial import unittest
from twisted.internet import defer
from piped import exceptions
from piped.processors import web_processors
from piped.providers.test import test_web_provider
class StubRequest(object):
def __init__(self):
self.headers = dict()
self.code = None
self.data = None
self.finished = False
self.channel = Ellipsis
def setResponseCode(self, code):
self.code = code
def setHeader(self, key, value):
self.headers[key] = value
def getHeader(self, key):
return self.headers.get(key)
def getClientIP(self):
return 'client-ip'
def write(self, data):
self.data = data
def finish(self):
self.finished = True
class TestResponseWriter(unittest.TestCase):
def test_response_code(self):
codes = dict()
# We test one code from each group of 100
codes['SWITCHING'] = 101
codes['OK'] = 200
codes['MULTIPLE_CHOICE'] = 300
codes['BAD_REQUEST'] = 400
codes['INTERNAL_SERVER_ERROR'] = 500
for code_name, code_value in codes.items():
processor_str_code = web_processors.ResponseWriter(response_code=code_name)
processor_int_code = web_processors.ResponseWriter(response_code=code_value)
self.assertEquals(processor_str_code.response_code, code_value)
self.assertEquals(processor_int_code.response_code, code_value)
self.assertRaises(exceptions.ConfigurationError, web_processors.ResponseWriter, response_code='NONEXISTING HTTP CODE')
def test_simple_processing(self):
request = StubRequest()
processor = web_processors.ResponseWriter()
processor.process(dict(request=request, content='some data'))
self.assertEquals(request.data, 'some data')
self.assertTrue(request.finished)
def test_encoding(self):
request = StubRequest()
processor = web_processors.ResponseWriter()
processor.process(dict(request=request, content=u'¡some data!'))
self.assertEquals(request.data, u'¡some data!'.encode('utf8'))
self.assertTrue(request.finished)
def test_fallback(self):
request = StubRequest()
processor = web_processors.ResponseWriter(fallback_content='some data')
processor.process(dict(request=request))
self.assertEquals(request.data, 'some data')
class TestSetHttpHeaders(unittest.TestCase):
def test_setting_expected_headers(self):
request = StubRequest()
expected_headers = dict(x_found='was-here', content_type='text/plain')
processor = web_processors.SetHttpHeaders(headers=expected_headers)
baton = dict(request=request)
processor.process(baton)
self.assertEqual(request.headers, expected_headers)
def test_not_removing_existing_headers(self):
request = StubRequest()
request.headers['existing'] = 'leave me be'
expected_headers = dict(x_found='was-here', content_type='text/plain')
processor = web_processors.SetHttpHeaders(headers=expected_headers)
baton = dict(request=request)
processor.process(baton)
expected_headers['existing'] = 'leave me be'
self.assertEqual(request.headers, expected_headers)
def test_overwriting_existing_headers(self):
request = StubRequest()
request.headers['existing'] = 'do not leave me be'
expected_headers = dict(x_found='was-here', content_type='text/plain', existing='overwritten')
processor = web_processors.SetHttpHeaders(headers=expected_headers)
baton = dict(request=request)
processor.process(baton)
expected_headers['existing'] = 'overwritten'
self.assertEqual(request.headers, expected_headers)
class TestSetExpireHeader(unittest.TestCase):
def test_setting_the_right_headers(self):
request = StubRequest()
processor = web_processors.SetExpireHeader(timedelta=dict(days=1, hours=1, minutes=1, seconds=1))
pretended_today = datetime.datetime(2011, 4, 1)
with mock.patch('piped.processors.web_processors.datetime.datetime') as mocked_datetime:
mocked_datetime.now.return_value = pretended_today
processor.process(dict(request=request))
expected_headers = {'cache-control': 'public,max-age=90061', 'expires': 'Sat, 02 Apr 2011 01:01:01'}
self.assertEqual(request.headers, expected_headers)
class TestIPDeterminer(unittest.TestCase):
def test_getting_client_ip(self):
request = StubRequest()
processor = web_processors.IPDeterminer()
baton = dict(request=request)
processor.process(baton)
self.assertEquals(baton['ip'], 'client-ip')
def test_getting_proxied_ip(self):
request = StubRequest()
request.setHeader('x-forwarded-for', 'some proxy')
processor = web_processors.IPDeterminer(proxied=True)
baton = dict(request=request)
processor.process(baton)
self.assertEquals(baton['ip'], 'some proxy')
def test_getting_proxied_ip_with_custom_header(self):
request = StubRequest()
request.setHeader('not-exactly-x-forwarded-for', 'some other proxy')
processor = web_processors.IPDeterminer(proxied=True, proxy_header='not-exactly-x-forwarded-for')
baton = dict(request=request)
processor.process(baton)
self.assertEquals(baton['ip'], 'some other proxy')
def test_getting_proxied_ip_with_no_proxy_header(self):
request = StubRequest()
processor = web_processors.IPDeterminer(proxied=True)
baton = dict(request=request)
processor.process(baton)
self.assertEquals(baton['ip'], 'client-ip')
def test_failing_when_getting_proxied_ip_with_no_proxy_header_and_should_fail(self):
request = StubRequest()
processor = web_processors.IPDeterminer(proxied=True, fail_if_not_proxied=True)
baton = dict(request=request)
exc = self.assertRaises(exceptions.PipedError, processor.process, baton)
self.assertEquals(exc.msg, 'could not determine IP from proxy-header')
class TestExtractRequestArguments(unittest.TestCase):
def get_processor(self, **kwargs):
return web_processors.ExtractRequestArguments(**kwargs)
@defer.inlineCallbacks
def test_simple_extract(self):
request = test_web_provider.DummyRequest([])
request.addArg('foo', 42)
request.args['bar'] = [1, 2] # a multivalued argument
baton = dict(request=request)
processor = self.get_processor(mapping=['foo', 'bar'])
yield processor.process(baton)
# only the first value of 'bar' is used by default:
self.assertEquals(baton, dict(request=request, foo=42, bar=1))
@defer.inlineCallbacks
def test_get_all_arguments(self):
request = test_web_provider.DummyRequest([])
request.addArg('foo', 42)
request.args['bar'] = [1, 2] # a multivalued argument
baton = dict(request=request)
processor = self.get_processor(mapping=['foo', dict(bar=dict(only_first=False))])
yield processor.process(baton)
# all values of bar should be returned as a list:
self.assertEquals(baton, dict(request=request, foo=42, bar=[1,2]))
@defer.inlineCallbacks
def test_nonexistent(self):
request = test_web_provider.DummyRequest([])
request.addArg('foo', 42)
baton = dict(request=request)
processor = self.get_processor(mapping=['foo', 'nonexistent'])
yield processor.process(baton)
# the nonexistent value should be skipped
self.assertEquals(baton, dict(request=request, foo=42))
@defer.inlineCallbacks
def test_nonexistent_without_skipping(self):
request = test_web_provider.DummyRequest([])
request.addArg('foo', 42)
baton = dict(request=request)
processor = self.get_processor(mapping=['foo', 'nonexistent'], skip_if_nonexistent=False)
yield processor.process(baton)
# the nonexistent value should be replaced by the default fallback
self.assertEquals(baton, dict(request=request, foo=42, nonexistent=None))
@defer.inlineCallbacks
def test_nonexistent_with_fallback(self):
request = test_web_provider.DummyRequest([])
request.addArg('foo', 42)
baton = dict(request=request)
processor = self.get_processor(mapping=['foo', 'nonexistent'], input_fallback='fallback', skip_if_nonexistent=False)
yield processor.process(baton)
# the nonexistent value should be replaced by the fallback
self.assertEquals(baton, dict(request=request, foo=42, nonexistent='fallback'))
@defer.inlineCallbacks
def test_loading_json(self):
request = test_web_provider.DummyRequest([])
request.addArg('foo', 42)
request.args['bar'] = ['{"loaded": 42}', '{"loaded": 93}'] # a multivalued argument containing json
baton = dict(request=request)
processor = self.get_processor(mapping=['foo', dict(bar=dict(load_json=True, only_first=False))])
yield processor.process(baton)
self.assertEquals(baton, dict(request=request, foo=42, bar=[dict(loaded=42), dict(loaded=93)]))
@defer.inlineCallbacks
def test_loading_invalid_json(self):
request = test_web_provider.DummyRequest([])
request.addArg('foo', '')
baton = dict(request=request)
processor = self.get_processor(mapping=[dict(foo=dict(load_json=True))])
try:
yield processor.process(baton)
self.fail('Expected ValueError to be raised.')
except Exception as e:
self.assertIsInstance(e, ValueError)
self.assertEquals(e.args, ('No JSON object could be decoded',))
class TestClientGetPage(unittest.TestCase):
def _create_processor(self, **config):
processor = web_processors.ClientGetPage(**config)
return processor
    @defer.inlineCallbacks
    def test_arguments(self):
processor = self._create_processor(base_url='http://example.com/', timeout=lambda: defer.succeed(80),
postdata=StringIO('my data'), headers=lambda: dict(foo='bar'), cookies=defer.succeed(dict(cookies='yum')))
with mock.patch.object(web_processors, 'client') as mocked_client:
mocked_client.getPage.return_value = 'mocked page'
baton = yield processor.process(dict(url=['foo', 'bar', 'baz']))
self.assertEquals(mocked_client.getPage.call_count, 1)
args, kwargs = mocked_client.getPage.call_args_list[0]
# we avoid using positional arguments
self.assertEquals(args, ())
# the base url was used, and the url was flattened
self.assertEquals(kwargs['url'], 'http://example.com/foo/bar/baz')
# the postdata was extracted from a buffer
self.assertEquals(kwargs['postdata'], 'my data')
# headers were retrieved from a callable
self.assertEquals(kwargs['headers'], dict(foo='bar'))
# the timeout comes from an async function
self.assertEquals(kwargs['timeout'], 80)
# cookies comes from a deferred
self.assertEquals(kwargs['cookies'], dict(cookies='yum'))
# the resulting page is set in the baton
self.assertEquals(baton['page'], 'mocked page') |
# A single leading underscore marks _var as internal by convention only;
# Python does not enforce access restrictions on it (only double-underscore
# names are name-mangled).
class Program:
    _var = "hello"

class NewGram(Program):
    def __init__(self):
        print(self._var)  # subclasses can read the "protected" attribute

# NewGram()
# print(Program.__var)  # would raise AttributeError: no such attribute
print(Program._var) |
package dragon.ir.kngbase;
import dragon.ir.index.IRSignatureIndexList;
import dragon.matrix.DoubleSuperSparseMatrix;
import dragon.matrix.IntSparseMatrix;
import dragon.nlp.Counter;
import dragon.nlp.Token;
import dragon.util.MathUtil;
import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Random;
/**
* <p>A program for topic signature model estimation</p>
* <p>This program finds a set of weighted terms to represent the semantics of a topic signature. A topic signature
* can be anything here including multiword phrases, concept pairs, and individual terms. There are two approaches
 * to the model estimation. One is the maximum likelihood estimator. The other uses an EM algorithm if one can provide
 * the distribution of individual terms on a corpus. See more details about the EM algorithm in our previous work.<br><br>
* <NAME>., <NAME>., and <NAME>., <em>Semantic Smoothing of Document Models for Agglomerative Clustering</em>,
* In the Twentieth International Joint Conference on Artificial Intelligence(IJCAI 07), Hyderabad, India, Jan 6-12,
* 2007, pp. 2928-2933<br><br>One should provide indices for topic signatures and individual terms, respectively, or
* give the occurrence matrix of topic signatures and individual terms.</p>
* <p>Copyright: Copyright (c) 2005</p>
* <p>Company: IST, Drexel University</p>
* @author <NAME>
* @version 1.0
*/
public class TopicSignatureModel{
private IRSignatureIndexList srcIndexList;
private IRSignatureIndexList destIndexList;
private IntSparseMatrix srcSignatureDocMatrix;
private IntSparseMatrix destDocSignatureMatrix;
private IntSparseMatrix cooccurMatrix;
private boolean useDocFrequency;
private boolean useMeanTrim;
private boolean useEM;
private double probThreshold;
private double bkgCoeffi;
private int[] buf;
private int iterationNum;
private int totalDestSignatureNum;
private int DOC_THRESH;
/**
* The constructor for the mode of the maximum likelihood estimator
 * @param srcIndexList the statistics of topic signatures in the collection.
* @param srcSignatureDocMatrix the doc-term matrix for topic signatures.
* @param destDocSignatureMatrix the doc-term matrix for individual terms topic signatures will translate to.
*/
public TopicSignatureModel(IRSignatureIndexList srcIndexList, IntSparseMatrix srcSignatureDocMatrix, IntSparseMatrix destDocSignatureMatrix) {
this.srcIndexList =srcIndexList;
this.srcSignatureDocMatrix=srcSignatureDocMatrix;
this.destDocSignatureMatrix=destDocSignatureMatrix;
useDocFrequency=true;
useMeanTrim=true;
probThreshold=0.001;
useEM=false;
iterationNum =15;
bkgCoeffi =0.5;
totalDestSignatureNum=destDocSignatureMatrix.columns();
}
/**
* The constructor for the mode of the maximum likelihood estimator
 * @param srcIndexList the statistics of topic signatures in the collection.
 * @param cooccurMatrix the cooccurrence matrix of topic signatures and individual terms.
*/
public TopicSignatureModel(IRSignatureIndexList srcIndexList, IntSparseMatrix cooccurMatrix) {
this.srcIndexList =srcIndexList;
this.cooccurMatrix =cooccurMatrix;
useMeanTrim=true;
probThreshold=0.001;
useEM=false;
iterationNum =15;
bkgCoeffi =0.5;
totalDestSignatureNum=cooccurMatrix.columns();
}
/**
* The constructor for the mode of EM algorithm
 * @param srcIndexList the statistics of topic signatures in the collection.
 * @param destIndexList the statistics of individual terms in the collection.
 * @param cooccurMatrix the cooccurrence matrix of topic signatures and individual terms.
*/
public TopicSignatureModel(IRSignatureIndexList srcIndexList, IRSignatureIndexList destIndexList, IntSparseMatrix cooccurMatrix) {
this.srcIndexList =srcIndexList;
this.destIndexList =destIndexList;
this.cooccurMatrix =cooccurMatrix;
useMeanTrim=true;
probThreshold=0.001;
useEM=true;
iterationNum =15;
bkgCoeffi =0.5;
totalDestSignatureNum=cooccurMatrix.columns();
}
/**
* The constructor for the mode of EM algorithm
 * @param srcIndexList the statistics of topic signatures in the collection.
 * @param srcSignatureDocMatrix the doc-term matrix for topic signatures.
 * @param destIndexList the statistics of individual terms in the collection.
 * @param destDocSignatureMatrix the doc-term matrix for individual terms topic signatures will translate to.
*/
public TopicSignatureModel(IRSignatureIndexList srcIndexList, IntSparseMatrix srcSignatureDocMatrix, IRSignatureIndexList destIndexList, IntSparseMatrix destDocSignatureMatrix) {
this.srcIndexList =srcIndexList;
this.srcSignatureDocMatrix=srcSignatureDocMatrix;
this.destIndexList =destIndexList;
this.destDocSignatureMatrix=destDocSignatureMatrix;
useDocFrequency=true;
useMeanTrim=true;
probThreshold=0.001;
useEM=true;
iterationNum =15;
bkgCoeffi =0.5;
totalDestSignatureNum=destDocSignatureMatrix.columns();
}
public void setUseEM(boolean option){
this.useEM=option;
}
public boolean getUseEM(){
return useEM;
}
public void setEMBackgroundCoefficient(double coeffi){
this.bkgCoeffi =coeffi;
}
public double getEMBackgroundCoefficient(){
return this.bkgCoeffi;
}
public void setEMIterationNum(int iterationNum){
this.iterationNum =iterationNum;
}
public int getEMIterationNum(){
return this.iterationNum;
}
public void setUseDocFrequency(boolean option){
this.useDocFrequency=option;
}
public boolean getUseDocFrequency(){
return useDocFrequency;
}
public void setUseMeanTrim(boolean option){
this.useMeanTrim =option;
}
public boolean getUseMeanTrim(){
return useMeanTrim;
}
public void setProbThreshold(double threshold){
this.probThreshold =threshold;
}
public double getProbThreshold(){
return probThreshold;
}
public boolean genTransMatrix(int minDocFrequency,String matrixPath, String matrixKey){
ArrayList tokenList;
DoubleSuperSparseMatrix outputTransMatrix, outputTransTMatrix;
File file;
Token curToken;
String transIndexFile, transMatrixFile;
String transTIndexFile, transTMatrixFile;
int cellNum,rowNum;
int i, j;
transIndexFile=matrixPath+"/"+matrixKey+".index";
transMatrixFile=matrixPath+"/"+matrixKey+".matrix";
transTIndexFile=matrixPath+"/"+matrixKey+"t.index";
transTMatrixFile=matrixPath+"/"+matrixKey+"t.matrix";
file=new File(transMatrixFile);
if(file.exists()) file.delete();
file=new File(transIndexFile);
if(file.exists()) file.delete();
file=new File(transTMatrixFile);
if(file.exists()) file.delete();
file=new File(transTIndexFile);
if(file.exists()) file.delete();
outputTransMatrix=new DoubleSuperSparseMatrix(transIndexFile, transMatrixFile,false,false);
outputTransMatrix.setFlushInterval(Integer.MAX_VALUE);
outputTransTMatrix=new DoubleSuperSparseMatrix(transTIndexFile, transTMatrixFile,false,false);
outputTransTMatrix.setFlushInterval(Integer.MAX_VALUE);
cellNum=0;
rowNum=srcIndexList.size();
buf=new int[totalDestSignatureNum];
if(destDocSignatureMatrix!=null)
this.DOC_THRESH=computeDocThreshold(destDocSignatureMatrix);
for(i=0;i<rowNum;i++){
if(i%1000==0) System.out.println((new java.util.Date()).toString()+" Processing Row#"+i);
if (srcIndexList.getIRSignature(i).getDocFrequency() < minDocFrequency) continue;
if (cooccurMatrix!=null && cooccurMatrix.getNonZeroNumInRow(i)<5) continue;
tokenList=genSignatureTranslation(i);
for (j = 0; j <tokenList.size(); j++) {
curToken=(Token)tokenList.get(j);
outputTransMatrix.add(i,curToken.getIndex(),curToken.getWeight());
outputTransTMatrix.add(curToken.getIndex(), i, curToken.getWeight());
}
cellNum+=tokenList.size();
tokenList.clear();
if(cellNum>=5000000){
// flush both matrices to disk periodically so the in-memory buffers stay bounded
outputTransTMatrix.flush();
outputTransMatrix.flush();
cellNum=0;
}
}
outputTransTMatrix.finalizeData();
outputTransTMatrix.close();
outputTransMatrix.finalizeData();
outputTransMatrix.close();
return true;
}
public ArrayList genSignatureTranslation(int srcSignatureIndex){
ArrayList tokenList;
int[] arrDoc;
if(srcSignatureDocMatrix!=null){
arrDoc = srcSignatureDocMatrix.getNonZeroColumnsInRow(srcSignatureIndex);
if (arrDoc.length > DOC_THRESH)
tokenList = computeDistributionByArray(arrDoc);
else
tokenList = computeDistributionByHash(arrDoc);
}
else
tokenList=computeDistributionByCooccurMatrix(srcSignatureIndex);
if(useEM)
tokenList=emTopicSignatureModel(tokenList);
return tokenList;
}
private int computeDocThreshold(IntSparseMatrix doctermMatrix){
// heuristic cutoff: rows touching more documents than this are counted with a dense
// array (computeDistributionByArray); smaller ones use a hash map (computeDistributionByHash)
return (int)(doctermMatrix.columns()/computeAvgTermNum(doctermMatrix)/8.0);
}
private double computeAvgTermNum(IntSparseMatrix doctermMatrix){
Random random;
int i, num, index;
double sum;
random=new Random();
num=Math.min(50,doctermMatrix.rows());
sum=0;
for(i=0;i<num;i++){
index=random.nextInt(doctermMatrix.rows());
sum+=doctermMatrix.getNonZeroNumInRow(index);
}
return sum/num;
}
private ArrayList computeDistributionByCooccurMatrix(int signatureIndex){
ArrayList list;
Token curToken;
int[] arrIndex, arrFreq;
int i;
double rowTotal, mean;
rowTotal=0;
arrIndex=cooccurMatrix.getNonZeroColumnsInRow(signatureIndex);
arrFreq=cooccurMatrix.getNonZeroIntScoresInRow(signatureIndex);
for(i=0;i<arrFreq.length;i++)
rowTotal+=arrFreq[i];
if(useMeanTrim)
mean=rowTotal/arrFreq.length;
else
mean=0.5;
if(mean<rowTotal*getMinInitProb())
mean=rowTotal*getMinInitProb();
rowTotal=0;
list=new ArrayList();
for(i=0;i<arrFreq.length;i++){
if(arrFreq[i]>=mean){
list.add(new Token(arrIndex[i],arrFreq[i]));
rowTotal+=arrFreq[i];
}
}
for(i=0;i<list.size();i++){
curToken=(Token)list.get(i);
curToken.setWeight(curToken.getFrequency()/rowTotal);
}
return list;
}
private ArrayList computeDistributionByArray(int[] arrDoc){
ArrayList list;
Token curToken;
int[] arrIndex, arrFreq;
int i, j, k, nonZeroNum;
double rowTotal, mean;
rowTotal=0;
if(buf==null)
buf=new int[totalDestSignatureNum];
MathUtil.initArray(buf,0);
for(j=0;j<arrDoc.length;j++){
arrIndex = destDocSignatureMatrix.getNonZeroColumnsInRow(arrDoc[j]);
if(useDocFrequency)
arrFreq=null;
else
arrFreq=destDocSignatureMatrix.getNonZeroIntScoresInRow(arrDoc[j]);
for (k = 0; k <arrIndex.length; k++) {
if(useDocFrequency)
buf[arrIndex[k]]+=1;
else
buf[arrIndex[k]]+=arrFreq[k];
}
}
nonZeroNum=0;
for(i=0;i<buf.length;i++){
if (buf[i] > 0) {
nonZeroNum++;
rowTotal += buf[i];
}
}
if(useMeanTrim)
mean=rowTotal/nonZeroNum;
else
mean=0.5;
if(mean<rowTotal*getMinInitProb())
mean=rowTotal*getMinInitProb();
rowTotal=0;
list=new ArrayList();
for(i=0;i<buf.length;i++){
if(buf[i]>=mean){
list.add(new Token(i,buf[i]));
rowTotal+=buf[i];
}
}
for(i=0;i<list.size();i++){
curToken=(Token)list.get(i);
curToken.setWeight(curToken.getFrequency()/rowTotal);
}
return list;
}
private ArrayList computeDistributionByHash(int[] arrDoc){
ArrayList list, tokenList;
Token curToken;
int i;
double rowTotal, mean;
tokenList=countTokensByHashMap(arrDoc);
rowTotal=0;
for(i=0;i<tokenList.size();i++)
rowTotal+=((Token)tokenList.get(i)).getFrequency();
if(useMeanTrim || rowTotal*getMinInitProb()>1){
if(useMeanTrim)
mean=rowTotal/tokenList.size();
else
mean=0.5;
if(mean<rowTotal*getMinInitProb())
mean=rowTotal*getMinInitProb();
list=new ArrayList();
rowTotal=0;
for(i=0;i<tokenList.size();i++){
curToken=(Token)tokenList.get(i);
if(curToken.getFrequency()>=mean){
list.add(curToken);
rowTotal+=curToken.getFrequency();
}
}
tokenList.clear();
}
else
list=tokenList;
for(i=0;i<list.size();i++){
curToken=(Token)list.get(i);
curToken.setWeight(curToken.getFrequency()/rowTotal);
}
return list;
}
private ArrayList countTokensByHashMap(int[] arrDoc){
HashMap hash;
ArrayList list;
Token curToken;
Counter counter;
Iterator iterator;
int[] arrTerm, arrFreq;
int i,j, termNum;
hash=new HashMap();
for(j=0;j<arrDoc.length;j++){
termNum = destDocSignatureMatrix.getNonZeroNumInRow(arrDoc[j]);
if(termNum==0) continue;
arrTerm = destDocSignatureMatrix.getNonZeroColumnsInRow(arrDoc[j]);
if(useDocFrequency)
arrFreq=null;
else
arrFreq=destDocSignatureMatrix.getNonZeroIntScoresInRow(arrDoc[j]);
for(i=0;i<termNum;i++){
if (useDocFrequency)
curToken = new Token(arrTerm[i], 1);
else
curToken = new Token(arrTerm[i], arrFreq[i]);
counter=(Counter)hash.get(curToken);
if(counter==null){
counter=new Counter(curToken.getFrequency());
hash.put(curToken,counter);
}
else
counter.addCount(curToken.getFrequency());
}
}
list=new ArrayList(hash.size());
iterator=hash.keySet().iterator();
while(iterator.hasNext()){
curToken=(Token)iterator.next();
counter=(Counter)hash.get(curToken);
curToken.setFrequency(counter.getCount());
list.add(curToken);
}
hash.clear();
return list;
}
private double getMinInitProb(){
/*
if(useEM)
return Math.min(0.0001,probThreshold);
else
return probThreshold;*/
return probThreshold;
}
private ArrayList emTopicSignatureModel(ArrayList list){
Token curToken;
double[] arrCollectionProb, arrProb;
double weightSum;
int termNum;
int i, j;
termNum =list.size();
arrProb = new double[termNum];
//initialize the background model;
arrCollectionProb=new double[termNum];
weightSum=0;
for(i=0;i<termNum;i++){
curToken=(Token)list.get(i);
if(useDocFrequency)
arrCollectionProb[i]=destIndexList.getIRSignature(curToken.getIndex()).getDocFrequency();
else
arrCollectionProb[i]=destIndexList.getIRSignature(curToken.getIndex()).getFrequency();
weightSum+=arrCollectionProb[i];
}
for(i=0;i<termNum;i++)
arrCollectionProb[i]=arrCollectionProb[i]/weightSum;
//start EM
for (i = 0; i < iterationNum; i++) {
weightSum = 0;
for (j = 0; j < termNum; j++) {
curToken=(Token)list.get(j);
arrProb[j] = (1 - bkgCoeffi) * curToken.getWeight() /
( (1 - bkgCoeffi) * curToken.getWeight() + bkgCoeffi * arrCollectionProb[j]) * curToken.getFrequency();
weightSum += arrProb[j];
}
for (j = 0; j < termNum; j++){
curToken=(Token)list.get(j);
curToken.setWeight(arrProb[j]/ weightSum);
}
}
/*newList=new ArrayList(list.size());
for (j = 0; j < termNum; j++){
curToken=(Token)list.get(j);
if(curToken.getWeight()>=probThreshold)
newList.add(curToken);
}
return newList;*/
return list;
}
}
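The raw-typed Token/Counter accumulation in countTokensByHashMap above can be expressed more compactly with generics. A minimal sketch of the same frequency-merge idiom (class and method names are illustrative, not part of the original code base); the `null` frequency array mirrors the `useDocFrequency` mode, where each document contributes a count of 1 per term:

```java
import java.util.HashMap;
import java.util.Map;

public class FrequencyCounter {
    // Accumulate term frequencies across documents: each (termId, freq) pair
    // is merged into a running total per term, as countTokensByHashMap does.
    public static Map<Integer, Integer> count(int[][] docTerms, int[][] docFreqs) {
        Map<Integer, Integer> totals = new HashMap<>();
        for (int d = 0; d < docTerms.length; d++) {
            for (int k = 0; k < docTerms[d].length; k++) {
                // doc-frequency mode (freqs == null) counts 1 per document
                int freq = (docFreqs == null) ? 1 : docFreqs[d][k];
                totals.merge(docTerms[d][k], freq, Integer::sum);
            }
        }
        return totals;
    }
}
```

`Map.merge` replaces the explicit get/put-or-addCount dance around the Counter objects while keeping the same semantics.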
Leamington and Warwick Musical Society get their claws into the musical favourite next week – and will be giving it a new look. The poetry of TS Eliot is set to music by Andrew Lloyd Webber, with results that have won hearts around the world for nearly 40 years. But The Really Useful Company, which controls the rights, has ordered the society not to copy the original London set, the original London costumes, the original London make-up or the original London choreography – meaning audiences in Leamington will see the show as never before.
Of all the stars in folk music’s firmament, few shine as brightly as Yorkshire’s Kate Rusby. A career which spans over 25 years in music has shown her to be one of the finest interpreters of traditional folk songs and one of our most emotive, original songwriters. The crossover appeal Kate enjoys is unprecedented for a folk singer and has been achieved without resort to compromise.
Two of Shakespeare’s ‘darker’ comedies will be performed in Warwick next week, courtesy of the talented youngsters at Playbox Theatre. Much Ado is a mix of comic and dramatic suspense with two of Shakespeare’s wittiest and most loved romantics, while the production of Measure for Measure is given as an experiment for actors and audience to view the examination of sexual exploitation, injustice, hypocrisy and abuse of power in Shakespeare’s scathing exploration of sexual politics and power.
The Big Picture Show teams up with The Court House Warwick again to bring a modern favourite to the big screen in the magnificent ballroom. Throw on those glad rags and join in for an evening of 1920s decadence with Baz Luhrmann’s 2013 version of the F Scott Fitzgerald classic. The film, directed in Luhrmann’s typically bombastic style, stars Leonardo DiCaprio and Carey Mulligan.
Advancement flap to cover a defect between the junction of the alar base and the upper lip

Summary
Reconstruction following excision of skin lesions at the cosmetically sensitive junction between the alar base and upper lip continues to be challenging for surgeons. We describe an advancement flap from the nasolabial fold area to reconstruct such defects. Our case demonstrates a gentleman with a clinically diagnosed BCC between the alar base and upper lip. An advancement flap from the nasolabial area was designed to reconstruct the defect, with two Burow's triangles excised to prevent standing cones. The scar of the two Burow's triangles falls over the nasolabial fold, resulting in the integration of the scar within the natural line. This flap design also maintains the level of the upper lip and the shape and position of the nostril, and minimises flattening of the philtrum. Excellent cosmetic results were seen six weeks post-op.

Introduction
Reconstruction following excision of skin lesions at the cosmetically sensitive junction between the alar base and upper lip continues to be challenging for surgeons.
We describe an advancement flap from the nasolabial fold area to reconstruct such defects. Our case demonstrates a gentleman with a clinically diagnosed BCC between the alar base and upper lip. An advancement flap from the nasolabial area was designed to reconstruct the defect, with two Burow's triangles excised to prevent standing cones. The scar of the two Burow's triangles falls over the nasolabial fold, resulting in the integration of the scar within the natural line. This flap design also maintains the level of the upper lip and the shape and position of the nostril, and minimises flattening of the philtrum. Excellent cosmetic results were seen six weeks post-op.

Surgical technique
Firstly, excise the lesion with a predetermined oncological clearance margin. Following this, draw two parallel horizontal lines at the levels of the superior and inferior borders of the defect, with the lines stopping at the nasolabial fold. The superior line will be shorter, thus making this an asymmetrical advancement flap. Next, draw two Burow's triangles along the nasolabial fold (Figure 1). Incise along the two parallel horizontal lines and excise the Burow's triangles. Excising the Burow's triangles will remove standing cones (dog ears), and the scar of the excised Burow's triangle should fall along the nasolabial fold. The skin flap is then raised to the same thickness as the depth of the defect, and the two triangular corners are trimmed to fit the medial curve of the defect. The flap is then inset, in a similar fashion to an advancement flap (Figure 2).

Results
Our patient had a BCC excised at the junction between the alar base and upper lip. It was then reconstructed with an advancement flap from the nasolabial fold area. With this flap design we were able to camouflage the scar within the nasolabial fold. Excellent cosmetic results were seen six weeks post-op (Figure 3).

Discussion
Several other local flaps have been designed to cover this area.
One of these is the Cake Flap [1], a modified version of Evans' Staggered Ellipse [2], which aims to shorten the scar. However, a cake flap over the nasolabial area would create a visible scar over the upper lip. The asymmetrical nature of our flap design allows such a scar to be hidden within the nasolabial fold. The V-Y Advancement Flap [3] places scars along aesthetic borders, but can result in elevation of the red lip due to scar contracture. The Keystone Flap [4] would result in raising of the upper lip and blunting of the oral commissure. The Rotation Flap [5] described by Cerci does not necessarily disrupt the position of the nostril or upper lip, but does blunt the oral commissure. Our method maintains the level of the upper lip and preserves the shape of the oral commissure. One limitation of our technique is that the defect should ideally be located just below the alar base. A more lateral defect (towards the nasolabial fold) would limit the amount of tissue that could be advanced medially whilst still allowing for the Burow's triangle to fall over the nasolabial fold.

Declaration of Competing Interest
None.

Funding
None.

Ethical approval
This was a retrospective review, with the aforementioned patient consented for the publishing of medical photographs.
A robot graphic simulator for tele-operation

In tele-operation using robots, both the skills of an operator and information about the remote workplace are needed in order to perform given tasks successfully. But due to communication delay and insufficient information about the remote workplace, it is difficult to perform some tasks in tele-operation. In order to solve these problems, graphics techniques in tele-operation have been studied. As network technology has progressed, the dedicated communication lines of tele-operation have been replaced with public communication networks such as the Internet, and experiments using the World Wide Web for tele-operation have been conducted. In this paper, a graphic simulator is proposed and developed, which can be used to monitor robots in a remote workplace by using on-line simulation. In particular, it supports a network client/server model for tele-operation of robots. In the system, the sensor information is collected continuously and event messages are generated according to each tele-operation. The generated event messages are transferred to the graphic workstation through the network by using TCP/IP messaging and utilized to visualize the tele-operation. Dedicated communication protocols to transmit the information efficiently are developed. In order to do this, we classified several tele-operations, and the developed protocols are used to transfer the event messages to the graphic workstation.
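The event messages described above must be serialized before they can cross a TCP/IP connection to the graphic workstation. The paper does not give the wire layout, so the following is a hypothetical sketch: a fixed-header message (robot id, timestamp, payload length) followed by a UTF-8 payload, with the class name and field layout being illustrative assumptions rather than the paper's actual protocol:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical fixed-layout event message:
// [int robotId][long timestamp][short payloadLength][UTF-8 payload]
public class EventMessage {
    final int robotId;
    final long timestamp;
    final String payload;

    EventMessage(int robotId, long timestamp, String payload) {
        this.robotId = robotId;
        this.timestamp = timestamp;
        this.payload = payload;
    }

    // Serialize into a byte array suitable for writing to a TCP socket stream.
    byte[] encode() {
        byte[] body = payload.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + 8 + 2 + body.length);
        buf.putInt(robotId).putLong(timestamp).putShort((short) body.length).put(body);
        return buf.array();
    }

    // Parse a message received on the graphic-workstation side.
    static EventMessage decode(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        int id = buf.getInt();
        long ts = buf.getLong();
        byte[] body = new byte[buf.getShort()];
        buf.get(body);
        return new EventMessage(id, ts, new String(body, StandardCharsets.UTF_8));
    }
}
```

A length-prefixed layout like this lets the receiver frame messages on a continuous TCP stream without a delimiter scan.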
Mountain View Christmas Tree Farm collected about 20 silver tip firs to help the Trees for Troops program, which sends Christmas trees to military families and troops overseas. This is the second year the Paradise tree farm participated in the program, getting trees down to Dixon for pickup and distribution by FedEx Corp. The tree farm, on Mountain View Drive, is also collecting money to be forwarded to the program, or individual donations can be made to help the program via www.ChristmasSPIRITFoundation.org, which is the program’s parent organization.
package ie.gmit.sw.gameassets;
import java.util.*;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import ie.gmit.sw.game.GameRunner;
import ie.gmit.sw.maze.Cell;
import ie.gmit.sw.maze.Direction;
public class Navigator implements Item {
private BufferedImage image;
private int goalRow;
private int goalCol;
private PriorityQueue<Cell> open;
private List<Cell> closed = new ArrayList<Cell>();
private Map<Cell, Cell> cameFrom = new HashMap<>();
private Map<Cell, Double> gscores = new HashMap<>();
public Navigator(Cell initial) {
goalRow = GameRunner.getTriwizardCup().getRow();
goalCol = GameRunner.getTriwizardCup().getCol();
// compare f-scores as doubles; truncating each operand to int can break the comparator contract
open = new PriorityQueue<Cell>(20, (Cell cell1, Cell cell2) -> Double.compare(
cell1.getDistanceToCell(goalRow, goalCol) + gscores.get(cell1),
cell2.getDistanceToCell(goalRow, goalCol) + gscores.get(cell2)));
try {
image = ImageIO.read(new File("resources/compass.png"));
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public BufferedImage getImage() {
return image;
}
@Override
public void setImage(BufferedImage image) {
this.image = image;
}
public List<Cell> findPath(Cell start){
gscores.put(start, 0.0);
open.add(start);
//start cell has 0 abs cost
while(!open.isEmpty()){
Cell current = open.poll();
if(current == GameRunner.getTriwizardCup()){
return reconstructPath(current);
}
List<Cell> neighbours = new ArrayList<>();
List<Direction> available = current.getNeighbours();
if(available.contains(Direction.EAST)) neighbours.add(current.getEast());
if(available.contains(Direction.WEST)) neighbours.add(current.getWest());
if(available.contains(Direction.NORTH)) neighbours.add(current.getNorth());
if(available.contains(Direction.SOUTH)) neighbours.add(current.getSouth());
open.remove(current);
closed.add(current);
for(Cell neighbour : neighbours){
if(closed.contains(neighbour)) continue;
double score = gscores.get(current) + 1.0;
if(!gscores.containsKey(neighbour) || score < gscores.get(neighbour)){
cameFrom.put(neighbour, current);
gscores.put(neighbour, score);
// remove and re-add so the priority queue re-evaluates the cell with its improved f-score,
// instead of accumulating duplicates with stale priorities
open.remove(neighbour);
open.add(neighbour);
}
}
}
return null;
}
private List<Cell> reconstructPath(Cell current){
List<Cell> returnList = new ArrayList<>();
returnList.add(current);
do{
if(cameFrom.containsKey(current)){
Cell next = cameFrom.get(current);
returnList.add(next);
current = next;
}else{
Collections.reverse(returnList);
return returnList;
}
}
while(true);
}
}
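The same search bookkeeping can be exercised independently of the game classes. The following is a self-contained sketch (class name and grid representation are illustrative) of A* on a 4-connected grid with a Manhattan heuristic; unlike the Navigator above, it freezes the f-score into each queue entry at insertion time, so improved routes simply enqueue a fresh entry and stale duplicates are skipped on poll:

```java
import java.util.*;

// Minimal A* on a 4-connected grid; wall[r][c] == true marks a blocked cell.
public class GridAStar {
    public static List<int[]> findPath(boolean[][] wall, int[] start, int[] goal) {
        int rows = wall.length, cols = wall[0].length;
        double[][] g = new double[rows][cols];
        for (double[] row : g) Arrays.fill(row, Double.POSITIVE_INFINITY);
        int[][][] cameFrom = new int[rows][cols][];
        boolean[][] closed = new boolean[rows][cols];
        // Queue entries carry their f-score as e[0], so priorities never mutate in place.
        PriorityQueue<double[]> open = new PriorityQueue<>(Comparator.comparingDouble(e -> e[0]));
        g[start[0]][start[1]] = 0;
        open.add(new double[]{manhattan(start, goal), start[0], start[1]});
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!open.isEmpty()) {
            double[] e = open.poll();
            int[] cur = {(int) e[1], (int) e[2]};
            if (closed[cur[0]][cur[1]]) continue;               // stale duplicate entry
            if (cur[0] == goal[0] && cur[1] == goal[1]) return reconstruct(cameFrom, cur);
            closed[cur[0]][cur[1]] = true;
            for (int[] mv : moves) {
                int r = cur[0] + mv[0], c = cur[1] + mv[1];
                if (r < 0 || r >= rows || c < 0 || c >= cols || wall[r][c] || closed[r][c]) continue;
                double score = g[cur[0]][cur[1]] + 1.0;
                if (score < g[r][c]) {                          // better route found
                    g[r][c] = score;
                    cameFrom[r][c] = cur;
                    open.add(new double[]{score + manhattan(new int[]{r, c}, goal), r, c});
                }
            }
        }
        return null;                                            // goal unreachable
    }

    private static double manhattan(int[] a, int[] b) {
        return Math.abs(a[0] - b[0]) + Math.abs(a[1] - b[1]);
    }

    private static List<int[]> reconstruct(int[][][] cameFrom, int[] cur) {
        List<int[]> path = new ArrayList<>();
        for (int[] n = cur; n != null; n = cameFrom[n[0]][n[1]]) path.add(n);
        Collections.reverse(path);
        return path;
    }
}
```

Freezing f into the entry sidesteps a known pitfall: `java.util.PriorityQueue` never reorders elements whose comparison keys change after insertion.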
Chaplygin Gas of Tachyon Nature Imposed by Symmetry and Constrained via H(z) Data

An action of general form is proposed for a Universe containing matter, radiation and dark energy. The latter is interpreted as a tachyon field non-minimally coupled to the scalar curvature. The Palatini approach is used when varying the action, so the connection is given by a more generic form. Both the self-interaction potential and the non-minimal coupling function are obtained by constraining the system to be invariant under a global point transformation of the fields (Noether symmetry). The only possible solution is shown to be that of minimal coupling and constant potential (Chaplygin gas). The behavior of the dynamical properties of the system is compared to recent observational data, which implies that the tachyon field must indeed be dynamical.

I. INTRODUCTION

Tachyons have been vastly studied in M/string theories. Since the realization of its condensation properties, researchers have gained interest in its applications in cosmology. At first, there was the problem of describing the string-theory tachyon by an effective field theory that would lead to the correct lagrangian in classical gravity. The first classical description of the tachyon field ( ) addressed the lagrangian problem, making way for building the first model within tachyon cosmology ( ). Being a special kind of scalar field, the tachyon presents negative pressure, making it a natural candidate to explain dark energy ( ). The inflationary period could also be explained if one considers the inflaton to behave as a tachyon field. Many different attempts were made under this assumption, testing a wide variety of self-interacting potentials such as power laws, exponentials and hyperbolic functions of the field ( ).
The possible scenario where the tachyon plays both roles, inflaton and dark energy, has also been studied in the works ( ), where the first establishes constraints on the potential so that the radiation era could commence. The studies above introduced a tachyon field which is minimally coupled through the metric, hence providing just another source for the gravitational field. Nevertheless, such fields might also be considered to be non-minimally coupled to the scalar curvature, becoming part of the spacetime geometry by generating a new degree of freedom for gravity. In this scenario, the gravitational constant G becomes a variable function of spacetime. Tachyon fields in the non-minimal coupling context were analyzed for both the inflationary period ( ) and the current era ( ). In those cases, the coupling functions and the self-interacting potentials were given in an ad-hoc manner, as exponentials and power-law forms. Every time we choose a different coupling or potential function, we create a new cosmological model, or even a new theory of gravity in the non-minimal case. This is a very difficult task, since the lack of experiments and observations obligates one to find heuristic arguments to support the choice made. The advantage of searching for symmetries in systems where the lagrangian is known is widely appreciated: it not only helps us find exact solutions but might also give us physically meaningful constants of motion. What is less appreciated is the fact that one can constrain a system (one that lacks a closed form of the functional) to present symmetry. In what concerns non-minimally coupled tachyon fields, Noether symmetries were used to establish the coupling and self-interaction functions in the papers ( ). The latter makes use of the Palatini approach, in a way to generalize the theory, since the non-minimal coupling can provide a metric-independent connection.
The Chaplygin gas was first introduced by Chaplygin in 1902 ( ) in the realm of aerodynamics. This gas features an exotic equation of state (p_c ∝ −1/ρ_c), which was originally used to describe the lifting force on a wing of an airplane. Because its pressure is negative, the Chaplygin gas became a good candidate to explain dark energy ( ). The attempts to correlate fields and fluids soon showed that the constant-potential tachyon field behaves as a Chaplygin gas ( ). Its equation of state allows generalizations, giving rise to the so-called Generalized Chaplygin Gas, or just GCG. This gas exerts a negative pressure proportional in modulus to the inverse of some power of its energy density and was investigated in works such as ( ), including its relationship to a tachyon field whose potential is now no longer constant ( ). Originally, the equation of state of a Chaplygin gas was so simple that even with the exhaustive studies of the GCG there was still plenty of room for further generalizations. Endowing the EoS with a linear barotropic term, which alone would describe an ordinary fluid, enriched the GCG, which under this assumption is called the Modified Chaplygin Gas, MCG. Its motivations lie precisely on the possible field nature of the gas ( ), and its parameters have been constrained via observational analysis ( ). Further generalizations account for higher-order energy density terms in the EoS of a Chaplygin gas, the Extended Chaplygin Gas, ECG ( ). In this work, we start from a very general lagrangian for a tachyon field non-minimally coupled to the scalar curvature. Matter and radiation fields are also included in the system as perfect fluids from the beginning. The connection is initially taken to be metric independent and the action is also varied with respect to it, a process known as the Palatini approach.
Since we consider a flat, homogeneous and isotropic Universe, the flat Friedmann–Lemaître–Robertson–Walker (FLRW) metric is used to rewrite our functional in the form of a point-like lagrangian. This presents an extra term compared with the usual case, which comes from the independent connection. The system is constrained to the one which presents invariance under continuous point transformations, or a Noether symmetry. The coupling and self-interaction potential functions of the tachyon field are then determined. Every new field added to the lagrangian clearly influences these point transformations. For that matter, it is important to start off from a complete system (including the radiation fields) if one takes symmetry as a first principle. We show that for this system to be Noether symmetric, the non-minimal coupling must vanish and the self-interaction potential must be constant, hence the tachyonic Chaplygin gas. The system is initially composed of five free parameters, namely the Hubble constant, the three density parameters for recent times and the normalized constant potential. The radiation parameter is then established in an ad-hoc way, so we are left with four free parameters. These are determined via the χ² analysis of the recent H(z) data from SNe and gamma-ray bursts. We show that although dark energy tends asymptotically to a cosmological constant, any small discrepancy makes the tachyon field dynamically active, so the Chaplygin gas presents the property of transition from pressureless matter to dark energy. In order to clarify the notation and conventions used here, we remark: the metric signature is (+, −, −, −); the Levi-Civita connection is written with a tilde, Γ̃, while the independent connection, Γ, is given without it; natural constants were rescaled to unity (8πG = c = 1). Throughout the whole paper, derivatives in equations are presented as follows: dots represent time derivatives, while ∂_{qⁱ} ≡ ∂/∂qⁱ and ∂_{q̇ⁱ} ≡ ∂/∂q̇ⁱ stand for partial derivatives with respect to the generalized coordinate qⁱ and velocity q̇ⁱ, respectively.
Throughout the whole paper, derivatives in equations are presented as follows: dots represent time derivatives, while ∂ q i ≡ ∂ ∂q i and ∂qi ≡ ∂ ∂q i stand for partial derivatives with respect to the generalized coordinate q i and velocityq i, respectively. II. ACTION AND POINT LAGRAGIAN A generalization of the general theory of relativity is proposed through a non-minimal coupling of a function of the tachyon field. The general action for both geometry and source is written where is the tachyon field, f () is the non-minimal coupling function, V () is the self-interaction potential and L s is the lagrangian density of other sources (matter and radiation). In order to attain a more general theory we allow the connection to be metric independent. The variation of the action with respect to the connection results in the well-known form where is the Levi-Civita connection. Usually, the self-interaction potential and the coupling function are set in a ad-hoc manner. Instead of approaching the problem this way, we would like to constrain the system to that which has a Noether symmetry. This is done by operating a variational vector field on the point-like lagrangian, and for this, we need to rewrite it on a specific metric. For a flat, homogeneous and isotropic Universe, spacetime is described by the flat FLRW metric. The point-like functional in then becomes and s is point-like lagrangian for a perfect fluid ( ). In this system, besides dark energy, the Universe is composed by matter (ordinary and dark) and radiation. Both dark matter and ordinary matter are treated as dust, hence represented by the same entity here. As the Universe expands, matter's density decrease with a −3 while radiation's with a −4. The lagrangian above contains second-order terms which are more tedious to deal with. 
Since the action limits are fixed, we can integrate these terms by parts without loss of generality, so that we can work with a first-order lagrangian, where ρ⁰_m and ρ⁰_r are the recent values of the total density of matter and radiation, respectively, in the Universe.

III. NOETHER SYMMETRIES

Our system may now be constrained to one endowed with a Noether symmetry by finding the forms of f(φ) and V(φ) that allow a symmetrical point transformation. This means that our lagrangian shall have such a form that a specific continuous transformation of the generalized coordinates, a → ā and φ → φ̄, preserves the general form of the functional. In order to find the functional forms of V(φ) and f(φ) that allow such a transformation, we need to apply a certain vector field on the lagrangian. This vector field, X, is then called a variational symmetry, or complete lift, and reads

X = α ∂/∂a + β ∂/∂φ + α̇ ∂/∂ȧ + β̇ ∂/∂φ̇,

where the coefficients α and β are functions of the generalized coordinates (a, φ). The operation of X on the lagrangian is simply the Lie derivative of L along this vector field (L_X L). According to the Noether theorem, if this derivative vanishes, there will be a conserved quantity named the Noether charge. Hence, this will be a variational symmetry if L_X L = 0, where Δ = d/dt is the dynamical vector field and θ_L is the locally defined Cartan one-form. The brackets represent the scalar product between vector field and one-form, in the Dirac notation. Thus, the Noether charge reads Q = ⟨θ_L, X⟩. The condition reads in full form

L_X L = α ∂_a L + β ∂_φ L + (ȧ ∂_a α + φ̇ ∂_φ α) ∂_ȧ L + (ȧ ∂_a β + φ̇ ∂_φ β) ∂_φ̇ L = 0,

which for our system becomes a set of conditions, where α = α₁ and β = α₂. The equation above must hold for any value of ȧ and φ̇. If it were a polynomial equation in these dynamical variables, one could simply set all coefficients equal to zero, but the different powers of the square roots make the task a little more complicated.
We shall differentiate with respect to these quantities and evaluate the resulting equations for different values of ȧ and φ̇; then we get the solutions for α(a, φ), β(a, φ), V(φ) and f(φ).

IV. EQUATIONS OF MOTION

The lagrangian, for constant self-interaction potential and f = 1/2 (to regain Einstein's constant according to the notation adopted), then follows. The Friedmann equation is obtained through the energy function E_L = ȧ ∂_ȧ L + φ̇ ∂_φ̇ L − L, which gives

H² = ρ/3,

where H = ȧ/a is the Hubble parameter and ρ = ρ_m + ρ_r + ρ_φ is the total energy density of the fields, with ρ_φ the energy density of the tachyon field, and ρ_m = ρ⁰_m/a³ and ρ_r = ρ⁰_r/a⁴ the matter's and the relativistic material's densities, respectively. The Euler-Lagrange equation for the scale factor, together with the Friedmann equation, provides the acceleration equation

ä/a = −(ρ + 3p)/6,

where p = p_r + p_φ is the pressure of the fields (as usual, matter behaves as dust, so p_m = 0), and p_r = ρ_r/3. The pressure exerted by the tachyon field is

p_φ = −V₀ √(1 − φ̇²).

The Euler-Lagrange equation for the tachyon field gives the generalized Klein-Gordon equation for the field, which is the same as the fluid equation for dark energy when written in terms of its energy density and pressure,

ρ̇_φ + 3H(ρ_φ + p_φ) = 0.

An equation of state in the form p_φ(ρ_φ) can now be written for the tachyon field. From the relations above we see that p_φ = −V₀²/ρ_φ. The Chaplygin gas is a fluid described by an equation of state of the kind

p = −A/ρ,

where A is a positive defined constant, which is precisely the same form for A = V₀². Thus, as widely known from the literature, see e.g. ( ), a tachyon field only minimally coupled to the scalar curvature, with constant potential, behaves as a Chaplygin gas.

V. NOETHER CONSTANT

Any lagrangian system endowed with a Noether symmetry will present a constant of motion, as stated by Noether's theorem. The Noether charge here is simply the first integral of the field equation.

VI. SOLUTIONS

The energy density of the Chaplygin gas, and its pressure, can be rewritten as functions of the scale factor, using the equations above.
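Collecting the standard minimally coupled, constant-potential tachyon relations, the route to the equation of state used below can be summarized in a few lines:

```latex
\rho_\phi = \frac{V_0}{\sqrt{1-\dot\phi^2}}, \qquad
p_\phi = -V_0\sqrt{1-\dot\phi^2}
\;\;\Longrightarrow\;\;
p_\phi\,\rho_\phi = -V_0^2
\;\;\Longrightarrow\;\;
p_\phi = -\frac{V_0^2}{\rho_\phi},
```

which is the Chaplygin equation of state p = −A/ρ with A = V₀².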
These forms are well known from literature and read

ρ_φ(a) = √(V₀² + β₀²/a⁶),  p_φ(a) = −V₀² / √(V₀² + β₀²/a⁶),

where β₀ is the Noether constant. From this equation, we see the dual nature of the Chaplygin gas, which behaves as dust matter for a ≪ 1 and as a cosmological constant for a ≫ 1. In order to obtain our parameters' curves with respect to the redshift, we use the relationship a = 1/(1 + z). The Friedmann equation then becomes a function of z, and can be written in dimensionless form by dividing it by the Hubble constant, H₀² ≡ H²(z = 0) = ρ₀/3, where ρ₀ is the current density of all fluids in the Universe, giving

H²/H₀² = Ω⁰_m (1 + z)³ + Ω⁰_r (1 + z)⁴ + √(V̄₀² + β̄₀² (1 + z)⁶),

where Ω⁰_i ≡ ρ⁰_i/ρ₀ is the current density parameter of the i-th component. The bars indicate that the constants have also been divided by the current density, i.e., β̄₀ = β₀/ρ₀ and V̄₀ = V₀/ρ₀; the density parameter for the Chaplygin gas is then simply

Ω⁰_φ = √(V̄₀² + β̄₀²).

This last relationship allows us to investigate the evolution of the Hubble parameter in terms of dark energy's density parameter, instead of the Noether charge. Finally, we write

H²/H₀² = Ω⁰_m (1 + z)³ + Ω⁰_r (1 + z)⁴ + √(V̄₀² + (Ω⁰_φ² − V̄₀²)(1 + z)⁶).

Recent observations ( ) limit the range of values associated with these parameters. In particular, there is great confidence that Ω⁰_r ∼ 8.5 × 10⁻⁵, so we may adopt this result but will constrain the four remaining parameters (namely H₀, Ω⁰_m, Ω⁰_φ and V̄₀) via H(z) data. The values assumed by these parameters that best fit the observational data are the ones that minimize the function

χ² = Σᵢ [H_obs(zᵢ) − H(zᵢ)]² / σᵢ².

A primary condition for a good fit is that χ²/dof ≤ 1, where dof stands for degrees of freedom and in this case is given by the number of data points, dof = 25. Our minimized χ² is given by H₀ = 69.6524, Ω⁰_m = 0.288261, Ω⁰_φ = 0.711654 and V̄₀ = 0.709957, resulting in χ² = 12.8676 and χ²/dof = 0.507504. Marginalizing over two parameters allows us to analyze the correlation of the remaining two by plotting the contours of their distributions within some confidence interval. The correlation between dark energy and the matter density parameter is strong. The H(z) data do not seem to impose very strict constraints on our current matter density, but for dark energy we see that within 3σ all points lie in the range 0.697 ≤ Ω⁰_φ ≤ 0.726 (Fig. 1). The correlation between Ω⁰_φ and V̄₀ is much stronger, as one would expect since the potential defines the energy density. Nevertheless, it is interesting enough to see the form these ellipses take in Fig. 2,
The H(z) data do not seem to impose very strict constraints on the current matter density, but for dark energy we see that within 3σ all points lie in the range 0.697 ≤ Ω⁰_φ ≤ 0.726, Fig. 1. The correlation between Ω⁰_φ and V̄₀ is much stronger, as one would expect, since the potential defines the energy density. Nevertheless, it is interesting to see the form these ellipses take in Fig. 2, the current density parameter for dark energy being given by the relation above. The case Ω⁰_φ = V̄₀ is just the cosmological-constant scenario. From the figure we see very thin ellipses with a slope close to unity. The best-fit parameters listed above show a very small difference between the two, and as we will see this difference grows larger in the past, but there is a strong tendency toward the cosmological constant. The evolution of the density parameters on different redshift scales is shown below. In Fig. 3 radiation is neglected, for its energy density is too small to be observed. As the redshift increases, dark energy falls, but ever more slowly, and for values z ≥ 2 its density decreases so smoothly that it almost appears to be constant. This is due to the small difference between Ω⁰_φ and V̄₀, which makes the tachyon field dynamical and lets the Chaplygin-gas property thrive: from the equations above it becomes clear that dark energy decays into matter fields as the redshift grows and the Σ̄₀²(1 + z)⁶ term outpaces V̄₀². Furthermore, we are then looking at the matter era, hence the almost constant behavior. In Fig. 4 we see the evolution of the density parameters for radiation and for the combined Chaplygin gas and matter, since at these redshifts the gas behaves as matter. In this scenario, the equality of the densities happens a bit earlier in our history than expected in the cosmological-constant case. For instance, we have z_eq = 3968.15 (approximately 37.5 thousand years after the beginning of the Universe), whereas for a non-dynamical field describing dark energy we would expect z_eq ∼ 3600 (about 47 thousand years old).
Although a transition happening at higher redshifts does not influence the time at which recombination occurs (since that depends on the temperature), the earlier increase in dark matter's energy density allows it to combine and form potential wells sooner, giving room for structure formation, some of which we have recently discovered and turned out to be quite old ( ). The ratio ω = p_φ/ρ_φ between dark energy's pressure and energy density is shown in Fig. 5. As expected, the ratio tends asymptotically to −1 as the Universe expands but approaches zero quickly as the redshift increases, when dark energy finally becomes a pressureless field. This makes the role of the Chaplygin gas clearer, even though we approach the cosmological constant in recent times. The deceleration parameter q is plotted in Fig. 6. The transition from a decelerated to an accelerated expansion happens at z_t = 0.65, while for the current time q₀ = −0.56, both results in agreement with observations ( ). Although the Chaplygin gas has been extensively studied before, new observational data provide great motivation to revisit the model and set new constraints. The evolution of the Hubble parameter described by this model, together with the data we used to constrain our parameters, is shown in Fig. 7.

[Fig. 3 caption: The dotted lines stand for dark energy while the solid lines represent dark matter. As we enter the matter-dominated era, dark energy decays into matter fields, contributing even more to their dominance, with its energy density falling ever more slowly, assuming an almost constant behavior.]

[Fig. 5 caption: Ratio between pressure and energy density for the Chaplygin gas. Any small difference between Ω⁰_φ and V̄₀ grows considerably with the redshift, and dark energy eventually becomes a pressureless field, hence matter.]

Unfortunately, there are not satisfactorily many measurements to build solid statistics for this parameter, as there are for the distance modulus, for instance.
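The ω(z) and q(z) curves just described can be reproduced with a short numerical sketch (again not the paper's code; function names are ours, the best-fit constants are those quoted earlier, ω follows from p = −A/ρ with A = V̄₀², and q = ½ Σᵢ (1 + 3wᵢ) Ωᵢ(z)/E(z)²):

```python
import math

OMEGA_M0 = 0.288261   # best-fit current matter density parameter
OMEGA_DE0 = 0.711654  # best-fit current Chaplygin-gas density parameter
V0_BAR = 0.709957     # dimensionless constant potential
OMEGA_R0 = 8.5e-5     # current radiation density parameter

def omega_de(z):
    # Chaplygin-gas density: rho(a) = sqrt(V0^2 + (Omega_de0^2 - V0^2)/a^6)
    return math.sqrt(V0_BAR**2 + (OMEGA_DE0**2 - V0_BAR**2) * (1.0 + z) ** 6)

def w_de(z):
    # equation-of-state ratio w = p/rho = -A/rho^2 with A = V0^2
    return -V0_BAR**2 / omega_de(z) ** 2

def deceleration(z):
    # q = (1/2) sum_i (1 + 3 w_i) Omega_i(z) / E^2  (w = 0 for dust, 1/3 for radiation)
    matter = OMEGA_M0 * (1.0 + z) ** 3
    rad = OMEGA_R0 * (1.0 + z) ** 4
    de = omega_de(z)
    e2 = matter + rad + de
    return 0.5 * (matter + 2.0 * rad + (1.0 + 3.0 * w_de(z)) * de) / e2
```

This reproduces q₀ ≈ −0.56, a sign change of q near z ≈ 0.65, and ω running from ≈ −1 today to 0 in the far past.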
Also, the errors associated with the data from gamma-ray bursts are much bigger than one would desire them to be. Nevertheless, these sources provide information from a much younger Universe compared to SNe, making them worthwhile for testing models and constraining parameters.

VII. CONCLUSIONS

In this work we started from a general action where a tachyon field represents the nature of dark energy. We allowed it to be non-minimally coupled to the scalar curvature and we considered the connection to be independent through the Palatini approach. Dark matter, baryonic matter and relativistic material were included as source fields, as our intention was to build a more complete model. Instead of establishing the self-interaction potential and the coupling function in an ad hoc manner, we stated that symmetry should play a more primary role, and only functions capable of composing a continuous and symmetric point transformation on the generalized coordinates were considered. This led to the simpler case where the tachyon field is only minimally coupled and its potential is constant, behaving as a Chaplygin gas.

[Fig. 7 caption: The more dynamical the tachyon field is, the greater the inclination of the H(z) curve. Even with such big error bars, we can identify the case V̄₀ = 0.69 as being the one that accommodates the most data.]

The theoretical framework of the Chaplygin gas has been deeply investigated and is widely found in the literature. For this reason, we focused on more recent observational data to constrain the dynamics of the system. The Hubble parameter suggests that, should the Chaplygin gas be the underlying nature of dark energy, it must be slightly dynamical, as opposed to its cosmological-constant particular case, since, as we have seen, any small difference between its constant potential and current density parameter grows considerably with the redshift.
As this component presents a dual behavior, acting as dark energy at small redshifts and decaying into matter fields deeper in the past, the matter era begins earlier in the history of the Universe, which could help explain older structures.
my_dict = {'class': 3}
my_dict['css_class'] = ""
if my_dict['class']:
my_dict['css_class'] = 'class %(class)s' % my_dict
my_dict['tmp'] = 'classes %(css_class)s' % my_dict
my_dict['tmp'] = 'classes %(claz)s' % <warning descr="Key 'claz' has no corresponding argument">my_dict</warning>
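The fixture above is inspection test data: the last line is flagged statically because 'claz' is not a key of my_dict. At runtime the same mistake surfaces as a KeyError (a minimal standalone illustration):

```python
my_dict = {'class': 3}

# %-formatting with a mapping looks up each %(name)s key in the dict
ok = 'class %(class)s' % my_dict  # substitutes the value of my_dict['class']

# a key missing from the mapping raises KeyError instead of formatting
try:
    'classes %(claz)s' % my_dict
    missing_key_raised = False
except KeyError:
    missing_key_raised = True
```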
Changes to the NIH website, as tracked by the Environmental Data & Governance Initiative.
A unit of the National Institutes of Health has removed references to climate “change” from its website, deletions that one environmental group criticizes as “cleansing” but an NIH official describes as minor.
The revisions occurred on the National Institute of Environmental Health Sciences site. A headline that read “Climate Change and Human Health,” for example, was altered to “Climate and Human Health.” A menu title that read “Climate Change and Children's Health” in June now appears as “Climate and Children's Health.” Links to a fact sheet on “Climate Change and Human Health” also were removed.
“The cleansing continues,” said David Doniger, director of the climate and clean air program at the Natural Resources Defense Council. “But they’re not going to be able to erase the science, or the truth, by scrubbing websites.”
The changes were revealed in a report by the Environmental Data and Governance Initiative, a group of nonprofits and academics who monitor what they call “potential threats” to federal policy and scientific research on energy and the environment.
But Christine Flowers, the NIEHS director of communications, downplayed the changes Wednesday. She said she made them as she added and moved information on the site over a period of months.
“It’s a minor change to a title page,” Flowers said of one headline alteration, “but the information we provide remains the same. In fact, it’s been expanded.”
The phrase “climate change” still can be found several times in the text below the headline that now reads “Climate and Human Health.” Also still available is a “Climate Change and Human Health Literature Portal.”
Similar word changes have been made on Interior Department, Transportation Department and Environmental Protection Agency website pages that mentioned climate change. Scientists inside and outside the government have questioned the motivations because of top Trump administration officials' doubts about how much human activity influences global warming.
In some cases, though, career staffers may have been responsible for the new wording on sites in an effort to avoid administration scrutiny on hot-button issues. The EPA’s Climate Ready Water Utilities site was renamed before President Trump took office — to “Creating Resilient Water Utilities.”
In April, the EPA took down several website pages that contained detailed climate data and scientific information. That action, on the eve of a large demonstration in Washington about protecting the environment, included removal of a Web page explaining climate change that had been on the site for nearly two decades.
The NIEHS, one of 27 institutes and centers that comprise the NIH, focuses on the environment's impact on health. Most of NIH, the nation's premier biomedical research campus, is in Bethesda, Md., but the NIEHS is in Research Triangle Park, N.C.
// SetValue assigns a request parameter value to the i-th struct field via reflection
func SetValue(paramType reflect.Type, paramElem reflect.Value, request commons.BeeRequest, i int) {
var structField = paramType.Field(i)
fieldName := structField.Name
fieldTag := structField.Tag
fieldType := GetFieldType(structField)
field := paramElem.FieldByName(fieldName)
var paramValues []string
if fieldTag != "" {
fieldTagName := fieldTag.Get(Field)
if fieldTagName != "" {
paramValues = request.FormValues(fieldTagName)
}
}
if paramValues == nil || len(paramValues) < 1 {
paramValues = request.FormValues(fieldName)
}
if (paramValues == nil || len(paramValues) < 1) && fieldType != data_type.BeeFile {
return
}
oneParam := paramValues[0]
if fieldType != data_type.BeeFile && oneParam == "" {
return
}
switch fieldType {
case data_type.Int:
val, err := strconv.ParseInt(oneParam, 10, 64)
if err != nil {
errorPrint(fieldName, err)
return
}
field.SetInt(val)
case data_type.Uint:
val, err := strconv.ParseUint(oneParam, 10, 64)
if err != nil {
errorPrint(fieldName, err)
return
}
field.SetUint(val)
case data_type.Float:
val, err := strconv.ParseFloat(oneParam, 64)
if err != nil {
errorPrint(fieldName, err)
return
}
field.SetFloat(val)
case data_type.Bool:
val, err := strconv.ParseBool(oneParam)
if err != nil {
errorPrint(fieldName, err)
return
}
field.SetBool(val)
case data_type.String:
field.SetString(oneParam)
case data_type.Slice:
field.Set(reflect.ValueOf(paramValues))
case data_type.BeeFile:
contentType := request.ContentType()
if commons.IsFormData(contentType) {
beeFile, err := request.GetFile(fieldName)
if err != nil {
errorPrint(fieldName, err)
return
}
field.Set(reflect.ValueOf(*beeFile))
}
}
}
Aspirin Protected the Nitric Oxide/Cyclic GMP Generating System in Human Peritoneum ♦ Objective Changes in the expression of endothelial nitric oxide synthase (eNOS) in the peritoneum could be involved in the peritoneal dysfunction associated with peritoneal inflammation. The aim of the present study was to analyze the effect of Escherichia coli lipopolysaccharide (LPS) on eNOS expression in samples of human peritoneum. The effect of aspirin, a drug with anti-inflammatory properties, was also determined. ♦ Results The eNOS protein expressed in human peritoneal tissue was reduced by LPS (10 µg/mL) in a time-dependent manner. eNOS was expressed mainly in capillary endothelial cells and mesothelial cells. Anti-inflammatory doses of aspirin (1–10 mmol/L) restored eNOS expression in LPS-stimulated human peritoneal tissue samples. The main intracellular receptor of NO, soluble guanylate cyclase (sGC), was also downregulated by LPS. This effect was prevented by aspirin (5 mmol/L). ♦ Conclusion Protein expression of the eNOS/sGC system in the peritoneal tissue was downregulated by LPS. High doses of aspirin protected both eNOS protein expression and sGC in human peritoneum. These findings suggest a new mechanism of action of aspirin that could be involved in the prevention of peritoneal dysfunction during inflammation.
package com.example.appcenter.sampleapp_android;
import android.app.Dialog;
import android.content.DialogInterface;
import android.os.Bundle;
import android.support.v4.app.DialogFragment;
import android.support.v4.app.Fragment;
import android.support.v7.app.AlertDialog;
import android.view.LayoutInflater;
import android.view.View;
import android.view.View.OnClickListener;
import android.view.ViewGroup;
import android.widget.Button;
import com.microsoft.appcenter.analytics.Analytics;
import java.util.HashMap;
import java.util.Map;
public class AnalyticsActivity extends Fragment implements OnClickListener {
private static final String pageName = "Analytics";
private static Map<String, String> properties = new HashMap<>();
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
ViewGroup rootView = (ViewGroup) inflater.inflate(
R.layout.analytics_root, container, false);
Button eventButton = (Button) rootView.findViewById(R.id.customEventButton);
eventButton.setOnClickListener(this);
Button colorButton = (Button) rootView.findViewById(R.id.customColorButton);
colorButton.setOnClickListener(this);
return rootView;
}
public static AnalyticsActivity newInstance() {
Bundle args = new Bundle();
AnalyticsActivity fragment = new AnalyticsActivity();
fragment.setArguments(args);
return fragment;
}
public static CharSequence getPageName() {
return pageName;
}
public void onClick(View view) {
switch (view.getId()) {
case R.id.customEventButton:
DialogFragment eventDialog = new EventDialog();
eventDialog.show(getFragmentManager(), "eventDialog");
break;
case R.id.customColorButton:
DialogFragment colorDialog = new ColorDialog();
colorDialog.show(getFragmentManager(), "colorDialog");
break;
}
}
public static class EventDialog extends DialogFragment {
public Dialog onCreateDialog(Bundle savedInstanceState) {
Analytics.trackEvent("Sample event");
AlertDialog.Builder builder = new AlertDialog.Builder(getActivity());
builder.setMessage("Event sent").setPositiveButton("OK", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int id) {
}
});
return builder.create();
}
}
public static class ColorDialog extends DialogFragment {
public Dialog onCreateDialog(Bundle savedInstanceState) {
AlertDialog.Builder builder = new AlertDialog.Builder(getActivity());
CharSequence[] colors = {"Yellow", "Blue", "Red"};
builder.setTitle("Pick a color").setItems(colors, new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int index) {
switch (index) {
case 0:
properties.put("Color", "Yellow");
Analytics.trackEvent("Color event", properties);
break;
case 1:
properties.put("Color", "Blue");
Analytics.trackEvent("Color event", properties);
break;
case 2:
properties.put("Color", "Red");
Analytics.trackEvent("Color event", properties);
break;
}
}
});
return builder.create();
}
}
}
// source/rendering/wrappers/ImageView.cpp
#include "ImageView.h"
#include "rendering/Layouts.h"
#include "rendering/Renderer.h"
namespace vl {
RImage::RImage(vk::ImageType imageType, const std::string& name, vk::Extent3D extent, vk::Format format,
uint32 mipLevels, uint32 arrayLayers, vk::ImageLayout finalLayout, vk::MemoryPropertyFlags properties,
vk::ImageUsageFlags usageFlags, vk::ImageCreateFlags flags, vk::ImageLayout initialLayout,
vk::SampleCountFlagBits samples, vk::SharingMode sharingMode, vk::ImageTiling tiling)
: format(format)
, extent(extent)
, aspectMask(rvk::getAspectMask(usageFlags, format))
, samples(samples)
, flags(flags)
, arrayLayers(arrayLayers)
, mipLevels(mipLevels)
, isDepth(rvk::isDepthFormat(format))
, name(name)
{
vk::ImageCreateInfo imageInfo{};
imageInfo
.setImageType(imageType) //
.setExtent(extent)
.setMipLevels(mipLevels)
.setArrayLayers(arrayLayers)
.setFormat(format)
.setTiling(tiling)
.setInitialLayout(initialLayout)
.setUsage(usageFlags)
.setSamples(samples)
.setSharingMode(sharingMode)
.setFlags(flags);
uHandle = Device->createImageUnique(imageInfo);
auto memRequirements = Device->getImageMemoryRequirements(uHandle.get());
vk::MemoryAllocateInfo allocInfo{};
allocInfo.setAllocationSize(memRequirements.size);
allocInfo.setMemoryTypeIndex(Device->FindMemoryType(memRequirements.memoryTypeBits, properties));
uMemory = Device->allocateMemoryUnique(allocInfo);
Device->bindImageMemory(uHandle.get(), uMemory.get(), 0);
if (finalLayout != vk::ImageLayout::eUndefined) {
BlockingTransitionToLayout(initialLayout, finalLayout);
}
vk::ImageViewType viewType;
switch (imageType) {
case vk::ImageType::e2D:
if (flags & vk::ImageCreateFlagBits::eCubeCompatible) {
viewType = arrayLayers == 6u ? vk::ImageViewType::eCube : vk::ImageViewType::eCubeArray;
}
else {
if (flags & vk::ImageCreateFlagBits::e2DArrayCompatible) {
viewType = vk::ImageViewType::e2DArray;
}
else {
viewType = vk::ImageViewType::e2D;
}
}
break;
case vk::ImageType::e1D:
case vk::ImageType::e3D: LOG_ABORT("unhandled image type"); break;
}
vk::ImageViewCreateInfo viewInfo{};
viewInfo
.setImage(uHandle.get()) //
.setViewType(viewType)
.setFormat(format);
viewInfo.subresourceRange
.setAspectMask(aspectMask) // TODO: this aspect mask wont work for depth stencil attachment
.setBaseMipLevel(0u)
.setLevelCount(mipLevels)
.setBaseArrayLayer(0u)
.setLayerCount(arrayLayers);
uView = Device->createImageViewUnique(viewInfo);
DEBUG_NAME(uHandle, name);
DEBUG_NAME(uView, name + ".view");
DEBUG_NAME(uMemory, name + ".mem");
}
void RImage::CopyBufferToImage(const RBuffer& buffer)
{
ScopedOneTimeSubmitCmdBuffer<Dma> cmdBuffer{};
vk::BufferImageCopy region{};
region
.setBufferOffset(0u) //
.setBufferRowLength(0u)
.setBufferImageHeight(0u)
.setImageOffset({ 0, 0, 0 })
.setImageExtent(extent);
region.imageSubresource
.setAspectMask(aspectMask) //
.setMipLevel(0u)
.setBaseArrayLayer(0u)
.setLayerCount(arrayLayers);
cmdBuffer.copyBufferToImage(buffer.handle(), uHandle.get(), vk::ImageLayout::eTransferDstOptimal, region);
}
void RImage::CopyImageToBuffer(const RBuffer& buffer)
{
ScopedOneTimeSubmitCmdBuffer<Dma> cmdBuffer{};
vk::BufferImageCopy region{};
region
.setBufferOffset(0u) //
.setBufferRowLength(0u)
.setBufferImageHeight(0u)
.setImageOffset({ 0, 0, 0 })
.setImageExtent(extent);
region.imageSubresource
.setAspectMask(aspectMask) //
.setMipLevel(0u)
.setBaseArrayLayer(0u)
.setLayerCount(arrayLayers);
cmdBuffer.copyImageToBuffer(uHandle.get(), vk::ImageLayout::eTransferSrcOptimal, buffer.handle(), region);
}
vk::ImageMemoryBarrier RImage::CreateTransitionBarrier(
vk::ImageLayout oldLayout, vk::ImageLayout newLayout, uint32 baseMipLevel, uint32 baseArrayLevel) const
{
vk::ImageMemoryBarrier barrier{};
barrier
.setOldLayout(oldLayout) //
.setNewLayout(newLayout)
.setImage(uHandle.get())
.setSrcAccessMask(rvk::accessFlagsForImageLayout(oldLayout))
.setDstAccessMask(rvk::accessFlagsForImageLayout(newLayout))
.setSrcQueueFamilyIndex(VK_QUEUE_FAMILY_IGNORED)
.setDstQueueFamilyIndex(VK_QUEUE_FAMILY_IGNORED);
barrier.subresourceRange
.setAspectMask(aspectMask) //
.setBaseMipLevel(baseMipLevel)
.setLevelCount(VK_REMAINING_MIP_LEVELS)
.setBaseArrayLayer(baseArrayLevel)
.setLayerCount(VK_REMAINING_ARRAY_LAYERS);
return barrier;
}
void RImage::BlockingTransitionToLayout(vk::ImageLayout oldLayout, vk::ImageLayout newLayout)
{
BlockingTransitionToLayout(
oldLayout, newLayout, rvk::pipelineStageForLayout(oldLayout), rvk::pipelineStageForLayout(newLayout));
}
void RImage::BlockingTransitionToLayout(vk::ImageLayout oldLayout, vk::ImageLayout newLayout,
vk::PipelineStageFlags sourceStage, vk::PipelineStageFlags destStage)
{
ScopedOneTimeSubmitCmdBuffer<Graphics> cmdBuffer{};
auto barrier = CreateTransitionBarrier(oldLayout, newLayout);
cmdBuffer.pipelineBarrier(sourceStage, destStage, vk::DependencyFlags{ 0 }, {}, {}, barrier);
}
void RImage::TransitionToLayout(vk::CommandBuffer cmdBuffer, vk::ImageLayout oldLayout, vk::ImageLayout newLayout) const
{
TransitionToLayout(cmdBuffer, oldLayout, newLayout, rvk::pipelineStageForLayout(oldLayout),
rvk::pipelineStageForLayout(newLayout));
}
void RImage::TransitionToLayout(vk::CommandBuffer cmdBuffer, vk::ImageLayout oldLayout, vk::ImageLayout newLayout,
vk::PipelineStageFlags sourceStage, vk::PipelineStageFlags destStage) const
{
auto barrier = CreateTransitionBarrier(oldLayout, newLayout);
cmdBuffer.pipelineBarrier(sourceStage, destStage, vk::DependencyFlags{ 0 }, {}, {}, barrier);
}
void RImage::GenerateMipmapsAndTransitionEach(vk::ImageLayout oldLayout, vk::ImageLayout newLayout)
{
ScopedOneTimeSubmitCmdBuffer<Graphics> cmdBuffer{};
// Check if image format supports linear blitting
vk::FormatProperties formatProperties = Device->pd.getFormatProperties(format);
// CHECK: from https://vulkan-tutorial.com/Generating_Mipmaps
// There are two alternatives in this case. You could implement a function that searches common texture
// image formats for one that does support linear blitting, or you could implement the mipmap generation in
// software with a library like stb_image_resize. Each mip level can then be loaded into the image in the same
// way that you loaded the original image. It should be noted that it is uncommon in practice to generate the
// mipmap levels at runtime anyway. Usually they are pregenerated and stored in the texture file alongside the
// base level to improve loading speed.
CLOG_ABORT(!(formatProperties.optimalTilingFeatures & vk::FormatFeatureFlagBits::eSampledImageFilterLinear),
"Image format does not support linear blitting!");
for (uint32 layer = 0u; layer < arrayLayers; ++layer) {
vk::ImageMemoryBarrier barrier{};
barrier
.setImage(uHandle.get()) //
.setSrcQueueFamilyIndex(VK_QUEUE_FAMILY_IGNORED)
.setDstQueueFamilyIndex(VK_QUEUE_FAMILY_IGNORED);
barrier.subresourceRange
.setAspectMask(aspectMask) //
.setBaseArrayLayer(layer)
.setLayerCount(1u)
.setLevelCount(1u);
int32 mipWidth = extent.width;
int32 mipHeight = extent.height;
auto intermediateLayout = vk::ImageLayout::eTransferSrcOptimal;
auto oldStage = rvk::pipelineStageForLayout(oldLayout);
auto intermediateStage = rvk::pipelineStageForLayout(intermediateLayout);
auto finalStage = rvk::pipelineStageForLayout(newLayout);
for (uint32 i = 1; i < mipLevels; i++) {
barrier.subresourceRange.setBaseMipLevel(i - 1);
barrier
.setOldLayout(oldLayout) //
.setNewLayout(intermediateLayout)
.setSrcAccessMask(rvk::accessFlagsForImageLayout(oldLayout))
.setDstAccessMask(rvk::accessFlagsForImageLayout(intermediateLayout));
// old to intermediate
cmdBuffer.pipelineBarrier(oldStage, intermediateStage, vk::DependencyFlags{ 0 }, {}, {}, barrier);
vk::ImageBlit blit{};
blit.srcOffsets[0] = vk::Offset3D{ 0, 0, 0 };
blit.srcOffsets[1] = vk::Offset3D{ mipWidth, mipHeight, 1 };
blit.srcSubresource
.setAspectMask(aspectMask) //
.setMipLevel(i - 1)
.setBaseArrayLayer(layer)
.setLayerCount(1u);
blit.dstOffsets[0] = vk::Offset3D{ 0, 0, 0 };
blit.dstOffsets[1] = vk::Offset3D{ mipWidth > 1 ? mipWidth / 2 : 1, mipHeight > 1 ? mipHeight / 2 : 1, 1 };
blit.dstSubresource
.setAspectMask(aspectMask) //
.setMipLevel(i)
.setBaseArrayLayer(layer)
.setLayerCount(1u);
cmdBuffer.blitImage(uHandle.get(), intermediateLayout, uHandle.get(), oldLayout, blit, vk::Filter::eLinear);
barrier
.setOldLayout(intermediateLayout) //
.setNewLayout(newLayout)
.setSrcAccessMask(rvk::accessFlagsForImageLayout(intermediateLayout))
.setDstAccessMask(rvk::accessFlagsForImageLayout(newLayout));
// intermediate to final
cmdBuffer.pipelineBarrier(intermediateStage, finalStage, vk::DependencyFlags{ 0 }, {}, {}, barrier);
if (mipWidth > 1)
mipWidth /= 2;
if (mipHeight > 1)
mipHeight /= 2;
}
// barrier for the final mip level
barrier.subresourceRange.setBaseMipLevel(mipLevels - 1);
barrier
.setOldLayout(oldLayout) //
.setNewLayout(newLayout)
.setSrcAccessMask(rvk::accessFlagsForImageLayout(oldLayout))
.setDstAccessMask(rvk::accessFlagsForImageLayout(newLayout));
cmdBuffer.pipelineBarrier(oldStage, finalStage, vk::DependencyFlags{ 0 }, {}, {}, barrier);
}
}
vk::DescriptorSet RImage::GetDebugDescriptor()
{
if (debugDescriptorSet) {
return *debugDescriptorSet;
}
debugDescriptorSet = DescriptorLayouts->_1imageSamplerFragmentOnly.AllocDescriptorSet();
rvk::writeDescriptorImages(*debugDescriptorSet, 0u, { uView.get() });
return *debugDescriptorSet;
}
void RCubemap::CopyBuffer(const RBuffer& buffer, size_t pixelSize, uint32 mipCount)
{
ScopedOneTimeSubmitCmdBuffer<Dma> cmdBuffer{};
std::vector<vk::BufferImageCopy> regions;
size_t offset{ 0llu };
for (uint32 mip = 0u; mip < mipCount; ++mip) {
uint32 res = extent.width / static_cast<uint32>(std::pow(2, mip));
vk::BufferImageCopy region{};
region
.setBufferOffset(offset) //
.setBufferRowLength(0u)
.setBufferImageHeight(0u)
.setImageOffset({ 0, 0, 0 })
.setImageExtent({ res, res, 1u });
region.imageSubresource
.setAspectMask(aspectMask) //
.setMipLevel(mip)
.setBaseArrayLayer(0u)
.setLayerCount(6u);
regions.push_back(region);
offset += res * res * pixelSize * 6llu;
}
cmdBuffer.copyBufferToImage(buffer.handle(), uHandle.get(), vk::ImageLayout::eTransferDstOptimal, regions);
}
std::vector<vk::UniqueImageView> RCubemap::GetFaceViews(uint32 atMip) const
{
std::vector<vk::UniqueImageView> faceViews;
vk::ImageViewCreateInfo viewInfo{};
viewInfo
.setImage(uHandle.get()) //
.setViewType(vk::ImageViewType::e2D)
.setFormat(format);
for (uint32 i = 0u; i < arrayLayers; ++i) {
viewInfo.subresourceRange
.setAspectMask(aspectMask) //
.setBaseMipLevel(atMip)
.setLevelCount(1u)
.setBaseArrayLayer(i)
.setLayerCount(1u);
faceViews.emplace_back(Device->createImageViewUnique(viewInfo));
}
return faceViews;
}
std::vector<vk::UniqueImageView> RCubemap::GetMipViews() const
{
std::vector<vk::UniqueImageView> mipViews;
vk::ImageViewCreateInfo viewInfo{};
viewInfo
.setImage(uHandle.get()) //
.setViewType(vk::ImageViewType::eCube)
.setFormat(format);
for (uint32 i = 0u; i < mipLevels; ++i) {
viewInfo.subresourceRange
.setAspectMask(aspectMask) //
.setBaseMipLevel(i)
.setLevelCount(1u)
.setBaseArrayLayer(0u)
.setLayerCount(VK_REMAINING_ARRAY_LAYERS);
mipViews.emplace_back(Device->createImageViewUnique(viewInfo));
}
return mipViews;
}
vk::UniqueImageView RCubemap::GetMipView(uint32 atMip) const
{
vk::ImageViewCreateInfo viewInfo{};
viewInfo
.setImage(uHandle.get()) //
.setViewType(vk::ImageViewType::eCube)
.setFormat(format);
viewInfo.subresourceRange
.setAspectMask(aspectMask) //
.setBaseMipLevel(atMip)
.setLevelCount(1u)
.setBaseArrayLayer(0u)
.setLayerCount(6u);
return Device->createImageViewUnique(viewInfo);
}
vk::UniqueImageView RCubemap::GetFaceArrayView(uint32 atMip) const
{
vk::UniqueImageView faceArrayView;
vk::ImageViewCreateInfo viewInfo{};
viewInfo
.setImage(uHandle.get()) //
.setViewType(vk::ImageViewType::e2DArray)
.setFormat(format);
viewInfo.subresourceRange
.setAspectMask(aspectMask) //
.setBaseMipLevel(atMip)
.setLevelCount(1u)
.setBaseArrayLayer(0u)
.setLayerCount(arrayLayers);
faceArrayView = Device->createImageViewUnique(viewInfo);
return faceArrayView;
}
std::vector<vk::UniqueImageView> RCubemapArray::GetFaceViews(uint32 atArrayIndex, uint32 atMip) const
{
std::vector<vk::UniqueImageView> faceViews;
vk::ImageViewCreateInfo viewInfo{};
viewInfo
.setImage(uHandle.get()) //
.setViewType(vk::ImageViewType::e2D)
.setFormat(format);
for (uint32 i = 0u; i < 6u; ++i) {
viewInfo.subresourceRange
.setAspectMask(aspectMask) //
.setBaseMipLevel(atMip)
.setLevelCount(1u)
.setBaseArrayLayer(atArrayIndex * 6u + i)
.setLayerCount(1u);
faceViews.emplace_back(Device->createImageViewUnique(viewInfo));
}
return faceViews;
}
vk::UniqueImageView RCubemapArray::GetCubemapView(uint32 atArrayIndex, uint32 atMip) const
{
vk::ImageViewCreateInfo viewInfo{};
viewInfo
.setImage(uHandle.get()) //
.setViewType(vk::ImageViewType::eCube)
.setFormat(format);
viewInfo.subresourceRange
.setAspectMask(aspectMask) //
.setBaseMipLevel(atMip)
.setLevelCount(1u)
.setBaseArrayLayer(atArrayIndex * 6u)
.setLayerCount(6u);
return Device->createImageViewUnique(viewInfo);
}
} // namespace vl
|
Computer software applications allow users to create a variety of documents to assist them in work, education, and leisure. For example, popular word processing applications allow users to create letters, articles, books, memoranda, and the like. Spreadsheet applications allow users to store, manipulate, print, and display a variety of alphanumeric data. Such applications have a number of well-known strengths including rich editing, formatting, printing, calculation, and on-line and off-line editing.
A typical software application, such as a word processing application, may have tens or even hundreds of style and formatting properties that may be applied to text and/or data. For example, a word processing application may support many combinations of styles and formatting that may be selected by a user. For example, a style called “Header 1” may be selected by a user for automatically applying a certain font size, display and print justification, tab settings and the like. In addition to numerous settings that may be provided by application developers, many applications also allow a user to develop custom style and formatting settings.
According to prior systems, all of the possible style and formatting settings available to the user, whether used or not, are instantiated when the user saves the document. Unfortunately, when the user opens the document with a newer or different version of the software application that has the same or similar style settings but with different individual properties for the same or similar style settings, the user is stuck with the style or formatting settings instantiated for the document using the previous or creating application. Additionally, the software user may be allowed by his application to dictate that certain styles or formatting settings must be used or that certain styles or formatting settings may not be used with a particular document. In such cases, these styles or formatting settings are likewise instantiated upon document save which prevents the use of future changes to the styles or formatting settings allowed or disallowed by the user. Moreover, instantiation of all styles and formatting settings from a given software application causes the file size of a saved document to be excessive.
It is with respect to these and other considerations that the present invention has been made.
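The file-size concern raised above can be illustrated with a toy model (hypothetical names and format, not taken from the patent): serialize only the styles a document actually references, rather than instantiating the whole style table on save.

```python
import json

# a hypothetical application style table with several built-in styles
STYLE_TABLE = {
    "Header 1": {"font_size": 16, "bold": True},
    "Header 2": {"font_size": 14, "bold": True},
    "Normal": {"font_size": 11, "bold": False},
    "Quote": {"font_size": 11, "italic": True},
}

def save_all_styles(paragraphs):
    """Prior-art behavior: every style is instantiated on save."""
    return json.dumps({"styles": STYLE_TABLE, "body": paragraphs})

def save_used_styles(paragraphs):
    """Save only the styles the document actually references."""
    used = {p["style"] for p in paragraphs}
    styles = {name: fmt for name, fmt in STYLE_TABLE.items() if name in used}
    return json.dumps({"styles": styles, "body": paragraphs})

doc = [{"style": "Header 1", "text": "Title"},
       {"style": "Normal", "text": "Body text"}]
```

With only two of the four styles referenced, the second serialization is shorter, and a later application version remains free to supply its own definitions for the styles that were never written out.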
/*
* Copyright (c) 2018 simplity.org
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
package org.simplity.tp;
import org.simplity.adapter.DataAdapter;
import org.simplity.adapter.source.ContextDataSource;
import org.simplity.adapter.source.DataSource;
import org.simplity.adapter.target.ContextDataTarget;
import org.simplity.adapter.target.DataTarget;
import org.simplity.kernel.comp.ComponentManager;
import org.simplity.kernel.comp.ComponentType;
import org.simplity.kernel.comp.FieldMetaData;
import org.simplity.kernel.value.Value;
import org.simplity.service.ServiceContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * copy data across data objects, like JSON, POJO etc., based on a DataAdapter
 * specification
 *
 * @author simplity.org
 */
public class CopyData extends Action {
private static final Logger logger = LoggerFactory.getLogger(CopyData.class);
/**
* fully qualified name of adapter to be used for copying data
*/
@FieldMetaData(isRequired = true, isReferenceToComp = true, referredCompType = ComponentType.ADAPTER)
String adapterName;
/**
* name of the object/data sheet to copy data from. skip this if data from
* service context is to be copied
*/
String sourceObjectName;
/**
* name of target object. skip this if data is to be copied to the service
* context itself
*/
String targetObjectName;
/**
* fully qualified name of the class to be used to create the target object.
* Skip this if the object is already available in the context
*/
String targetClassName;
@Override
protected Value doAct(ServiceContext ctx) {
DataAdapter adapter = ComponentManager.getAdapter(this.adapterName);
DataSource source = null;
DataTarget target = null;
if (this.sourceObjectName != null) {
source = ctx.getDataSource(this.sourceObjectName);
} else {
source = new ContextDataSource(ctx);
}
if (this.targetObjectName != null) {
target = ctx.getDataTarget(this.targetObjectName, this.targetClassName);
} else {
target = new ContextDataTarget(ctx);
}
adapter.copy(source, target, ctx);
return Value.VALUE_TRUE;
}
@Override
public void getReady(int idx, Service service) {
super.getReady(idx, service);
if (this.sourceObjectName == null && this.targetObjectName == null) {
logger.warn("Going to copy data from context back into context. Are you sure that is what you want?");
}
}
}
|
Molecular Phenomic Approaches to Deconvolving the Systemic Effects of SARS-CoV-2 Infection and Post-acute COVID-19 Syndrome

SARS CoV-2 infection causes acute and frequently severe respiratory disease with associated multi-organ damage and systemic disturbances in many biochemical pathways. Metabolic phenotyping provides deep insights into the complex immunopathological problems that drive the resulting COVID-19 disease and is also a source of novel metrics for assessing patient recovery. A multiplatform metabolic phenotyping approach to studying the pathology and systemic metabolic sequelae of COVID-19 is considered here, together with a framework for assessing post-acute COVID-19 Syndrome (PACS), which is a major long-term health consequence for many patients. The sudden emergence of the disease presents a biological discovery challenge as we try to understand the pathological mechanisms of the disease and develop effective mitigation strategies. This requires technologies to measure objectively the extent and sub-phenotypes of the disease at the molecular level. Spectroscopic methods can reveal metabolic sub-phenotypes and new biomarkers that can be monitored during the acute disease phase and beyond. This approach is scalable and translatable to other pathologies and provides an exemplar strategy for the investigation of other emergent zoonotic diseases with complex immunological drivers, multi-system involvements and diverse persistent symptoms.

Global Challenges of the COVID-19 Pandemic and Post-acute COVID-19 Syndrome

The exceptionally rapid local and international spread of COVID-19 has posed a uniquely dynamic and interrelated set of medical and economic problems that have directly or indirectly affected the lives of most of the world's population. Global healthcare and disease burden "fall-outs" of COVID-19 will also have significant global societal impacts for many years to come.
Climate change, the increase in human population, and changes in geographical population distributions have increasingly brought humans and wild animals together in ways that enable and assist the transmission of novel zoonotic diseases to man. The SARS CoV-2 β-coronavirus is one of a series of emergent zoonotic threats, and it was inevitable that a spill-over into man would occur from one of these pathogens and provoke a medical disaster. The speed of SARS CoV-2 spread was facilitated by a combination of modern human social behaviours, the globalization of business practices, the commonality of long-distance travel, and high-density population packing that enhances the transmissibility of any infectious disease. SARS CoV-2 has proved to be a particularly dangerous enemy because of its mode of airborne transmission and infectivity from people with mild or non-obvious symptoms. The virus is also evolving, and multiple variants are driving new waves of the pandemic. The direct economic coupling of the disease pandemic has also changed the world and human societal relationships irreversibly, and this continues to raise international political tensions. The global propagation of this dangerous and poorly understood pathogen has challenged individuals, populations, and businesses, as well as scientists and doctors, in ways that were scarcely imaginable only a year or two ago. Within a few months of the first COVID-19 outbreaks, some populations were forced into living conditions more reminiscent of wartime. The rate of disease spread posed immediate challenges to medical capacities and capabilities in many countries, and there was a medical, social, and political need to explain what COVID-19 is, how it is transmitted, what its effects are, and how to prevent, detect, and treat it.
Many of these questions have now been at least partially answered due to the huge international research efforts involved, and we also know some of the major predisposing risk factors for severe disease, such as body mass index, background pulmonary disease, and diabetes. There are still major gaps in our mechanistic understanding of the disease process and the highly complex systemic dysfunction that leads to persistent symptoms and long-term damage associated with PACS. But COVID-19 is also a major social problem, and the ability to deploy new knowledge to effect clinical translation is being confounded by political indecision and population complacency, even in countries that have had success in managing patient numbers down through lockdowns and vaccinations. New SARS CoV-2 variants with higher receptor binding and higher infectivity, such as B.1.1.7 and more recently B.1.617.2, are also taking advantage of multiple gaps in our defences and collective poor behaviours and are driving new outbreaks. Paradoxically, some of the greatest advances in science and engineering occur in times of war, when there is a technical race to create new defences, weapons and strategies. In the present case of COVID-19, there is a global war against an unseen and adaptable enemy, the coronavirus, and that has resulted in major advances in medical technologies. These advances will be put to good use in future against other zoonotic viral enemies. The extraordinary scientific progress in understanding the disease, and the creation of new vaccines in record time (normally years of research and development compressed into months), has been remarkable, and this is a clear sign of hope for the implementation of practical disease mitigation strategies in the future.
The pandemic also revealed profound weaknesses in societal preparedness, political decision-making, and some antisocial human behaviours that helped the virus propagate, with multiple outbreak waves that cannot be managed by scientific methods alone, and those problems still pose significant threats to the worldwide control of the virus. We now face the emerging and largely unknown challenges of PACS, which may ultimately affect millions of people, and there is an urgent need to develop new tools for measuring, monitoring, and mitigating persistent clinical symptoms and systemic disease consequences of SARS CoV-2 infection.

The Acute and Chronic Faces of COVID-19

The acute effects and progression of COVID-19 have now been well documented in multiple populations. As a new type of β-coronavirus with largely airborne transmission, the most common severe and life-threatening effects are respiratory. Disease severity and respiratory failure are often exacerbated by underlying chronic disease, such as diabetes, and these are more likely to be present in elderly patients, but young people can also be severely affected. However, other systemic effects of COVID-19 have also appeared and form a common pattern of the disease, which can be present even in the absence of severe respiratory symptoms. The systemic immunopathological responses to SARS CoV-2 infection themselves affect multiple organs, and some of these are similar to the effects of related coronaviruses that caused Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS). COVID-19, like MERS and SARS, also causes significant multi-system long-term effects, including liver dysfunction, kidney dysfunction, cardiovascular disease, anosmia and neurological problems, and a host of other rarer complications.
Many COVID-19 patients do not readily recover from these ailments, and a high proportion of patients are still multi-symptomatic with PACS months, and possibly years, after the acute phase, with reports of long-term pulmonary, hematologic, cardiovascular, renal, neuropsychiatric, endocrine, gastrointestinal, hepatobiliary, and dermatological problems in PACS patients.

Why Molecular Phenomics is a Powerful Tool for Understanding COVID-19?

Phenomics is the systematic study of the continuum of gene-environment interactions that occur throughout life, and the measurement of the emergent physical and chemical properties that result from those interactions and that define individual and population phenotypes in health and disease. These same combinations of interactions give rise to a variety of disease risks in individuals and populations and help define the phenomic properties of disease states that can be applied to diagnose and stratify patients (Nicholson 2006). In molecular phenomics, we are concerned with the chemical and biochemical signatures (metabolites, proteins, transcripts, etc.) of cells and biofluids, and how these change in characteristic ways during the onset, development, and recovery from disease. This type of information is essential to the molecular characterization of emergent diseases such as COVID-19, and to understanding their systemic effects. Such molecular characterization requires the use of comprehensive multiparametric measurements, which in the case of blood chemistry, proteins, and metabolites is the domain of mass spectrometry and NMR spectroscopy; these form the main basis of this discussion. Metabolic phenotyping, or metabotyping, is a subclass of metabolic research that is closely aligned with metabonomics and metabolomics, in which multiple metabolites representing many pathways and chemical classes are measured to give a global representation of ongoing systemic disease processes or the patho-physiological status of the individual.
Understanding any emergent disease is both a clinical and molecular discovery process, and so we seek to measure many biochemical components in multiple pathways that relate to the clinical signs and symptoms and the whole-body responses to the disease development through time. Metabolic frameworks for understanding such processes have been described previously. The emphasis is on obtaining significant metabolic pathway coverage, and hence the application of multiple techniques in parallel to explore many biochemical features of the disease in individual patients. Ideally, the studies are performed wherever possible on samples from the same patients, because this enables the use of statistical spectroscopy and other data fusion tools to construct integrated multivariate models describing the integrated systemic responses to disease. This also paves the way to the discovery of new biomarkers relating to diagnosis and prognosis of the disease, as well as to monitoring progression and recovery processes.

The Metabolic Continuum from Health to COVID-19 Disease Development

The initiation of a disease process in a healthy individual involves either a series of complex gene interactions over long periods of time or an initial acute event, such as an infection, that rapidly disrupts homeostasis and changes the function and phenotype of the organism. The degree (severity) and rates of such molecular phenotypic changes are also dependent on the history of the individual's gene-environment interactions, as well as age, gender, ethnicity, nutrition, immune status and the presence of confounding disease. This can be thought of as an extension of the patient journey concept, in which the metabolic trajectory of individuals is measured and monitored through time to augment understanding of the systemic responses to therapy (Nicholson et al., 2012). A summary of the extended concept of the metabolic journey from health to disease and recovery is provided in Fig. 1.
To understand the metabolic starting properties of a population group, there should be some body of reference data and materials to define "normality" for the age, gender, and ethnicity distribution of the population. Such information can often be obtained from appropriate screening of epidemiological samples, and this also serves as a reference framework for assessing functional recovery from a disease process by comparing the patients in the post-acute disease phase with data from their own relevant pre-disease population group.

Fig. 1 A framework for understanding the natural history of an emergent zoonotic disease using metabotyping: schematic illustration of the collective COVID-19 patient journey from health to disease using a metabolic systems framework to assess disease progression and recovery in relation to associated studies that enable model cross-validation through published literature and sequential analysis of multiple disease cohort samples. The population phenomics box illustrates the collection of different metabolic signatures from population subgroups, some of which may have different disease risks.

The patient journey phenotyping process involves the discovery of new metrics that help quantify and map disease progression in a metabolic framework and can lead to new diagnostic and prognostic biomarker discoveries. The aim of this approach is to cross-model and cross-validate biomarkers derived from the molecular study of multiple patient journeys from many sources. This process results in new molecular insights into the systemic disease process and the possibility of development of metrics for studying functional recovery from the disease. The approach is illustrated for COVID-19 in Fig. 1 but, in principle, the concept is scalable and translatable to any emergent disease, especially those with multiple systemic involvements such as COVID-19.

What can Immuno-metabolic Phenotyping Teach us About COVID-19?
Metabolic phenotyping provides comprehensive, broad, systems-level information that can be directly related to pathway and organ dysfunction, so it is an intrinsically translational -omics science. COVID-19 manifests itself as a multi-organ and multi-system disease with a complex presentation of symptoms, target organs and biochemical perturbations. It is essentially a systemic metabolic disease driven by the heightened activities of the immune system responses to the viral presence. Profound effects are seen in lipid and amino acid metabolism and in other small molecules, as well as shifts in lipoprotein profiles, these changes reflecting liver, cardiovascular, diabetic, muscle-wasting, and neuropathological effects. There is also a plethora of other effects, including anosmia, Guillain-Barré syndrome, and chronic fatigue, and most of those symptoms can be persistent in patients with PACS. We have referred previously to COVID-19 as a mosaic disease that we are attempting to understand from a fragmented pattern of signs, symptoms, and chemical pathologies to enable improved patient management. The complex shifts in metabolism observed in COVID-19 are not uniformly expressed, nor are they necessarily proportional to disease severity as simply classified by respiratory symptoms, although many changes are proportional to the lung damage, which in its severe form is primarily immunologically driven. Indeed, the early immune responses are major determinants of individual clinical outcomes. Recent immunological findings suggest that there might be limitations in the clinical value of early-stage severity prediction, because early CD8+ bystander cytotoxic T cell responses are responsible for the mitigation of severe lung injury, and such patients have only mild symptoms.
Patients that fail to show a strong CD8+ response appear to progress rapidly to severe respiratory disease, their disease trajectory having already been set by the time of initial presentation at the clinic. Given the narrow time windows involved for such severity progressions, which can be as little as a few hours from initial hospital presentation to critical care hospitalisation, there is insufficient time to test and intervene to modify the acute disease outcome. Nevertheless, early metabolic and immunological responses may be predictive of PACS. This may provide a more clinically translatable and tractable metabolic modelling approach, as the evolution of PACS takes place over weeks to months rather than hours to days as in the acute disease phase; however, this is still untested.

Analytical Considerations, Windows on Metabolism, and Biochemical Findings in COVID-19 Research

Several powerful structural analytical and quantitative measurement tools are available to study typical patient samples that contain systemic metabolic information. The most notable of these are Nuclear Magnetic Resonance (NMR) spectroscopy and mass spectrometry (MS), and these can readily be applied to molecular phenotyping of biofluid samples. However, some early studies on the metabolic sequelae of COVID-19 were poorly sampled and performed with inappropriate sample preparation, partially driven by caution concerning the handling of samples that potentially contained live virus in analytical laboratories without appropriate microbiological physical containment capabilities. Methods involving sample heating (typically at 56 °C for 30 min) were employed to inactivate the virus.
Unfortunately, this process irrevocably distorts the molecular composition of blood plasma, leading to the loss of biomarker information via denaturation or chemical hydrolysis, or worse, the physical or chemical creation of false pseudomarkers that are products of the heat treatment process rather than the disease itself. The various spectroscopic "windows" into the disease process reveal different, but complementary, insights into the altered biochemistry and pathological processes. NMR spectroscopy of blood plasma has given new metabolic, glycoproteomic and lipoproteomic insights into the acute COVID-19 disease process, and the relationships between these parameters and cytokines and chemokines indicate the presence of an activated immune-metabolic axis. Even simple metabolic measurements can be revealing; thus, in severely affected patients, the plasma lactate-to-pyruvate ratio can be high, reflecting impaired peripheral blood oxygenation and poor pulmonary function. More complex measurements enable deeper understanding of immune-metabolic interactions that underpin the disease. Proton NMR methods allow simultaneous observation of metabolites, lipoproteins and glycoproteins (Nicholson et al., 2012). Thus, acute-phase reactive proteins such as α-1-acid glycoprotein (GlycA) are highly elevated in plasma in active COVID-19 disease, and there is also a marked reduction in HDL-related species, e.g., phospholipids, with a significant rise in many VLDL and LDL components. In particular, the Apolipoprotein B100/Apolipoprotein A1 ratio (ABA1) is increased in active COVID-19 disease, and ABA1 values have previously been associated with increased cardiovascular and atherosclerotic risks. Mass spectrometry (MS) is also contributing significantly to the understanding of perturbed COVID-19 biochemistry, and MS metabolic windows give highly complementary metabolic information to the NMR windows.
Thus, disturbance of the tryptophan-kynurenine pathway (as measured by mass spectrometry) is also associated with atherogenic risk. Tryptophan/kynurenine pathway disruption is a feature of many acute inflammatory conditions (Chen and Guillemin 2009), all via immunologically driven cellular processes involving stimulation by interferon-γ, TNF-α and other cytokines and chemokines. The disturbances to this pathway in COVID-19, as measured by MS of blood plasma, are profound, with elevated quinolinic acid, 3-hydroxykynurenine and kynurenine, together with reduced tryptophan, providing an additional atherogenic driver over and above the atherogenic lipoprotein profile, particularly as reflected in the plasma ABA1 ratio. The plasma tryptophan-kynurenine ratio is high during active COVID-19, and it is also elevated in a variety of neurological/neuro-immune disorders. Indeed, SARS CoV-2 infection also causes multiple neurological side-effects and manifestations, which can be long-lasting (Varatharaj 2020), and it is likely that these relate to the tryptophan-kynurenine pathway as part of the multi-level systems involvement in the complex of acute COVID-19 pathologies. Thus, individual metabolic pathway abnormalities (such as tryptophan-kynurenine) contribute to the development of disparate pathologies, such as the cardiovascular and neurological effects seen in COVID-19. Mass spectrometry is also contributing to understanding the systemic effects of COVID-19 on lipid metabolism. For instance, there are major shifts in multiple lipidic pathways and concentrations, but these lipidomic signatures are still poorly understood from a mechanistic viewpoint and are the subject of intense study.
A recent untargeted metabolic and lipidomic study uncovered multiple potential diagnostic and mechanistic biomarkers; phosphatidylcholines, phosphatidylethanolamines, triglycerides, free fatty acids (especially arachidonic acid and oleic acid), porphyrins, and aromatic amino acids were identified as being diagnostic. Pathway enrichment analysis revealed several candidate mechanistic possibilities, but interpretation of such data is constrained by the breadth of the pathway input information, which equates to the number and diversity of the metabolites measured analytically. So, the multiple-window approach is necessary to examine interrelated pathological drivers from multiple compartments. Ultimately, untargeted analysis requires quantitative cross-validation, using isotope dilution tandem mass spectrometry for instance, to establish robust quantitative biomarker ranges that are required for real-world clinical deployment. Furthermore, many of the observed cellular and systemic metabolic disruptions are modulated by cytokine activities that need to be measured and co-modelled to gain deeper mechanistic insights. Novel NMR spectroscopic methods utilising combined relaxation and diffusion editing have identified new and highly diagnostic molecular biomarkers in specific molecular compartments that have specific motional properties. These methods can be used to distinguish COVID-19 from other mild respiratory diseases with high fidelity, based on immune-metabolic signatures of supramolecular lipoprotein and glycoprotein complexes. This work led to the discovery of a novel class of supramolecular structure composite (SPC) NMR signal markers, which are characteristically reduced in COVID-19. These signals originate from phospholipids, especially lysophosphatidylcholines, present mainly in lipoprotein HDL sub-compartments or bound to glycoproteins.
The immunopathological significance of these signals still remains to be discovered, but the SPC/GlycA ratio is highly discriminating between SARS CoV-2-positive patients and others with mild respiratory disease awaiting diagnosis. There has been a rapid proliferation of COVID-19-related metabolic literature from many laboratories using a variety of analytical instrumentation and different patient cohorts from different genetic backgrounds, with varying severities and time sampling points. Harmonisation of technology and approach in phenomic analytical chemistry has been a long-term objective in the metabolic phenotyping field. In the case of COVID-19 metabolic studies, standardization and cross-validation of sample preparation methods have been investigated and recommendations made to ensure robust metabolic data collection that enables interlaboratory comparisons. In the case of NMR spectroscopy, protocol harmonization already exists for in vitro diagnostic applications such as plasma metabolites and lipoproteins, and this has led to highly complementary analytical data being generated in multiple laboratories studying COVID-19 round the world, which gives confidence that these biomarker signatures of the disease are accurate and translatable across populations.

Systemic Post-acute COVID-19 Syndrome, Phenoreversion, and Patient Recovery

It is notable that, although the actual disease presentation and the clinical severity classifications are dominated by the pulmonary pathology, respiratory symptoms are just "the tip of the iceberg" in terms of the underlying biochemical and immunological dysfunction caused by SARS CoV-2 infection. From a clinical viewpoint, the systemic nature of COVID-19 needs to be considered carefully with respect to both the patient severity classification and what the different systemic dysfunctional expression means in relation to physical patient assessment, management and long-term therapy.
The deep metabolic perturbations caused by the virus are also relevant for our understanding of the COVID-19 "recovery" process and the development of new metrics for assessment of PACS at the personalized level. An individual can only be thought of as "fully recovered" when their biochemical profiles are substantially normalized with respect to the earlier systemic perturbations seen in the acute phase. This process of normalization is effectively the opposite of disease phenoconversion, and we refer to this process as phenoreversion; this can be measured metabolically. Phenoreversion appears to be highly variable between individuals, ranging from complete or full recovery, to partial recovery, to no recovery, with residual immunologically driven long-term biochemical effects. It was also notable that the non-hospitalised, mildly affected patients at 3-month follow-up also showed some new phenotypic changes not observed in the controls or the acute-phase patients. This included elevated plasma 3-indole acetic acid, a microbial product of tryptophan metabolism, indicating a possible microbiome activity shift, which would be unsurprising given the complex immunological interactions at all stages of the disease. The significance of this observation remains to be determined. Phenoreversion appears to be a complex and variable process. Thus, the individual metabolic parameters used as phenoreversion metrics can be substantially reversed or not reversed within the whole post-acute-phase population cohort, and it is inevitable that some of these changes will impact on long-term disease risks for the affected individuals. Even mildly symptomatic, non-hospitalized patients can show multiple symptoms months after the acute phase of the disease, including chronic fatigue, joint pain, anosmia and a multitude of neurological symptoms. There are systematic metabolic differences between asymptomatic and symptomatic post-COVID-19 patients.
Given the number of affected individuals worldwide (currently approaching 170 million), there is the possibility that the long-term effects of COVID-19 on individuals and populations may be as great a healthcare and economic burden as the acute phase of the disease witnessed so far. Thus, it is essential to perform large-scale metabolic and clinical follow-up studies in affected groups to assess the prevalence of PACS and to assess individuals for metabolic effects that will help in the management of patients and the possible mitigation of the long-term symptoms. Despite the rapid expansion of knowledge about the virus, there is still much unknown about the COVID-19 disease and PACS processes in different populations, and with the steady evolution of new SARS CoV-2 variants we can also expect an expanded repertoire of biological properties of the virus, which may present new medical challenges. An important future determinant of the pandemic outcomes will be the ability and agility to adapt healthcare policy to changes in the virus structure and biological properties. Metabolic phenotyping has previously been shown to be exceptionally effective at measuring and monitoring systemic pathological processes in multiple disease states, including COVID-19, and so should become a vital tool for the molecular investigations and large-scale screening that are now ongoing round the world as part of the ongoing fight against the disease in "The COVID-19 War".

Consent to Publication Not applicable.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
// schemas returns a list of schema types in the annotations.
func (as Annotations) schemas() []string {
var schemas []string
for _, a := range as {
if strings.HasPrefix(a.Key, SchemaPrefix) {
schemas = append(schemas, a.Key[len(SchemaPrefix):])
}
}
return schemas
} |
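The `schemas` method above depends on definitions from the rest of its package. A self-contained sketch of how it behaves, with minimal stand-in definitions for `Annotation`, `Annotations`, and `SchemaPrefix` (assumed shapes, not the originals):

```go
package main

import (
	"fmt"
	"strings"
)

// Stand-in definitions so the schemas method can be run in isolation;
// the real types and SchemaPrefix value live elsewhere in the package.
const SchemaPrefix = "schema."

type Annotation struct {
	Key   string
	Value string
}

type Annotations []Annotation

// schemas returns a list of schema types in the annotations.
func (as Annotations) schemas() []string {
	var schemas []string
	for _, a := range as {
		if strings.HasPrefix(a.Key, SchemaPrefix) {
			// Strip the prefix, keeping only the schema type name.
			schemas = append(schemas, a.Key[len(SchemaPrefix):])
		}
	}
	return schemas
}

func main() {
	as := Annotations{
		{Key: "schema.user", Value: "v1"},
		{Key: "owner", Value: "alice"},
		{Key: "schema.order", Value: "v2"},
	}
	fmt.Println(as.schemas()) // [user order]
}
```

Keys without the prefix are simply skipped, so the result preserves the input order of the matching annotations.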
Gedrics: the next generation of icons

Using today's combination of standard point-and-click user-interface elements for pen-based applications is a decision that implies that the pen is nothing more than a derivative of the mouse. This assumption is not necessarily correct. In order to be able to design more adequate interaction styles for pens, this paper introduces a new kind of user interface element: the gedric. Gedrics are gesture-driven icons, a combination of icons, pull-down menus and gestures. They provide a very fast, easy-to-learn, and easy-to-use interaction style for future pen interfaces. This paper describes and discusses the concept and implementation of gedrics. |
import { combineLatest, interval, of } from "rxjs";
import { scan, map, startWith, switchMap } from "rxjs/operators";
enum Color {
initial = "white",
final = "green",
}
if (typeof performance === "undefined" && global) {
(global as any).performance = {
now() {
      return Date.now(); // performance.now() must return a number, not a Date object
},
};
}
const getRandomNumber = (min: number = 0, max: number = 255) =>
Math.random() * (max - min) + min;
const getRandomChar = () => String.fromCharCode(getRandomNumber());
const getTime = (): number => {
return performance ? performance.now() : Date.now();
};
const createChar$ = (maxLifeSpan: number) =>
of({
char: getRandomChar(),
color: Color.initial,
opacity: getRandomNumber(0.85, 1),
changeChance: getRandomNumber(0.05, 0.3),
createdAt: getTime(),
maxLifeSpan,
});
const matrix$ = combineLatest([createChar$(5000), interval(1000)]).pipe(
scan(([ch, n]) => {
console.log(ch);
const char = {
...ch,
updatedAt: getTime(),
};
return [char, n];
}),
map(([char]) => char)
);
export { matrix$ };
|
Q:
How do I approach structuring when to read/write objects to db?
This is a problem I've faced in many languages over the years (js, go, c, f#, haskell, python, ...), but I haven't found a general approach to solving this yet. Learning about consistency models in databases made me realise all my approaches so far are failing in different ways, usually I end up with a ton of accidental race conditions.
The problem:
I'm fetching data from a db, and storing it in data structures in memory on a server. There's no limit on how many parts of a single codebase may hold those elements at a time, so we might hold two contradictory copies of a single datum in memory if we're not careful. Some updates require atomic changes in the db (login attempts, for example), while others (like user display name) may be stale, or cached for a while. I don't want to call the db on each read/write, because it's way too slow and expensive.
How do you usually approach this problem? What are the pros/cons of your approach? Are there some language-agnostic best practices in this area? Some way to be reasonably confident the code is free from data races?
Tagging as ORM even though it's not strictly about OOP-RDBMS mappings, because I don't know what to call the more general thing.
A:
You can take a look at Unit of work:
[A unit of work] maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems.
The pattern is described in Patterns of Enterprise Application Architecture
As for avoiding race conditions, you might use some kind of locking mechanism. There are a few strategies. This answer explains it nicely, in summary:
Optimistic locking: when you read a record you take some kind of version from it (e.g. the last modification timestamp). And when it's time to write the changes you check if the record has been modified or not, if so, you abort the transaction.
Pessimistic locking: before reading the record, you create a lock so that nobody else can read/write to it while you work with it. Once the changes are written or discarded, you release the lock.
Choosing one or the other depends on the use case, mainly on how many clients you expect to write to a given resource at the same time. If there are many, pessimistic locks would be the choice.
The answer talks about DB locks, but you can apply the idea to cache locks (redis, memcached).
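A minimal sketch of the optimistic-locking strategy described above, using a plain in-memory dict to stand in for the database; the class and method names are invented for illustration. Each read returns a version token, and a write succeeds only if the record is still at that version:

```python
class StaleWriteError(Exception):
    """Raised when a record changed between our read and our write."""


class VersionedStore:
    """Toy key-value store illustrating optimistic concurrency control."""

    def __init__(self):
        self._rows = {}  # key -> (version, value)

    def read(self, key):
        # Return (version, value); missing keys start at version 0.
        return self._rows.get(key, (0, None))

    def write(self, key, value, expected_version):
        current_version, _ = self._rows.get(key, (0, None))
        if current_version != expected_version:
            # Someone else wrote in the meantime: abort instead of clobbering.
            raise StaleWriteError(
                f"{key}: expected v{expected_version}, found v{current_version}"
            )
        self._rows[key] = (current_version + 1, value)
```

A caller that reads version `v` and later hits `StaleWriteError` must re-read and retry (or surface the conflict to the user) — that is the trade-off of the optimistic approach: reads are cheap and lock-free, but writers pay for conflicts.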
Unfortunately, AFAIK there's no simple way to "be free from race conditions". We have to think about each case/transaction. |
<filename>tests/pbraiders/database/adapter/adapterfactory.py
# coding=utf-8
"""Factory used to create a database mapper."""
from abc import ABC, abstractmethod
from pbraiders.database.adapter import AbstractAdapter
from pbraiders.database.processor.init.teardown import Log as DeleteLog
from pbraiders.database.processor.init.teardown import Data as DeleteData
from pbraiders.database.processor.init.setup import Users as InsertUsers
from pbraiders.database.processor.init.setup import Config as InsertConfig
from pbraiders.database.processor.init.setup import Contacts as InsertContacts
class AdapterFactory(ABC):
@abstractmethod
def create(self, config: dict) -> AbstractAdapter:
pass
def initialize(self, configDB: dict, configData: dict) -> AbstractAdapter:
pDB = self.create(configDB)
DeleteLog(pDB).execute()
DeleteData(pDB).execute()
InsertUsers(pDB).execute(configData['users'])
InsertConfig(pDB).execute(configData['config'])
InsertContacts(pDB).execute(configData['contacts'])
return pDB
|
<gh_stars>0
package com.pizza.auth;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.authentication.BadCredentialsException;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.authentication.dao.AbstractUserDetailsAuthenticationProvider;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
public class JwtAuthenticationProvider extends AbstractUserDetailsAuthenticationProvider {
@Autowired
private JwtUtil jwtUtil;
@Override
public boolean supports(Class<?> authentication) {
return (JwtAuthenticationToken.class.isAssignableFrom(authentication));
	}// Restricts this provider to JwtAuthenticationToken; other providers handle other token types.
@Override
protected void additionalAuthenticationChecks(UserDetails userDetails,
UsernamePasswordAuthenticationToken authentication) throws AuthenticationException {
		// No additional checks needed: the token signature and claims are validated in retrieveUser().
	}
@Override
protected UserDetails retrieveUser(String username, UsernamePasswordAuthenticationToken authentication)
throws AuthenticationException {
// remark: username is constant "NONE_PROVIDED"
JwtAuthenticationToken jwtAuthenticationToken = (JwtAuthenticationToken) authentication;
String token = jwtAuthenticationToken.getToken();
JwtUser parsedUser = jwtUtil.parseToken(token);
if (parsedUser == null) {
throw new BadCredentialsException("JWT token is not valid");
}
return parsedUser;
}
}
|
/// Determines whether target volume is pinned.
pub fn is_volume_pinned(volume: &Volume) -> bool {
if let Some(labels) = &volume.spec.labels {
volume_opts::decode_local_volume_flag(labels.get(volume_opts::LOCAL_VOLUME))
} else {
false
}
} |
/**
* Created by hujian on 2017/3/7.
 */
// Imports assume the Apache Storm 1.x package layout; FrequencySpout,
// CountEntryInstanceCreator, SpaceSavingModelUpdater and SpaceSavingIml are
// project-local classes assumed to be on the classpath.
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.trident.TridentState;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.testing.MemoryMapState;
import org.apache.storm.tuple.Fields;
public class SpaceSavingTopology {
public static void main(String[] args){
TridentTopology tridentTopology = new TridentTopology();
TridentState tridentState = tridentTopology
.newStream("spaceSaving",new FrequencySpout(10))
.each(new Fields("item","frequency","type"),new CountEntryInstanceCreator<Integer>(),
new Fields("instance"))
.partitionPersist(new MemoryMapState.Factory(),new Fields("instance"),
new SpaceSavingModelUpdater("ss",
new SpaceSavingIml<Integer>(100)));
LocalCluster localCluster = new LocalCluster();
localCluster.submitTopology("SS-topology",new Config(),tridentTopology.build());
}
} |
/*
* Copyright 2017-2020 original authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.micronaut.buffer.netty;
import io.micronaut.context.annotation.BootstrapContextCompatible;
import io.micronaut.core.annotation.Internal;
import io.micronaut.core.convert.ConversionService;
import io.micronaut.core.io.buffer.ByteBuffer;
import io.micronaut.core.io.buffer.ByteBufferFactory;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.Unpooled;
import javax.inject.Singleton;
/**
* A {@link ByteBufferFactory} implementation for Netty.
*
* @author <NAME>
* @since 1.0
*/
@Internal
@Singleton
@BootstrapContextCompatible
public class NettyByteBufferFactory implements ByteBufferFactory<ByteBufAllocator, ByteBuf> {
/**
* Default Netty ByteBuffer Factory.
*/
public static final NettyByteBufferFactory DEFAULT = new NettyByteBufferFactory();
private final ByteBufAllocator allocator;
static {
ConversionService.SHARED.addConverter(ByteBuf.class, ByteBuffer.class, DEFAULT::wrap);
ConversionService.SHARED.addConverter(ByteBuffer.class, ByteBuf.class, byteBuffer -> {
if (byteBuffer instanceof NettyByteBuffer) {
return (ByteBuf) byteBuffer.asNativeBuffer();
}
throw new IllegalArgumentException("Unconvertible buffer type " + byteBuffer);
});
}
/**
* Default constructor.
*/
public NettyByteBufferFactory() {
this.allocator = ByteBufAllocator.DEFAULT;
}
/**
* @param allocator The {@link ByteBufAllocator}
*/
public NettyByteBufferFactory(ByteBufAllocator allocator) {
this.allocator = allocator;
}
@Override
public ByteBufAllocator getNativeAllocator() {
return allocator;
}
@Override
public ByteBuffer<ByteBuf> buffer() {
return new NettyByteBuffer(allocator.buffer());
}
@Override
public ByteBuffer<ByteBuf> buffer(int initialCapacity) {
return new NettyByteBuffer(allocator.buffer(initialCapacity));
}
@Override
public ByteBuffer<ByteBuf> buffer(int initialCapacity, int maxCapacity) {
return new NettyByteBuffer(allocator.buffer(initialCapacity, maxCapacity));
}
@Override
public ByteBuffer<ByteBuf> copiedBuffer(byte[] bytes) {
return new NettyByteBuffer(Unpooled.copiedBuffer(bytes));
}
@Override
public ByteBuffer<ByteBuf> copiedBuffer(java.nio.ByteBuffer nioBuffer) {
return new NettyByteBuffer(Unpooled.copiedBuffer(nioBuffer));
}
@Override
public ByteBuffer<ByteBuf> wrap(ByteBuf existing) {
return new NettyByteBuffer(existing);
}
@Override
public ByteBuffer<ByteBuf> wrap(byte[] existing) {
return new NettyByteBuffer(Unpooled.wrappedBuffer(existing));
}
}
|
Cross-generational health promotion and engaging families. We were winding down at the end of a half-day consultative visit. Heidi (a pseudonym), a senior leader of a large corporation, was as inquisitive as she was astute and, as we parted, Heidi said, "The main thing I'm taking away from our meeting today is that one size doesn't fit all." "Bingo!" I thought. Then I said: "If it did, I expect you'd be after a new fashion soon enough anyway. As we discussed, let's not confuse diffusion of innovation with adoption. There would be no innovation were it not for you early adopters." It was a session that made me mindful of the competing demands we often face as health promotion practitioners. On the one hand, we bring specialized experience and knowledge to guide leaders in program planning but, on the other hand, complex organizations often present with questions for which there is scant empirical evidence for offering definitive answers. Those of us who serve many clients have all manner of different bosses. Heidi has two attributes common among great bosses: she is easy to work for but difficult to please. We had been discussing how to take her already successful program to the next level. Adding a family component had emerged as the next priority. Heidi has shrewd marketing instincts and had begun considering the steps she'd need to sell the initiative through the right channels. As a young, highly educated leader with considerable resources available to her, Heidi fits the profile of an early adopter, and I never doubted her authority or ability to move ahead decisively. She intuitively understood what social scientists have confirmed: diffusion of a new idea is more commonly accompanied by subjective impressions than by scientific data. Early adopters like Heidi are embracing family programming as the next phase of their health promotion journey even though they have few examples to build upon. 
At the same time, she is not yet fully pleased with the engagement level of many of her employee subgroups, particularly younger and ethnically diverse parts of her organization. The question for the employee health management field is not whether innovations in service of families and generational differences will be adopted; it is more a matter of how long it will take for such ideas to spread, be tested, and mature into offerings with proven effectiveness. |
The attack in San Bernardino is part of the growing self-radicalization trend that is all too familiar to us in the counterterrorism community.
Those with tendencies, for whatever the reason, to partake in some type of jihad can be influenced and inspired much more freely and frequently with the evolution of social media and the reach of the Internet.
As some of the more recent terrorist cases have demonstrated, operatives are acquiring their skills and motivation through the breadth of propaganda, instruction and messaging that is available online.
There is also a perception that ISIS is succeeding, which greatly exacerbates the radicalization phenomenon.
All of this, coupled with traditional influences such as travel abroad to connect with or receive training from terrorist organizations, creates the basis for today’s terrorist threat environment.
The digital age has helped terrorists do a better job of not only recruiting members but also inspiring sympathizers to carry out attacks on their own, thinking globally but acting locally.
It has also provided something new to emergent terrorists: a sense of empowerment to design a plan of attack that will help them ensure their own success and achieve potential martyrdom.
What is also emerging is a tougher threat for law enforcement and the intelligence community to counter, especially as self-radicalized individuals act more independently and anonymously.
They also have the added benefit of practicing tighter operational security now than in the past, including using encrypted technologies that hinder the ability to intercept and analyze information.
Unfortunately, the threat that exists today has the potential to pose significant danger for quite some time.
However, we can take steps to defeat it.
This starts by providing law enforcement and its partners with more effective technologies, tools and resources to uncover potential plots before they emerge.
The threat should also be countered by better informing and engaging the public to identify suspicious behavior and possible threats.
Lastly, it is clear we need to take additional steps to destroy ISIS — the messenger that has become so effective at targeting the growing radicalized population.
Michael O’Neil is a former commanding officer of the NYPD Counterterrorism Division and the CEO of MSA Security. |
This invention relates to computer error handling. More specifically, the invention relates to a firmware-based error handling mechanism to support the creation, storage and retrieval of customized and extendible error records in computer platforms.
Modern computers are designed to monitor their own performance and frequently to test themselves to assure that operations have been performed properly. When a fault occurs, a machine interrupt typically is issued, and the hardware and software attempt to locate and identify the error. Depending on the severity of the error, control programs may shut down the entire machine, may avoid use of the faulty component, or may simply record the fact that an error has occurred.
System error detection, containment and recovery are critical elements of highly reliable and fault tolerant computing environments. While error detection is primarily accomplished through hardware mechanisms, system software plays a greater role in the containment and recovery of errors. The degree to which overall error handling is effective in maintaining system integrity depends upon the level of coordination and cooperation between the system CPUs, platform hardware fabric, and system software. Vendors of such computer systems therefore have developed maintenance and diagnostic facilities as part of their computer platforms. When a system failure occurs, diagnostic software may attempt to determine the cause of the failure and may also attempt to store information describing the failure, so that subsequent efforts to resolve or eliminate the failure may benefit from the stored information.
In the prior art, software-based error handling mechanisms of the type described have traditionally resided in a portion of the computer operating system. As a result, operating system designers have been required to develop unique error handling subsystems for each supported computer platform. Because of this constraint, computer error handling capabilities have been relatively limited in the prior art. In particular, designers of multiple computing environments have been forced to isolate the error management functions of each component operating system. Similarly, designers of complex computer platforms having multiple domains and/or partitions have been forced to deploy separate and isolated error management systems. Additionally, Original Equipment Manufacturers (OEMs) have been restricted in their ability to develop customized computer platforms that provide enhanced maintenance capabilities.
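As a rough, hypothetical illustration of what a customized, extendible error record of the kind described above might look like — the field names, severity values, and OEM-extension mechanism below are all invented for this sketch, not taken from any actual firmware specification:

```python
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class ErrorRecord:
    """Hypothetical extendible error record (all fields illustrative)."""

    severity: str                 # e.g. "corrected", "recoverable", "fatal"
    component: str                # platform unit that reported the fault
    timestamp: float = field(default_factory=time.time)
    oem_sections: dict = field(default_factory=dict)  # vendor-specific extensions

    def to_bytes(self) -> bytes:
        # Serialize the record for storage in a persistent error log.
        return json.dumps(asdict(self)).encode("utf-8")

    @classmethod
    def from_bytes(cls, raw: bytes) -> "ErrorRecord":
        # Retrieve a previously stored record.
        return cls(**json.loads(raw.decode("utf-8")))
```

The `oem_sections` dict is where a vendor could attach platform-specific diagnostics without changing the record layout that generic parsers rely on — the kind of extensibility a firmware-level standard, as argued here, would have to provide.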
Accordingly, there is a need in the art for a unified and standardized approach to computer error handling at the firmware level, outside the traditional sphere of an operating system. Such an error handling mechanism would allow computer platform designers and operating system engineers to develop standard error management subsystems that make effective use of common interfaces and methods. A standard error handling mechanism would also permit OEMs to develop error parsers, utilities and enhanced maintenance diagnostics that do not depend on the specific features of any particular operating system. |
package com.github.ruananswer.testUtility;
import java.io.File;
import java.io.FilenameFilter;
/**
* Created by ruan on 16-4-18.
*/
public class StlFilenameFilter implements FilenameFilter {
private String type;
public StlFilenameFilter(String regex) {
this.type = regex;
}
@Override
public boolean accept(File dir, String name) {
return name.endsWith(type);
}
}
|
<gh_stars>0
import { Component, Input} from '@angular/core';
import { Mixtape } from './mixtape';
import { environment } from '../../environments/environment';
@Component({
selector: 'thda-mix',
templateUrl: './mixtape-stamp.component.html',
styleUrls: ['./mixtape-stamp.component.scss'],
})
// This class is used to present a single mixtape, i.e., a single front door to a set of stories known as a "mixtape", in a presumed grid/list of mixtapes.
// It takes as input the mixtape details in the form of a Mixtape object.
export class MixtapeStampComponent {
@Input() mixtapeInput: Mixtape;
@Input('selectedID') selectedMixtapeID: number;
public myMediaBase: string;
constructor() {
this.myMediaBase = environment.mediaBase;
}
isSelected(mixTape: Mixtape) {
    return mixTape.mixSetID === this.selectedMixtapeID;
}
}
|
A Visual Representation of Complex Relationships for Object-Oriented Databases It becomes complicated to represent complex object structures as a network of nodes and arcs when a large volume of objects is to be browsed. This paper presents the issues of designing and implementing an icon-based graphic user interface for representing complex relationship instances in a node-and-link form. The icon-based browser presented in this paper supports the simplification of the visual representation of complex relationships among instance-level objects by introducing a new browsing method which uses instance-set icons, active/inactive links, and inactive icons. We also discuss the issues of implementing a two-layer architecture which consists of a stored object layer and a window object layer in order to realize the simplified visual representation of complex relationships. The icon-based browsing for complex object structures has been implemented using the X11 Motif. |
export const WriteListCode = {
name: 'write-list',
html:
`
<ul>
<li *ngFor="let item of groceries | async">
{{item.text}}
<button (click)="deleteItem(item.$key)" *ngIf="item.text === 'Bread'" md-button color="primary">Make gluten free ❌ 🍞</button>
<button (click)="prioritize(item)" *ngIf="item.text === 'Eggs'" md-button color="primary">Make {{item.text}} important ‼️</button>
</li>
</ul>
<p *ngIf="(groceries | async)?.length === 0">List empty</p>
<button (click)="addItem('Milk')" md-raised-button>Add Milk 🥛</button>
<button (click)="addItem('Apples')" md-raised-button>Add Apples 🍎</button>
<button (click)="addItem('Cucumbers')" md-raised-button>Add Cucumbers 🥒</button>
<button (click)="deleteEverything()" md-raised-button>Remove Everything 🔥</button>
<button (click)="reset()" md-raised-button>Reset ♻️</button>
`,
typescript:
`
import { Component } from '@angular/core';
import { AfoListObservable, AngularFireOfflineDatabase } from 'angularfire2-offline/database';
@Component({
selector: 'app-write-list',
templateUrl: './write-list.component.html'
})
export class WriteListComponent {
groceries: AfoListObservable<any[]>;
constructor(private afoDatabase: AngularFireOfflineDatabase) {
this.groceries = this.afoDatabase.list('/groceries');
}
addItem(newName: string) {
this.groceries.push({ text: newName });
}
deleteEverything() {
this.groceries.remove();
}
deleteItem(key: string) {
this.groceries.remove(key);
}
prioritize(item) {
this.groceries.update(item.$key, { text: item.text + '‼️' });
}
reset() {
this.groceries.remove();
this.groceries.push({text: 'Milk'});
this.groceries.push({text: 'Eggs'});
this.groceries.push({text: 'Bread'});
}
}
`
};
|
Genotypic clustering does not imply recent tuberculosis transmission in a high prevalence setting: A genomic epidemiology study in Lima, Peru Background Whole genome sequencing (WGS) can elucidate Mycobacterium tuberculosis (Mtb) transmission patterns but more data is needed to guide its use in high-burden settings. In a household-based transmissibility study of 4,000 TB patients in Lima, Peru, we identified a large MIRU-VNTR Mtb cluster with a range of resistance phenotypes and studied host and bacterial factors contributing to its spread. Methods WGS was performed on 61 of 148 isolates in the cluster. We compared transmission link inference using epidemiological or genomic data with and without the inclusion of controversial variants, and estimated the dates of emergence of the cluster and antimicrobial drug resistance acquisition events by generating a time-calibrated phylogeny. We validated our findings in genomic data from an outbreak of 325 TB cases in London. Using a larger set of 12,032 public Mtb genomes, we determined bacterial factors characterizing this cluster and under positive selection in other Mtb lineages. Findings Four isolates were distantly related and the remaining 57 isolates diverged ca. 1968 (95% HPD: 1945-1985). Isoniazid resistance arose once, whereas rifampicin resistance emerged subsequently at least three times. Amplification of other drug resistance occurred as recently as within the last year of sampling. High quality PE/PPE variants and indels added information for transmission inference. We identified five cluster-defining SNPs, including esxV S23L to be potentially contributing to transmissibility. Interpretation Clusters defined by MIRU-VNTR typing, could be circulating for decades in a high-burden setting. WGS allows for an improved understanding of transmission, as well as bacterial resistance and fitness factors. 
Funding The study was funded by the National Institutes of Health (Peru Epi study U19-AI076217 and K01-ES026835 to MRF). The funding sources had no role in any aspect of the study, manuscript or decision to submit it for publication.
Research in context
Evidence before this study: Use of whole genome sequencing (WGS) to study tuberculosis (TB) transmission has proven to have higher resolution than traditional typing methods in low-burden settings. The implications of its use in high-burden settings are not well understood.
Added value of this study: Using WGS, we found that TB clusters defined by traditional typing methods may be circulating for several decades. Genomic regions typically excluded from WGS analysis contain a large amount of genetic variation that may affect interpretation of transmission events. We also identified five bacterial mutations that may contribute to transmission fitness.
Implications of all the available evidence: Added value of WGS for understanding TB transmission may be even higher in high-burden vs. low-burden settings. Methods integrating variants found in polymorphic sites and insertions and deletions are likely to have higher resolution. Several host and bacterial factors may be responsible for higher transmissibility that can be targets of intervention to interrupt TB transmission in communities.
… VNTR cluster spanning pan-susceptible to MDR-TB isolates that was identified in 4,000 TB patients enrolled in a household transmissibility study. We examine both host and TB genotypic data to understand the evolution of isolates within this cluster, infer the timing of emergence of antibiotic resistance, and identify genetic bacterial factors unique to this cluster that may have contributed to its success. … and consent have been previously described.33,34 Briefly, patients were enrolled if they were diagnosed with pulmonary TB (PTB) at public health clinics and were followed through therapy.
Their household contacts were also followed with tuberculin skin testing and monitored for … Please refer to the supplement for bacterial culture, drug susceptibility testing (DST), DNA sequencing methodology and phylogenetic analysis. (Table 1). About one-half were smear-positive (51.4%) and one-third used alcohol (35.8%) or other intoxicating substances (27.9%). Two patients had isolates that did not meet our sequencing quality criteria and were excluded. Of those patients with high quality sequence data (n=61), a higher majority were male with MDR-TB but were otherwise comparable to the superset of 148 (Table 1). Sequencing data revealed that 58 isolates belonged to the Latin America-… The geographic distribution of strains based on household coordinates, colored by resistance pattern, is shown in Figure 1. Comparison between genetic and geographic distance did not support that the cluster spread in a single geographic direction, even when the three most distant … A pair of isolates collected two months apart from a host who had not been on treatment (MDR strains M06 and M10) was found to have no SNP differences outside of PE/PPE regions. Looking at genetic evidence alone for recent transmission using a distance cutoff of …: other than the pair (index parent M23 and contact child M12), none of these belonged to the same household. With the addition of high confidence indels and PE/PPE SNPs and using the cutoff of ≤ 7 variants (the distance between the two isolates belonging to the same patient when high confidence indels and PE/PPE variants were included), there were 41 links among 30 patients, i.e. 76% fewer links than when these variants were excluded. Phylogenetic trees built by … (Table 2).
In addition to identifying the known mutations that confer resistance to isoniazid and rifampicin, we found an … … prevalence settings have noted shorter genetic distance between isolates,2,3,13 and in one case the distances were insufficient to reliably and consistently inform contact tracing interventions.13 It is possible that certain features of our selected cluster have led to the observation of such high levels of diversity. First, our cluster spans the spectrum of pan-susceptible to resistant to 7 drugs; second, our isolates all belong to lineage 4, a lineage that has been noted to be the most phenotypically and genotypically diverse of the TB lineages.39 However, the proportion of diversity that could be linked directly to drug resistance was low. A parsimonious explanation of the high degree of observed genomic diversity is that the rate of MIRU-VNTR pattern evolution is on average slow and on the order of decades. Despite this, MIRU-VNTR likely offers sufficient resolution in low prevalence settings as most TB cases there tend to be imported. … indels did not show significant differences, there were notable differences within the closely related cluster, highlighting that similarity measures that rely on SNPs alone could be misleading. Inclusion of indels and PE/PPE regions in estimation of divergence dates is limited by our current lack of knowledge regarding their evolutionary rates, but these regions account for an appreciable proportion of variation seen between closely related isolates and thus including this information may affect interpretation of transmission links.43
We identified many genomic links using the SNP distance threshold criterion of ≤ 5 that were not discovered within household contact investigation, providing evidence that household contact investigation is not sufficient to identify and treat secondary TB cases, as transmission can occur anywhere in the community. Additionally, 2 of 3 case pairs that belonged to the same household were found to have large genetic distances, making it more likely that transmission occurred outside the household. Although the dataset used was relatively small, these findings … Our phylogenetic dating procedures support that acquisition of MDR is not recent in Lima, and that MDR cases, given the observed phylogenetic structure, are mostly related to transmission. This finding is consistent with other studies performed in other countries, including South … The cluster under study was the largest such cluster observed in the Lima household transmission study. Its transmission success was likely due to both bacterial and host factors. We quantified the host predilection to transmit TB with the PTP measure and found the cluster to have a higher score than the median PTP measure reported by a study in the Netherlands.35 … have led to an underestimation of the diversity. However, the sampled subset demonstrated a substantial amount of diversity, more than would be expected within a cluster with an identical MIRU pattern.2,3 We also cannot exclude that the 2 outermost isolates were mis-assigned the reported MIRU pattern, and because of this we focused on the isolates confirmed to be of the same lineage by in silico spoligotyping and the WGS SNP barcode. Finally, it is important to note that our dating estimates are heavily reliant on the molecular clock rate that has been previously reported in the literature. |
White House confirms Obama has held more talks with Trump
President Barack Obama has spoken with President-elect Donald Trump “at least once” since the two men met earlier this month inside the Oval Office, White House press secretary Josh Earnest confirmed Tuesday afternoon.
Earnest said the White House would not provide a readout of conversations between the president and his successor, just as it has not when Obama has spoken privately to his predecessors. He said there is a “president-to-president prerogative that we’re trying to protect.”
“In the same way that I protected the ability of President Obama to consult confidentially with other senior officials, including some former presidents, I’m not going to read out or confirm every reported meeting or phone call or conversation,” Earnest said. “I can tell you that the president has had a conversation with the president-elect since the Oval Office meeting.”
That the two men have spoken again in the days since their first meeting was first reported Monday by multiple outlets, including POLITICO, when Trump said as much during an off-the-record meeting with multiple TV news executives and anchors. The president-elect was effusive in his praise of Obama, according to a source in the room, adding that he had spoken to the president at least twice since their White House meeting.
According to the source, Trump had said he had never met Obama before the White House meeting and that he didn’t think he would like him. Trump said that he ended up liking Obama "so much," and that he had so much respect for him.
"The feeling is mutual, because it takes two to tango," Trump said, according to the source.
Trump then told the group that he had subsequently spoken with Obama on the phone. One attendee asked who placed the call, and Trump responded that one time he called Obama and another time Obama called him.
“Those of you who covered the president-elect’s visit to the Oval Office a couple of weeks ago, you took note of the fact that the president-elect indicated his desire to continue to consult with President Obama over the course of the transition,” Earnest said. “You’ve also heard President Obama indicate the high priority he has placed on facilitating a smooth and effective transition. So, reports that the two may have talked after their White House meeting, I think are not particularly surprising.”
Hadas Gold contributed to this report. |
<filename>src/Thunk/Thunk.cpu/System.1/Byte.h
#ifndef __System_Byte_H__
#define __System_Byte_H__
#include "..\System.h"
using namespace System;
// default
#define __defaultValueTable_bool false
namespace System {
/// <summary>
	/// Byte
/// </summary>
public_ class Byte : public Object
{
public:
static Type MyType;
private:
byte _value;
// boxing/unboxing
static Object *_box(byte value);
static byte _unbox(Object *value);
};
}
#endif /* __System_Byte_H__ */
|