Q:
How can I use a dashed or dotted line for drawbox in ffmpeg?
How can I use a dashed or dotted line (instead of the default solid line) for drawbox in ffmpeg? In general, is there any way I can customize the look of objects drawn?
I don't find anything that allows me to do that in the documentation.
A:
This is not implemented in ffmpeg; drawbox only draws solid lines.
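That said, you can approximate a dashed line by chaining many short drawbox calls, one per dash segment. Below is a minimal Python sketch that generates such a filter string; drawbox and its x/y/w/h/color/t options are real ffmpeg parameters, but the dash geometry and helper name are illustrative assumptions, not an ffmpeg feature:

```python
def dashed_hline(x, y, width, dash=10, gap=6, thickness=2, color="red"):
    """Build a chain of drawbox filters approximating a dashed horizontal line."""
    segments = []
    cursor = x
    while cursor < x + width:
        # Clamp the last dash so it does not overshoot the requested width.
        seg_w = min(dash, x + width - cursor)
        segments.append(
            f"drawbox=x={cursor}:y={y}:w={seg_w}:h={thickness}:"
            f"color={color}:t=fill"
        )
        cursor += dash + gap
    return ",".join(segments)

filt = dashed_hline(0, 100, 64)
print(filt)
# Pass the result to ffmpeg, e.g.: ffmpeg -i in.mp4 -vf "<filter>" out.mp4
```

The same idea works for vertical lines or full box outlines by emitting segments along each edge.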
| {
"pile_set_name": "StackExchange"
} |
Hello there!
Hi! I am Nathalie & I'll give you daily updates about Ed Sheeran (I track Teddy Sheeran)
@EdSheeran_EU on twitter/Instagram | {
"pile_set_name": "OpenWebText2"
} |
It's hard to imagine now, but it's not that long ago that David Cameron was still bandying about the slogan: "vote blue, go green." He once championed himself as a Tory moderniser on the environment. But we've come a long way since the days of trips to the Arctic and hugging huskies. Cameron now openly talks about "getting rid of green crap," while Tory minister Michael Fallon has said the Tories would stop the construction of onshore wind farms if they win in 2015. As we near the general election, the Conservatives are rapidly abandoning any pretence that they care about the green agenda.
Nowhere is this clearer than in the European Parliament, where the Tories are completely unrestricted by the constraints of coalition government. Time and again Conservative MEPs have shown their true colours when it comes to EU environmental measures, and they are definitely not green. They voted down EU measures to restrict the destructive practice of deep-sea fishing. They've opposed efforts to reduce plastic bag use and tackle the scourge of plastic waste in our oceans. And they've repeatedly voted against efforts to strengthen the EU's carbon emissions trading scheme, Europe's landmark policy for fighting climate change.
Yesterday, the Tories showed their true colours yet again when MEPs voted on EU proposals to tackle air pollution, which the new European Commission is threatening to withdraw or water down in its drive to cut red tape. This is being rightly opposed by many MEPs, including Liberal Democrat MEP Catherine Bearder who's leading the charge to keep these proposals on the table. We all want to see moves to make EU regulation smarter and more efficient, but that shouldn't come at the expense of the air we breathe. Air pollution now causes an estimated 29,000 premature deaths in the UK each year, almost as many as smoking. And with 40% of the most deadly pollutants coming from elsewhere in the EU, it's clear that we need urgent action on this across Europe.
Yet Conservative MEPs refused to stand up for these vital measures to improve air quality, voting against key amendments to prevent them being delayed and calling for them to be watered down to "reduce administrative burdens." It seems that for the Tories, laws that protect the environment and improve people's quality of life are just more red tape to be slashed. They fail to see that moving towards a cleaner, greener economy isn't just the right thing to do for the planet. It's the best way to secure future growth and jobs.
The Conservatives' approach to the environment in Europe shows what sort of approach they would take if they are allowed to govern alone. In coalition, Liberal Democrats have fought to make sure that the environment has stayed at the top of the agenda. We've doubled the amount of energy generated from offshore wind and stopped the Tories from slashing support for renewable energy. And while senior Conservative politicians voice their doubts about man-made climate change, Energy Secretary Ed Davey has been busy paving the way for a global deal to cut carbon emissions. Without the Lib Dems, there would be nothing to stop the Tories from lurching to the right on the environment. The truth is, the only way to make blue go green is by adding yellow.
Tim Farron is Lib Dem MP for Westmorland and Lonsdale | {
"pile_set_name": "OpenWebText2"
} |
Q:
How to properly access a StringVar() of a class from another class - Python - tkinter
(I'm using mac 10.8.5 and Python3 with PyCharm)
I have a tkinter GUI TestMain() class plus one PageOne() class and a PageTwo() class.
I need PageOne() and PageTwo() to be different GUI windows because they will handle different data.
I minimized the code to keep it as readable as possible.
After many tests I tried to place the tk.StringVar() and a simple function in the global scope as you can see below, but there's still a problem.
import tkinter as tk

page1_label = tk.StringVar()
page2_entry = tk.StringVar()

def set_ebdata():
    data = page2_entry.get()
    page1_label.set(data)

class TestMain(tk.Tk):
    def __init__(self, *args, **kwargs):
        tk.Tk.__init__(self, *args, **kwargs)
        tk.Tk.wm_title(self, 'TEST GUI')
        container = tk.Frame(self)
        container.pack(side='top')
        container.grid_rowconfigure(0, weight=1)
        container.grid_columnconfigure(0, weight=1)
        self.frames = {}
        for F in (PageOne, PageTwo):
            frame = F(container, self)
            self.frames[F] = frame
            frame.configure(background='lightgrey')
            frame.grid(row=0, column=0, sticky='nswe')
        self.show_frame(PageOne)

    def show_frame(self, cont):
        frame = self.frames[cont]
        frame.tkraise()

class PageOne(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self, parent)
        frame_eb_data = tk.Frame(self, width=100, height=100, bg="orange", colormap="new")
        frame_eb_data.grid(row=0, column=0, sticky='w', padx=5, pady=5)
        frame_but_right = tk.Frame(self, width=240, height=60, bg="yellow", colormap="new")
        frame_but_right.grid(row=0, column=1, padx=5, pady=5, rowspan=2)
        lab_eb_data = tk.Label(frame_eb_data, background='#DDD4EF', textvariable=page1_label)
        lab_eb_data.grid(row=0, column=0, sticky='n')
        b_ebdata = tk.Button(frame_but_right, text="Page 2", width=10, height=2, command=lambda: controller.show_frame(PageTwo))
        b_ebdata.grid(row=3, column=0)

class PageTwo(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self, parent)
        frame_buttons = tk.Frame(self, width=455, bg="#DDD4EF", colormap="new")
        frame_buttons.grid(row=0, column=0, padx=5, pady=5, sticky='e')
        frame_up_left = tk.Frame(self, width=485, height=260, bg="#89E3FA", colormap="new")
        frame_up_left.grid(row=1, column=0, sticky='w', padx=5, pady=5)
        b_data = tk.Label(frame_buttons, text='Example GUI', font='TrebuchetMS 30 bold', background="#DDD4EF")
        b_data.grid(row=0, column=0, padx=13, pady=5, sticky='w')
        b5 = tk.Button(frame_buttons, text='Set Text', command=lambda: set_ebdata)
        b5.grid(row=0, column=2, padx=5, pady=5, sticky='e')
        b6 = tk.Button(frame_buttons, text='Page 1', command=lambda: controller.show_frame(PageOne))
        b6.grid(row=0, column=3, padx=5, pady=5, sticky='e')
        label_2 = tk.Label(frame_up_left, text="Name:", font=("bold", 14))
        label_2.grid(row=1, column=0, sticky='e')
        entry_nombre_fld = tk.Entry(frame_up_left, width=40, textvariable=page2_entry)
        entry_nombre_fld.grid(row=1, column=1, columnspan=3, sticky='w')

app = TestMain()
app.mainloop()
When you run the program, a window with a "Page 2" button (b_ebdata) appears; clicking it takes you to the Page 2 window, which has a "Set Text" button (b5), a "Page 1" button (b6) and an entry field (entry_nombre_fld).
I'd like to set the text I'll enter in the entry field (entry_nombre_fld) in the Page 1 label (lab_eb_data) by clicking the "Set Text" button (b5).
Could a solution be to put page1_label = tk.StringVar() into the PageOne() class and page2_entry = tk.StringVar() into the PageTwo() class and make them accessible to each other?
Any other suggestions?
Thanks in advance for your help!
A:
I had to change a few things, but the main fix is to move your StringVars into the main class. Then we can use the controller argument in the other two classes to manipulate the data.
I added a method on Page 2 to update the label's StringVar.
Because of this I deleted the module-level function you had for this.
I had to change your entry field to a class attribute so we can use its content in the new method. I also created a class attribute for the controller in Page 2 so we can use the controller in the method as well.
Now there might be an easier way but this is what I managed with your code.
import tkinter as tk

class TestMain(tk.Tk):
    def __init__(self, *args, **kwargs):
        tk.Tk.__init__(self, *args, **kwargs)
        self.title('TEST GUI')
        # Moved StringVar()'s to the main class
        self.page1_label = tk.StringVar()
        self.page2_entry = tk.StringVar()
        container = tk.Frame(self)
        container.pack(side='top')
        container.grid_rowconfigure(0, weight=1)
        container.grid_columnconfigure(0, weight=1)
        self.frames = {}
        for F in (PageOne, PageTwo):
            frame = F(container, self)
            self.frames[F] = frame
            frame.configure(background='lightgrey')
            frame.grid(row=0, column=0, sticky='nswe')
        self.show_frame(PageOne)

    def show_frame(self, cont):
        frame = self.frames[cont]
        frame.tkraise()

# Deleted the module-level set_ebdata() function

class PageOne(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self, parent)
        frame_eb_data = tk.Frame(self, width=100, height=100, bg="orange")
        frame_eb_data.grid(row=0, column=0, sticky='nsew', padx=5, pady=5)
        frame_but_right = tk.Frame(self, width=240, height=60, bg="yellow")
        frame_but_right.grid(row=1, column=0, padx=5, pady=5, sticky='nsew')
        lab_eb_data = tk.Label(frame_eb_data, background='#DDD4EF', textvariable=controller.page1_label)
        lab_eb_data.grid(row=0, column=0)
        b_ebdata = tk.Button(frame_but_right, text="Page 2", width=10, height=2, command=lambda: controller.show_frame(PageTwo))
        b_ebdata.grid(row=0, column=0)

class PageTwo(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self, parent)
        # Added self.controller so the method below can use it.
        self.controller = controller
        frame_buttons = tk.Frame(self, width=455, bg="#DDD4EF", colormap="new")
        frame_buttons.grid(row=0, column=0, padx=5, pady=5, sticky='e')
        frame_up_left = tk.Frame(self, width=485, height=260, bg="#89E3FA", colormap="new")
        frame_up_left.grid(row=1, column=0, sticky='w', padx=5, pady=5)
        b_data = tk.Label(frame_buttons, text='Example GUI', font='TrebuchetMS 30 bold', background="#DDD4EF")
        b_data.grid(row=0, column=0, padx=13, pady=5, sticky='w')
        b5 = tk.Button(frame_buttons, text='Set Text', command=self.update_p2_label)
        b5.grid(row=0, column=2, padx=5, pady=5, sticky='e')
        b6 = tk.Button(frame_buttons, text='Page 1', command=lambda: controller.show_frame(PageOne))
        b6.grid(row=0, column=3, padx=5, pady=5, sticky='e')
        self.entry_nombre_fld = tk.Entry(frame_up_left, width=40)
        self.entry_nombre_fld.grid(row=1, column=1, columnspan=3, sticky='w')
        label_2 = tk.Label(frame_up_left, text="Name:", font=("bold", 14))
        label_2.grid(row=1, column=0, sticky='e')

    # Added this method to update the page1_label StringVar.
    def update_p2_label(self):
        self.controller.page1_label.set(self.entry_nombre_fld.get())

app = TestMain()
app.mainloop()
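Stripped of the tkinter specifics, the pattern the answer relies on is simply "pages hold a reference to a shared controller that owns the shared state". A GUI-free sketch of that idea (the class and attribute names here are illustrative, not part of the answer's code):

```python
class Controller:
    """Owns the shared state, as TestMain owns the StringVars above."""
    def __init__(self):
        self.page1_label = ""   # stands in for tk.StringVar()

class Page:
    def __init__(self, controller):
        self.controller = controller
        self.entry_text = ""    # stands in for the Entry widget's content

    def update_label(self):
        # The same move as self.controller.page1_label.set(...)
        self.controller.page1_label = self.entry_text

app = Controller()
page = Page(app)
page.entry_text = "hello"
page.update_label()
print(app.page1_label)  # -> hello
```

Because every page receives the same controller instance, any page can read or write state that another page displays.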
| {
"pile_set_name": "StackExchange"
} |
function [ error_per_image, err_pp, err_pp_dim ] = compute_error( ground_truth_all, detected_points_all )
%compute_error
%   compute the average point-to-point Euclidean error normalized by the
%   inter-ocular distance (measured as the Euclidean distance between the
%   outer corners of the eyes)
%
%   Inputs:
%     ground_truth_all,    size: num_of_points x 2 x num_of_images
%     detected_points_all, size: num_of_points x 2 x num_of_images
%   Output:
%     error_per_image, size: num_of_images x 1

num_of_images = size(ground_truth_all, 3);
num_of_points = size(ground_truth_all, 1);
error_per_image = zeros(num_of_images, 1);
err_pp = zeros(num_of_images, num_of_points);
err_pp_dim = zeros(num_of_images, num_of_points, 2);

for i = 1:num_of_images
    detected_points = detected_points_all(:,:,i);
    ground_truth_points = ground_truth_all(:,:,i);
    % Outer eye corners are landmarks 37 and 46 in the 66/68-point
    % annotation (shifted down by 17 when the face contour is absent).
    if (num_of_points == 66 || num_of_points == 68)
        interocular_distance = norm(ground_truth_points(37,:) - ground_truth_points(46,:));
    else
        interocular_distance = norm(ground_truth_points(37-17,:) - ground_truth_points(46-17,:));
    end
    err_sum = 0;  % renamed to avoid shadowing the built-in sum()
    for j = 1:num_of_points
        err_pp(i,j) = norm(detected_points(j,:) - ground_truth_points(j,:));
        err_sum = err_sum + err_pp(i,j);
        err_pp_dim(i,j,1) = detected_points(j,1) - ground_truth_points(j,1);
        err_pp_dim(i,j,2) = detected_points(j,2) - ground_truth_points(j,2);
    end
    error_per_image(i) = err_sum / (num_of_points * interocular_distance);
    err_pp(i,:) = err_pp(i,:) ./ interocular_distance;
    err_pp_dim(i,:,:) = err_pp_dim(i,:,:) ./ interocular_distance;
end
end
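For Python users, the same normalized metric can be sketched with NumPy. This is an illustrative vectorized translation of the MATLAB routine above, not part of the original repository, and it assumes the 68-point annotation where the outer eye corners are landmarks 37 and 46 (1-based):

```python
import numpy as np

def compute_error(ground_truth, detected):
    """ground_truth, detected: (num_images, num_points, 2) arrays.

    Returns the mean point-to-point error per image, normalized by the
    inter-ocular distance (outer eye corners, 0-based indices 36 and 45).
    """
    iod = np.linalg.norm(ground_truth[:, 36] - ground_truth[:, 45], axis=1)
    per_point = np.linalg.norm(detected - ground_truth, axis=2)
    return per_point.mean(axis=1) / iod

# Tiny smoke test: a perfect detection has zero normalized error.
gt = np.zeros((1, 68, 2))
gt[0, 36] = [0.0, 0.0]
gt[0, 45] = [10.0, 0.0]
print(compute_error(gt, gt))  # -> [0.]
```

Vectorizing over images this way avoids the double loop of the MATLAB version while computing the same quantity.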
| {
"pile_set_name": "Github"
} |
Democrats are ramping up efforts to challenge Texas Rep. Will Hurd, one of the most vulnerable incumbent Republicans in the House going into the 2020 election, hoping that Latino voters will help them unseat the moderate lawmaker and flip his battleground district.
In its first Spanish-language ad of the campaign cycle, the Democratic Congressional Campaign Committee (DCCC), the fundraising arm of Democrats in the House, is looking to cast Hurd as a politician who touts himself as a centrist, but has failed to actively condemn President Trump's immigration agenda. The digital ad — which launched Wednesday — is expected to reach thousands of Spanish-speaking voters in Texas' 23rd Congressional District on Facebook and Instagram.
The ad cites an NBC News report that revealed an incident in which 37 migrant children were stuck in a van "under the blistering Texas sun" for hours. It asks in Spanish, "Why hasn't Congressman Will Hurd spoken up for immigrant children left in vans?"
The new digital Spanish-language ad by the Democratic Congressional Campaign Committee asks, "Why hasn't Congressman Will Hurd spoken up for immigrant children left in vans?" DCCC
"Texans are outraged by the Trump administration's cruel and inhumane policy of separating children from their families and there is a moral responsibility for elected officials to speak out when they see injustice across their communities," DCCC Texas senior adviser Roger Garza said in statement. "After 37 children were stuck in detention facility vans, Congressman Hurd should have spoken up. Instead, he hasn't said a word and people across Texas need to know that."
Although Democrats are expected to play defense and protect their majority in the House during the November 2020 elections, they are also targeting swing districts, especially those with large Latino communities like Hurd's, which they failed to flip in 2018. Nearly 65 percent of eligible voters in Texas' 23rd congressional district are Latino.
Hurd, a former CIA officer, has represented his sprawling border district in south Texas since he toppled a Democratic incumbent in 2014. He has been re-elected twice by razor thin margins in 2016 and 2018, when he bested Iraq war veteran Gina Ortiz Jones by less than 1,000 votes and became one of the few Republicans in districts Hillary Clinton won in 2016 to survive the Democratic wave during the 2018 midterms.
The Texas Republican has often distanced himself from the president and criticized some of his immigration proposals, including the construction of a wall along the entire U.S.-Mexico border. He has also supported legislation to place young undocumented immigrants, dubbed DREAMers, on a pathway to U.S. citizenship.
Justin Hollis, a spokesperson for Hurd's congressional campaign, denounced the ad commissioned by the DCCC.
"The Democrats have reached a new low by running nonsense ads against Congressman Hurd — the only member of Congress who has championed billions in humanitarian assistance and a solution to the border crisis," Hollis wrote in a statement to CBS News. "If this Washington-based group actually cared about migrants they should run ads against the Democrats who stopped Hurd's bills." | {
"pile_set_name": "OpenWebText2"
} |
Teen assaulted at bus stop in Prince George’s County
WASHINGTON – Prince George’s County police are on the lookout after a high school student was assaulted at her bus stop Thursday morning in Fort Washington.
Police say the teen was approached by an adult man near her bus stop at Lourdes Drive and Hickory Drive around 6:30 a.m.
According to police, after the man grabbed the teenage girl, they both fell to the ground where she was able to kick her attacker and escape.
The 16-year-old girl’s father tells WTOP she is shaken up, but OK. She called her dad right after the attack.
Police describe the suspect as a black male, about 6 feet tall, 170 pounds and in his late 20s to early 30s. He was last seen wearing a gray-hooded sweatshirt and sweat pants.
Police are canvassing the area and will have extra officers near the bus stop. Anyone with information about this case should contact the Prince George's County Police Department's Crime Solvers at 1-866-411-TIPS (8477), text "PGPD plus your message" to CRIMES (274637) on your cell phone or go to the Prince George's County Police Department website to submit a tip online. | {
"pile_set_name": "Pile-CC"
} |
fires
4:04PM - REQ RECALL TO LANCASTER CT FOR ETA,1039 CT LANCASTER THEY HAD TO GET THE CREW TOGETHER,THEY WILL BE ENRT IN A FEW MINS - INFO FOR F8,F9
2:48PM - 1039 LANCASTER CALTRANS COPIES TO MOVE THE SIGNS,WILL SEND A UNIT - ETA WITHIN 30 MINS
2:42PM - PER 78-S5,REQ TO ADVISE 78-F9,THAT WE WILL BE NOTIFYING CT TO MOVE THE SIGNS (JEO 140TH) WHEN THEY GO 1097,HE WILL NEED TO RE-POSITION HIMSELF
1:59PM - FOR INFO CP5 HAS BEEN 1097 AT FIRE CP AND IS UP AND RUNNING
1:25PM - PER F4 REQ RO KNOW IF EVACTUATION AT LAKE HUGHES RD IS VOLUTARY OR MANDATORY
12:31PM - CENTRAL UNIT 1097 - LAKE HUGHES RD AND WARM SPRINGS
12:03PM - S5 ALS O REQ TO ID ANY UNITS 1097 OR ENRT NOT LISTED ABOVE
12:03PM - 78-R2 AT CP,78-R3 AT 3PT AT 138,78-R4 AT 138 AT 170TH W,78-F1 AT LK HUGHES,78-F2 ON 138 AT GORMAN POST,78-F3 AT ELIZ LK RD AT JOHNSON RD,78-F4 AT ELIZ LK RD AT LK HUGHES RD,78-R5 AT MUNZ RCH RD AND FAIRMONT NEENACH,78-F6 RESP TO LK HUGHES JNO CASTAIC LK,78-F7 RESP TO PINE CYN AT OLD RIDGE RTE
12:03PM - 78-S5 REQ RADIO DO A ROLL CALL JUST OF THE UNITS DEPRLOYED ON THE FIRE SCENE AND UP UNIT FIELDS TO SHOW WHERE EACH UNIT IS AT—NEG ON CLEARENCE JUST OT MAKE SURE ALL UNITS AND ACCOUNTED FOR AND 1020 S CORRECT
11:47AM - 228 AND 230 - 10-8 ON THE TAN - CLEARED 1125 NB 5 JSO 14
11:39AM - PER SOUTHERN 4 ** COMMAND VEHICLE RESPONDING UP HERE - SOUTHERN 32 WAS ESCORTING THE VEHICLE - NEG RESP ON THE TAN - TRY THE BLU PLEASE FOR ETA
11:34AM - 1039 TO 78-L2 - WILL ADVISE ON LINE 49 VIA LL
11:33AM - SB SAN FRANCISQUITO CANYON FROM ELIZABETH LAKE IS OPEN
11:21AM - OFF DUTY LAFD - INQUIRING IF ELIZABETH LAKE RD AT SAN FRANCISQUITO CYN IS CLOSED & IF SO, CONFIRM CLOSED TO RESD ON ELIZABETH LAKE ??
11:12AM - PTY LL NEEDS TO RESP TO 1148 A RESIDENT @ 43678 LAKE ELIZABETH RD WHO IS BEING EVACUATED AND NEEDS TO KNOW INFO ON RD CLOSURES FRM AGUA DULCE AREA
10:02AM - PER S5 THEY CAN GO DOWN SB SAN FRANCISQUITO CANYON FROM ELIZABETH LAKE RD
9:54AM - PER S5 - THROUGH TRAFFIC FOR ANY RESIDENTS IEMMEDIATELLY OFF THE FIRE AREA - OFF 138 AND NO FURTHER THAN 3 POINTS - LET HIM THROUGH
9:48AM - PER F2 - PRESIDENT OF THE TOWN COUNCIL - LARRY MEYERS - 107 WITH F2 - TRYING TO GET TO COMMAND POST
9:47AM - SOUTHERN 4 ENROUTE TO FIRE AREA
9:02AM - 1039 COMMAND POST - LAKE HUGHS RESIDENTS CAN ACCESS FOR ANIMAL RETRIEVAL ONLY **** NO OTHER ACCESS ALLOWED
8:53AM - ELIZABETH LAKE RD : ONLY RESIDENTS ALLOWED ARE THE ONES EVACUATING ANIMALS AND LIVESTOCK TNX
7:05AM - R4 WILL ROLL TO 170TH WEST AT WR138
6:54AM - REQUING CALTRANS TO ROLL A CREW TO CLOSE SR138 AT GORMAN POST ROAD - LONG TERM CLOSURE
5:53AM - *** AGAIN - RDWY IS OPEN BUT FIRE DEPT WANTS TO BE PREPARED FOR A QUICK CLOSURE ****
5:52AM - PER FIRE DEPT TO 78-R2 - THE 138 MAY HAVE TO BE SHUT DOWN AND UNITS WILL BE IN POSITION FOR A QUICK CLOSURE * ALSO REQ CT TO START THIS WAY NOW
5:52AM - PER 78-R2 - CT ** PLS ACTIVATE CMS SIGNS ON THE 14 TO WATCH FOR FIRE/SLOW DOWN ON THE 138—RDWY IS STILL OPEN AT THIS TIME | {
"pile_set_name": "Pile-CC"
} |
Mechanical ventilation (MV) is the principal supportive care in ALI/ARDS patients. MV can be associated with several negative side effects and ventilator-induced lung injury (VILI). In recent years several randomized trials tried to identify the optimal ventilatory strategy in ALI/ARDS patients, aimed at avoiding or minimizing VILI [1-3]. In this study we evaluated how MV has been employed in recent years in ALI/ARDS patients in our intensive care unit (eight beds). We retrospectively collected data on all ALI/ARDS patients from 2001 to August 2004. To be included in the study the patient had to be ventilated for at least 48 hours without an unfavorable short-term prognosis. Sixty-two patients were enrolled; the mean age and the body mass index were not different between the years (54 ± 17, 62 ± 12, 56 ± 16 and 55 ± 20 years and 24 ± 3, 24 ± 2, 25 ± 6 and 25 ± 4 kg/m2, respectively). The variables in Table 1 were not different at day 3 and day 7 between the four years. We did not find any difference in our 'local' lung ventilatory setting through the years regarding level of PEEP or tidal volume. Rather than setting the tidal volume based on body weight, we prefer to set it taking the airway plateau pressure into account.

Table 1
Year                               2001        2002        2003        2004
Patients (n)                       14          16          21          11
PaO2/FiO2                          189 ± 74    134 ± 77    173 ± 64    146 ± 63
Tidal volume (ml/kg)               9.9 ± 1.9   8.5 ± 2.1   9.9 ± 1.8   10.1 ± 2.5
Airway plateau pressure (cmH2O)    26 ± 6      27 ± 6      27 ± 6      27 ± 4
PEEP (cmH2O)                       8 ± 3       10 ± 4      9 ± 4       9 ± 4
Primary ARDS                       11 (79%)    12 (75%)    12 (57%)    3 (27%)
ICU stay (days)                    35 ± 21     37 ± 28     36 ± 35     32 ± 25
Mortality                          3 (21%)     8 (50%)     8 (38%)     3 (27%)
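For context on the body-weight-based alternative the authors mention: lung-protective protocols commonly dose tidal volume per kilogram of predicted body weight (PBW) rather than actual weight. A quick Python sketch of that calculation, using the ARDSNet PBW formula purely for illustration (the cutoffs and targets are not taken from this study):

```python
def predicted_body_weight(height_cm, male=True):
    """ARDSNet predicted body weight in kg."""
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def tidal_volume_ml(height_cm, ml_per_kg=6.0, male=True):
    """Target tidal volume for a lung-protective ml/kg-PBW setting."""
    return ml_per_kg * predicted_body_weight(height_cm, male)

print(round(tidal_volume_ml(170), 1))  # ~396.1 ml for a 170 cm male at 6 ml/kg
```

At 6 ml/kg PBW this yields roughly 400 ml for an average-height male, well below the ~9-10 ml/kg actual-weight volumes reported in Table 1.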
| {
"pile_set_name": "PubMed Central"
} |
Press Release:
(Fort Collins, CO) – In May 2013, Odell Brewing will release its first variety 12-pack, Montage. The new package will feature two brewery flagships, a seasonal offering, and a new brew, Loose Leaf American Session Ale.
Loose Leaf was developed on the brewery's five barrel pilot system. The Odell brewers wanted to create a beer that was lighter in color and lower in alcohol content, but also flavorful and distinct. The final recipe is crisp and balanced with lower IBUs and a bright hop aroma. At 4.5% ABV, it's delicate and refreshingly drinkable with a clean finish.
The brewery plans to issue three different versions of Montage mirroring its seasonal release schedule. Each edition of the variety pack will include brewery favorites 90 Shilling and IPA, the current seasonal offering as well as a new limited release pilot system inspired brew.
“Montage gives our brewers a new avenue to share their innovations,” said Eric Smith, Director of Sales & Marketing for Odell Brewing. “Not only does it include our flagship offerings, but it also gives us the opportunity to share our seasonal brews, and it extends our creative pilot system inspired offerings beyond the brewery walls.”
Odell Brewing will release Montage initially just in Colorado in May. The brewery will continue to launch the package in its other markets beginning in June. The brewery plans to celebrate the release with a special Loose Leaf tapping during the weekend of May 10th.
Loose Leaf American Session Ale will also be available on draft exclusively at all Colorado Old Chicago locations during the week of May 13th – 19th as part of a special American Craft Beer Week® – Odell Brewing tap takeover.
Founded in 1989, Odell Brewing was started by Doug Odell, his wife Wynne, and his sister Corkie. Twenty-four years later, the culture of family and collaboration still thrives, fostering a brewery full of beer-centric people. It is this passion for beer that inspires Odell Brewing to create quality, hand-crafted, innovative brews. As a regional craft brewery, Odell Brewing is committed to serving the communities in which it distributes by sourcing local raw materials, and through its charitable giving program known as Odell Outreach. Odell Brewing was recently named a "Top Company of 2010" by ColoradoBiz Magazine and is an award-winning brewery, nationally and internationally: 2012 Brewers Association Recognition Award, 2011 Great American Beer Festival® – gold medal for Friek. 2010 North American Beer Awards – gold medal for Woodcut No. 3. 2009 BrewNZ Awards – gold for 5 Barrel Pale Ale. 2008 World Beer Cup® – gold for IPA. 2007 Great American Beer Festival® – gold medal for IPA. | {
"pile_set_name": "OpenWebText2"
} |
Establishment of TUSMi004-A, an induced pluripotent stem cell (iPSC) line from a 32-year-old Chinese Han patient with Obsessive-Compulsive Disorder (OCD).
A 32-year-old male patient with Obsessive-Compulsive Disorder (OCD) donated his peripheral blood mononuclear cells (PBMCs). A non-integrating episomal vector system was used to reprogram the PBMCs with the human OKSM transcription factors. The pluripotency of the transgene-free iPSCs was confirmed by immunocytochemistry for pluripotency markers and by the ability of the iPSCs to differentiate spontaneously into the three germ layers in vitro. In addition, the iPSC line displayed a normal karyotype. Our model might offer a good platform to further study the pathological mechanisms, to identify early biomarkers, and for drug testing studies in OCD. | {
"pile_set_name": "PubMed Abstracts"
} |
David Krejci jumps off the bench to receive a pass from Chris Kelly, sending #46 in alone for his first of the year.
Quickly after sending David Krejci in on a breakaway to put the Bruins up 1, Chris Kelly turns the puck over in front of the net, leading to a game-tying Tatar goal. Jimmy Howard makes a great save on a point-blank shot from Bruins forward Brad Marchand.
Reilly Smith scores a rebound goal after a failed wrap-around attempt by Patrice Bergeron to put the Bruins up 2-1. Gustav Nyquist snipes a goal on the power play to tie the game at 2-2.
David Krejci scores to lead off the shootout. Reilly Smith scores on Jimmy Howard to give the Bruins the 3-2 victory over Detroit.
"pile_set_name": "OpenWebText2"
} |
"Legendary '80s, unbeatable! [...] We just wanted to have fun, what's wrong with that?"
The Wrestler, 2008
They're in TV series, in fashion, in music and, according to some, even in politics. It's hard to say what brought the glittering 1980s back into vogue across all these niches of our daily lives. A decade that closed almost thirty years ago keeps resurfacing, rising from its own ashes just as many are celebrating its cultural and moral funeral, painting it as a monster to leave behind, in a logic of ill-fated eternal return that looks ahead in search of old demons to cast out. VHS tapes, analog technology, shoulder pads, sequins and old kids' movies are back in the spotlight, filling the aesthetics of our media and creating hybrids that, as entertainment, have little to envy the originals.
The Netflix series Stranger Things is certainly the tastiest fruit of this latest harvest. A show inspired by the Goonies and Gremlins lineage that skillfully mixes a well-assembled plot with endless references to the 80s, to the point of becoming the flagship series of recent years. But why this massive revival? Most of the fans who follow the show today (waiting as long as two years to enjoy a mere eight new episodes) weren't even born back then, or were children at most. So it isn't nostalgia in the true sense. And yet the more you watch it, the more you realize: Stranger Things could not be set in any era other than the '80s.
Some of the answers come from one of that decade's own cult movies: Back to the Future, the film the Stranger Things kids watch at the cinema in an episode of the third season, and not by chance. In the first chapter of that saga, Marty McFly travels 30 years back in time, landing in the 1950s where he meets his own young parents. A time jump entirely analogous to the one between today and the glittering 80s. After all, the '50s were decidedly fashionable in that period, amid a revival much like the one we are living through now. Apparently this "nostalgia" is by no means an isolated phenomenon.
Pieces of pop-culture history dating from the '80s, like Back to the Future and Karate Kid, are also elements not to be underestimated when trying to explain why Stranger Things is set precisely in those years. Very few decades of the last 50 years are so rich in iconic productions that have stayed in our collective imagination. Sure, Stranger Things viewers weren't born yet, but which of them has never seen The NeverEnding Story, for instance? This also explains how it was possible to play the song The NeverEnding Story in its entirety right before the epic season finale of Stranger Things 3 without drawing a single complaint; quite the opposite, it was great entertainment.
As if that weren't enough, the '80s were loaded with an extremely recognizable aesthetic, one that did look toward the future, but a future different from the one we ended up living. One in which even the most high-tech cars looked like the DeLorean, for example, not like today's more sober Teslas. A period in which analog technology filled stereos and Polaroids with colorful buttons and indicators: all synonyms of highly advanced features, for people who certainly weren't expecting the arrival of minimalist, buttonless touch smartphones and iPads.
A frivolous age, then, these 80s? Well, sure, like many others. But also musically rich and eclectic, as much as on screen. Capable of bringing together divergent souls like Depeche Mode, Michael Jackson and Guns N' Roses. It is the latter that offer us a final piece to complete the puzzle, together with Mickey Rourke in the 2008 film The Wrestler, where he plays Robin, an old wrestler who peaked in the '80s.
Robin: "Damn, they don't make songs like that anymore!"
Pam: "Legendary '80s, unbeatable!"
Robin: "You bet! Guns N' Roses are the best!"
Pam: "The Crüe..."
Robin: "Yeah..."
Pam: "Def Lep..."
Robin: "Then Cobain (Kurt), that pansy, came along and ruined everything... The end!"
Pam: "We just wanted to have fun! What's wrong with that?"
Robin: "I hated that '90s crap!"
Pam: "They sucked!"
Robin: "The '90s sucked!"
In the end it all comes down to this. Every era, for better or worse, leaves us something, a kind of emotion capable of condensing years of very different events and spirits. If films like Trainspotting could only be set in Kurt Cobain's 1990s, the decade The Wrestler hated so much, it is just as true that Stranger Things is 100% a child of the '80s, and it couldn't be any other way.
"pile_set_name": "OpenWebText2"
} |
[
    {
        "type": "repeat",
        "stages": [
            {
                "type": "wait",
                "until": {
                    "type": "delay",
                    "period": "PT0.1S"
                }
            },
            {
                "type": "trigger",
                "behaviour": {
                    "type": "status",
                    "status": 404
                },
                "trigger": {
                    "type": "always"
                },
                "until": {
                    "type": "countdown",
                    "count": "10"
                }
            }
        ],
        "until": {
            "type": "deadline",
            "endTime": "2020-01-01T00:00:00Z"
        }
    }
] | {
"pile_set_name": "Github"
} |
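A stage plan like the JSON above can be sanity-checked by loading it and inspecting the discriminating "type" fields. The field names below follow the sample; the schema itself is an assumption about the consuming system:

```python
import json

config_text = """
[{"type": "repeat",
  "stages": [
    {"type": "wait", "until": {"type": "delay", "period": "PT0.1S"}},
    {"type": "trigger",
     "behaviour": {"type": "status", "status": 404},
     "trigger": {"type": "always"},
     "until": {"type": "countdown", "count": "10"}}],
  "until": {"type": "deadline", "endTime": "2020-01-01T00:00:00Z"}}]
"""

plan = json.loads(config_text)
outer = plan[0]
assert outer["type"] == "repeat"
# Every stage and every "until" clause carries a discriminating "type" field.
for stage in outer["stages"]:
    assert "type" in stage and "until" in stage
print([s["type"] for s in outer["stages"]])  # -> ['wait', 'trigger']
```

Checks like these catch a misspelled key or a missing "until" clause before the plan reaches whatever engine executes it.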
Bruce, if you want a Mossy Oak scope you might be better off buying one that already has that pattern. If you have the scope already I think you can have it dipped, but I've never done this so I don't know if they can do a scope. Camouflage patterns are only for the hunters; if your camo breaks up the outline it doesn't matter what pattern it is. Ron
This is a great thread with LOTS of ideas and some links to other threads too. You'll see there are a lot of scopes that are camo'd with rattle cans. I did it and it's a piece of cake. I used the broom bristle method and it came out sweet. Others use leaves, flyswatters, and all sorts of different things to make the pattern. Your imagination is the only limitation! http://www.californi...d...&hl=mini-14
Why does anybody do anything to their gun? I want to! The rifle is already completely camoed in a MO Brush pattern and I have a new scope on the way and want it to match. Do I need a better reason?
Of course you don't need a better reason...wanting to do it covers it."Why does anybody do anything to their gun?" Usually for some kind of performance enhancement...which is why I asked the question to begin with...but your answer "because I want to" covers it.
Bruce, remington makes shooting glasses with Mossy Oak pattern lenses. You can buy a pair of those and wear them every time you pick up that gun and it will make that scope look like it is painted mossy oak.
I have been thinking about this for a while. You could wrap the scope in cord/string and then paint it either with a brush or spray cans. If it does not turn out like you wanted, either repaint or start over. We are talking mere pennies... for a very classy camo job.
Bruce, remington makes shooting glasses with Mossy Oak pattern lenses. You can buy a pair of those and wear them every time you pick up that gun and it will make that scope look like it is painted mossy oak.
Though you cannot tell because of the shadow... I am actually wearing those sunglasses in my avatar pic. LOL My dogs freaked when I walked out wearing this stuff. The scope still looks black.
Members of the Toronto Police Association are being asked to ratify a new contract that includes an 8.35 per cent salary increase over four years, the Star has learned.
The proposed wage hike will increase salaries by 2.75 per cent this year, followed by 1.95 next year and 1.9 and 1.75 per cent in the final two years of the new deal.
The tentative deal — reached last week between the union and the Toronto Police Services Board — also eliminates a longstanding perk for new recruits that has allowed police employees to bank up to 18 sick days a year.
The service pays out about $12 million annually in unused sick leave to employees who quit or retire.
“That has been in dispute for a long time, and will result in big savings in the long run,” said a police source.
The deal, if ratified, will also increase the amount of time it takes to reach the status of first-class constable, whose base pay is $90,000.
Members will vote on the tentative contract using mail-in balloting over the next few weeks.
TPA president Mike McCormack and board members unveiled details of the deal to members at a meeting Thursday night in an east-end hotel.
Leaving the venue, he declined to comment except to repeat his earlier comment — last week — that the deal is fair to employees and the city.
In 2011, the service’s officers were awarded an 11.4 per cent raise over four years. The last contract expired Dec. 31.
This year’s city budget didn’t include the impact of the salary settlement, though city staff said an estimated provision was built into it.
There was an hour on the clock when England rugby boss Stuart Lancaster turned to Freddie Burns.
He looked at the 22-year-old, a fly-half without a cap to his name – then at the opposition, the world champion All Blacks.
He cleared his throat and gave the instruction. Lancaster had no doubt Burns was up to the task. And neither did the player.
“I’m naturally very confident and I always have been,” said the Gloucester star, who marched onto the Twickenham turf and calmly kicked two penalties to close out a famous England win.
“At the top level you can’t doubt yourself, especially in the fly-half position. Everyone looks at you to be the guy who calls the shots, who runs the game. If you have doubts or fears it’s going to go through the team.”
Burns insists he felt none when Lancaster gave him the nod that momentous afternoon at Twickenham.
“I saw it as a great opportunity rather than a daunting prospect,” he explained.
“I went out telling myself to leave nothing in the changing room – to channel all the positive emotions I’ve had with Gloucester into a big performance.”
This has been a year associated more with England’s other young fly-half. Owen Farrell steered the national team to second in the Six Nations and was shortlisted for World Player of the Year.
But 2013 could very well belong to Burns. “England are looking for creativity and Freddie has the game they are desperate to play,” Dean Ryan, who signed Burns from Bath Academy when he was Gloucester coach, said recently.
He has been electric this season for the Cherry and Whites, who go into today’s home game with Exeter lying fifth in the Premiership and unbeaten in Europe. “I think we’ve taken everyone a little bit by surprise with how quickly we’ve turned things round from last year,” said the Bath-born ace.
Burns was never one to waste time. By the age of six he was playing and, according to dad Jerry, making quite an impression.
“It was his awareness around the pitch,” said Burns snr. “From a young age he could spot things other people didn’t see.”
It helped growing up in a rugby-mad house with three brothers – Jack, Sam and Billy – and a 54-year-old father who all still play.
“Me and my brothers are all backs and dad’s a second row,” said Freddie. “I don’t know where we got our skills from – unless mum was a decent player when she was little!”
Twice this year he has been voted Premiership player of the month and he went into the All Blacks game with Mirror columnist Matt Dawson among those hoping he would start.
Burns, who signed a contract extension with Gloucester in the spring, insists there is still “a lot more to come from me”.
He added: “What Owen has achieved at his age is incredible, the guy’s got nerves of steel.
“I’m a slightly different player to Owen, but hopefully he and I can push each other and bring the best out of each other.”
A retrospective study of Arab American mental health clients: trauma and the Iraqi refugees.
The purpose of this study was to clarify the mental health needs of Iraqi immigrants who arrived in the United States in the 1990s after the Persian Gulf War. The records of 375 clients were examined at a clinic that serves Arab Americans. More posttraumatic stress disorder and health problems were found in Iraqi refugees than in other clients. Results suggest the need for further research on immigrants with traumatic histories to facilitate effective treatments.
Category Archives: Life After Divorce
I tossed a bag of bread into the darling basket labeled bread, only filled with everything but.
What the hell happened to just plopping the bread on the shelf, America? Why does everything have to be so catalog cute?
I looked around this too big house that has homed us for a year and felt how I do most days, like a child playing grown-up.
Since I’ve been able to form thoughts, I’ve been a square peg in a round hole, a slippery fish out of water, an occasionally inept girl whose britches are way too big. I have lived in doubt, managing to be just loud and self-deprecating enough to somehow convince the world otherwise.
I can be chaotic and that’s an understatement. My thoughts are scattered far and wide. The state of my closet mimics that of my mind — reasonably accessible but sort of all over the place — piles of clothes and thoughts shoved into corners in the hope they’ll disappear, if only for a minute. I am forgetful and I procrastinate and I don’t always love to cook dinner. I lack a filter and walk around most days with my foot planted firmly in my mouth. I don’t know when to shut up. I am all or nothing. I can be defensive and exhaustingly mistrustful. Some might even say that I’m a bit of a handful. I prefer work in progress.
***
When I moved into this house I ran as fast as I could from the old one, and even faster from the girl who had lived there. The girl who’d been duped into believing it was she, rather than her relationship, that was defective.
In an effort to be loveable, I knew I had to get my shit together. So I organized my house with the sweetest bins and baskets, and held tight to the hope that my mind would soon follow. I tried my best to close cabinet drawers and doors, and labeled everything I could think to label — myself included. I hung up the piles of clothes and threw the thought pile in a box labeled “fragile handle with care.” I bought a huge calendar and wrote things like “soccer practice” and “snack day” with a pretty new Sharpie. I signed up to be room mom for both boys’ classes, which seemed like a lot but still totally manageable, considering I had that new calendar. I tried fitting myself into so many boxes.
Each morning I carefully put on my gosh she sure does have life by the balls mask. I was Allison 2.0 – now with less shit show! I went to the grocery store to buy responsible adult food I wouldn’t eat, but had already made a god damn label for.
I played the role of proper adult well, despite how fast my head was spinning.
Once I had everything all nice and prettied up, I went out on cookie cutter dates decked out as the new improved lovable me. The men I dated matched me perfectly on paper. Between that and my foolproof plan, I was sure to find a prince who found my quirks and shenanigans endearing.
I waited behind my towering wall to feel the magic that had always eluded me.
And waited…
And waited…
And waited…
Just as I was researching convents, I met a man who felt different but incredibly familiar. He was nice. So nice, in fact, that I erred on the side of extraordinary caution, because I’d already seen that movie a few times and the ending sucked.
Initially, I kept myself tucked safely behind the wall, but over time I grew bolder and began peeking over more and more. But with every peek, I inadvertently exposed more of my real self – the one with all the unlovable piles. After each exposure, I waited patiently for the inevitable fallout. Oddly enough, however, every time I peeked over the wall he was still standing there and even closer. With every slip of my mask and break in character, he laughed louder and held me closer. It was almost as if he actually liked the real me — even, or maybe especially, the messy unlovable parts I was trying so hard to hide from him.
Slowly, I grew more confident and showed him more of my unlovable.
“I am broken and terrified. You should run,” I told him.
And he did run. Only he ran towards me, rather than away. Through it all, he held my fears softly and patiently until I was ready to let them go. He knew I needed more assurance than any confident women should, and he gave it to me time and again with a smile.
“I’m not going anywhere. Period.” he has said to me more times than I can count, without an ounce of annoyance.
In time, his side of the wall began feeling much safer. And considering I could be myself, hot mess and all, it was also much less work.
I’m still getting used to being with a man who always puts me first, even when it’s not the most convenient; a man who accepts all of me, even those parts I was convinced were defective; a man I’ve been searching for my whole life. He is kind to his core and honest. He is better to my boys than I am. He makes me feel safe. And it doesn’t hurt that he’s hot and makes me belly laugh like no other.
I still apologize more than I should. The fear that I’ll lose this still creeps up, albeit less and less. But, this place that I am in — oh this lovely place — has me being kinder and gentler to myself, and inspired to jump back in to all the things I love.
***
We are moving again next week, to a smaller house on a street where the boys can ride bikes…to a house that feels like home, much like he does. This house will have piles on the closet floor, but it will be free of masks.
And this time around I’m taking myself along. Turns out, I’m not so bad after all.
I signed alongside a few Xs today, and now the house with the strong bones is no longer mine. Still, when it’s brought to the ground, it will take some of me with it. Poof! All of it turned into … Continue reading →
I’m not sure which appeared first, the cracks in these walls or the ones in my marriage. Regardless, I noticed the walls first and, truth be told, thought they were charming. I’ve never been drawn to anything perfect. Sadly, though, … Continue reading →
I can still feel the fear running through my veins like it was yesterday. When I was standing on the front lines, and voluntarily so, arguing with myself. “What are you thinking you crazy girl? You freak out eating at the … Continue reading →
I’ve crawled down that street every day, going on two weeks now, in search of hope. I drive slowly, at a snail’s pace, barely breathing. To my right sits old money and, to my left, even older oak trees. I make … Continue reading →
What Makes Price Per Head Software a Great Choice for Bookies?
However difficult it may have seemed at first, the concept of price per head is now being accepted by bookies all around the world for betting. The primary idea behind this software is to help bookies, or bookmakers, easily scale up their onshore business and additionally cater to customers online at any given point in time.
Even if one of the existing clients wishes to shift his location and travel to some other country, you may easily provide him dedicated services and retain your premium and high-quality clientele without being bogged down by issues related to location. If some people request anonymous listing for their usage, you may easily provide them with automated price per head services by utilizing advanced sports betting software systems specifically crafted for you and your client’s usage online.
The services currently offered by price per head software are very different from traditional sports betting operations, which were not error-proof due to their heavy dependency on manual work. At times, manual calculations led to erroneous results, which amounted to heavy losses in terms of losing valuable clients and money. The latest technology used by online bookie platforms provides bookies with better earning prospects, assured service quality for all types of clients, and quick responses. Clients can choose to access their accounts at any given point of time without requiring your physical presence at all. Although the bookmakers (aka bookies) who have been running their business in a traditional setup for a long time are afraid to switch to an online module, those who have been brave enough to take up the challenge have seen immense profit-earning potential, which was not seen in the earlier setup.
If an existing bookie joins an online price per head platform, he can easily see how to scale up his business compared to competitors who still use the traditional setup to serve their clients. While their services are limited to office hours, the bookie who switched can promote his 24*7 services online and attract a number of new clients, which can be considered the best decision for his business.
With the online platform, clients can not only bet on sportsbooks but also on horse races and live casinos (provided these services have been availed by the bookie himself and extended to his existing and new clients).
The PPH concept is now being widely accepted by most of the bookies because of its ease of use and world-class customer support and technical support at any given point in time.
Once this software is incorporated in any business, the bookie can focus on expanding his client base and attract more clients while the backend team can focus on extending support to the existing clients.
Q:
Is it possible to re-encode a BMP made of JPG back to JPG without loss of quality?
Sometimes I save JPG images as uncompressed bitmap (BMP/PNG), to keep the quality when I make changes to the image.
I was wondering: is it theoretically possible to re-encode the bitmap back to its original JPG format without losing any quality (except for the areas I edited)?
Edit: I was thinking somehow to brute-force it to find the original JPG information setting for that block of BMP data, and thus generating JPG out of BMP (which was JPG before) without any difference to original JPG. I don't know enough about JPG format to say if it's even possible, but I can't think why not, at least in some finite time you could brute-force 8x8 block?
A:
JPEG compression is lossy, so you will lose some information when you re-encode the .bmp as a JPEG. If the image is trivial (all black, for example, or a single pixel) you may be able to re-encode without loss.
You can see an example of JPEG being re-encoded multiple time here.
You can do some operations on a JPEG which are lossless, from wikipedia :
A number of alterations to a JPEG image can be performed losslessly
(that is, without recompression and the associated quality loss) as
long as the image size is a multiple of 1 MCU block (Minimum Coded
Unit) (usually 16 pixels in both directions, for 4:2:0 chroma
subsampling). Utilities that implement this include jpegtran, with
user interface Jpegcrop, and the JPG_TRANSFORM plugin to IrfanView.
Blocks can be rotated in 90 degree increments, flipped in the
horizontal, vertical and diagonal axes and moved about in the image.
Not all blocks from the original image need to be used in the modified
one.

The top and left edge of a JPEG image must lie on a 8 × 8 pixel block
boundary, but the bottom and right edge need not do so. This limits
the possible lossless crop operations, and also prevents flips and
rotations of an image whose bottom or right edge does not lie on a
block boundary for all channels (because the edge would end up on top
or left, where – as aforementioned – a block boundary is obligatory).

When using lossless cropping, if the bottom or right side of the crop
region is not on a block boundary then the rest of the data from the
partially used blocks will still be present in the cropped file and
can be recovered.

It is also possible to transform between baseline and progressive
formats without any loss of quality, since the only difference is the
order in which the coefficients are placed in the file.

Furthermore, several JPEG images can be losslessly joined together, as
long as the edges coincide with block boundaries.
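The MCU-alignment constraint quoted above is easy to sketch in a few lines. This is only an illustration of the arithmetic (the function name is made up; the 16-pixel MCU size for 4:2:0 comes from the quote):

```python
def lossless_crop_origin(x, y, mcu=16):
    """Snap a requested crop origin down to the nearest MCU boundary.

    The top and left edges of a losslessly cropped JPEG must lie on an
    MCU boundary (16 px in both directions for 4:2:0 chroma subsampling),
    so any requested origin gets rounded down to a multiple of the MCU.
    """
    return (x - x % mcu, y - y % mcu)

print(lossless_crop_origin(37, 50))  # -> (32, 48)
```

The bottom and right edges need no such snapping, which is why lossless cropping is less restricted on those sides.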
Q:
Why is the value - despite of using GREATEST() - still out of range?
I tried the following to decrease a counter after a post was deleted by using GREATEST() so it does not result a negative value:
SELECT GREATEST(0, posts - 1)
FROM users
WHERE id = 123
but it returns:
[Err] 1690 - BIGINT UNSIGNED value is out of range in '(`db1`.`users`.`posts` - 1)'
posts returns 0:
SELECT posts
FROM users
WHERE id = 123
An the following returns 0 as expected:
SELECT GREATEST(0, 0 - 1)
So what I'm doing wrong?
A:
If your posts column is BIGINT UNSIGNED then unsigned values have to be 0 or more, so -1 is out of range.
MariaDB [test]> desc tbl1;
+-------+---------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+---------------------+------+-----+---------+-------+
| id | bigint(20) unsigned | YES | | NULL | |
+-------+---------------------+------+-----+---------+-------+
1 row in set (0.01 sec)
MariaDB [test]> select id -1 from tbl1;
ERROR 1690 (22003): BIGINT UNSIGNED value is out of range in '(`test`.`tbl1`.`id` - 1)'
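For completeness, a common workaround (assuming `posts` is `BIGINT UNSIGNED` as shown above) is to cast the column to a signed value before subtracting, so the intermediate -1 is representable:

```sql
SELECT GREATEST(0, CAST(posts AS SIGNED) - 1)
FROM users
WHERE id = 123;
```

Alternatively, the `NO_UNSIGNED_SUBTRACTION` SQL mode makes subtraction between unsigned values yield a signed result.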
Editio Critica Maior
Editio Critica Maior (ECM) is a critical edition of the Greek New Testament being produced by the Institut für neutestamentliche Textforschung (Eng. "Institute for New Testament Textual Research") - which is famous, for example, for the Novum Testamentum Graece (or Nestle-Aland) - in collaboration with other international institutes. The ECM is the printed documentation of the expressions of Christian faith communities as they transmitted the New Testament in time through Greek manuscripts, translations, and ancient citations in the first 1,000 years of New Testament transmission. The difference between earlier and later readings is shown by the concept of 'direction' in the ECM, with the earliest expression of the Christian readings printed in the main text. The Coherence Based Genealogical Method (CBGM) is the method being used to construct the ECM. The CBGM has two components - pregenealogical coherence and genealogical coherence. Pregenealogical coherence is a text-critical method which uses computer tools to compare the places of variation in New Testament witnesses to determine if the readings are related. Then, critical principles are applied by a textual scholar to make a decision on the directionality of the reading itself. Places where the direction cannot be determined, or split readings, are indicated by a diamond in the text. The ECM is the first critical edition of the Greek New Testament to include 1) a systematic assessment of witnesses, 2) a mature consideration of those witnesses, 3) a reconstruction of the oldest form of the recoverable text, or initial text, 4) a complete and systematic apparatus, and 5) a full explanation and justification of its methodology and conclusions.
The Editio Critica Maior project is supported by the Union of German Academies of Sciences and Humanities. It is to be completed by the year 2030.
The beginnings
Since the founding of the Institut für neutestamentliche Textforschung (INTF) in 1959 by Kurt Aland, manuscripts – some of which had previously been unknown or lost – were traced and photographed, and all known manuscripts were photographed and cataloged. The INTF thus acquired over 90% of the known material on microfilm or photo.
First evaluation
Initially, in the mid-1980s, a text program was developed. Uniform texts, which, as is well known, constituted the main part in the transmission of the High Middle Ages, were set aside. After the elimination of these texts, the relevant material for the textual history of primarily the first millennium was made available; in addition to the manuscripts of this period, there are numerous later manuscripts that reflect an older textual history.
The still very high number of relevant manuscripts has gradually been transcribed and subjected to a complete text comparison (Vollkollation). The results are recorded in databases. For the first time, this allows a computer-assisted exploration of all the material. Above all, the acquisition of genealogical data is important, as it illuminates the textual history and especially its beginnings. The specific problems that exist in the transmission history are now evident.
The digitization
The institute uses digital methods at all levels of its philological work: manuscripts are captured in the highest possible quality as digital photos. These photos are the basis for the transcriptions made on the computer. Additionally, digitized material from other institutes is taken into consideration. The software Collate is then used to prepare the critical edition.
The result
In 1997 the first installment of this edition was published by Barbara Aland. The Catholic Letters and The Acts of the Apostles are available in print and digitally (via the New Testament Virtual Manuscript Room). In cooperation with the International Greek New Testament Project, the Gospel of John (Johannesevangelium) is in preparation. In addition to the edition published in printed form, a digital platform – the New Testament Virtual Manuscript Room – has been developed, which allows online access to all data collected during the work on the ECM. This includes above all the diplomatic transcripts of all the manuscripts used in the ECM and the databases on which the ECM is based.
The text of the recent 28th edition of Nestle-Aland and of the 5th edition of the United Bible Societies' Greek New Testament follows the ECM, as far as it has been published.
Current editions
The Novum Testamentum Graecum Editio Critica Maior is published by Deutsche Bibelgesellschaft (the German Bible Society).
The Acts of the Apostles
Novum Testamentum Graecum Editio Critica Maior, III/1.1, The Acts of the Apostles, Part 1.1, Text, Chapter 1-14,
Novum Testamentum Graecum Editio Critica Maior, III/1.2, The Acts of the Apostles, Part 1.2, Text, Chapter 15-28,
Novum Testamentum Graecum Editio Critica Maior, III/2, The Acts of the Apostles, Part 2, Supplementary Material,
Novum Testamentum Graecum Editio Critica Maior, III/3, The Acts of the Apostles, Part 3, Studies,
Catholic Letters
Novum Testamentum Graecum Editio Critica Maior, IV/1, Catholic Letters, Part 1, Text,
Novum Testamentum Graecum Editio Critica Maior, IV/2, Catholic Letters, Part 2, Supplementary Material,
Parallel Pericopes
Novum Testamentum Graecum Editio Critica Maior, Parallel Pericopes,
External links
Website about the ECM from The German Bible Society
Information about the ECM on the website of INTF
More information about the ECM
See also
Editio Octava Critica Maior
References
Category:Biblical criticism
Category:Greek New Testament
Category:Textual scholarship
After publishing my article on War Elephants, TripleAAA commented asking about Militia and Minuteman (Minutemen?) against War Elephants. I hadn’t tested that but it seemed like useful information to have, so I put it on the to-do list.
While running these new tests I also ended up with some questions of my own regarding the ranged attack of Mahouts. Was the damage of the melee and ranged attacks identical? Was the damage modifier different for the ranged and melee attacks (i.e. would one attack deal different damage to the other depending on what unit was being hit)?
I set out to answer both sets of questions, and ended up stumbling into a few surprising answers in the course of doing so.
This article is a followup to War Elephant: Most Powerful Unit in the Game? If you haven’t already done so, you may find it helpful to read through that before continuing.
Note: All tests were conducted extensively in EE, followed by some less rigorous verification tests in T&P 2905 (done for roughly half of the EE tests) just to check whether there were any sneaky changes like those I found in the previous article. As far as I can tell based on my results, all information in today’s article applies equally to both game versions, and therefore presumably all versions of T&P / Gold / EE.
Militia hitting War Elephants and Mahouts
Let’s get straight to it with the tests for a single Militia hitting a single War Elephant. These tests used 20-hit samples to minimise margin of error.
This first test was when the War Elephant had +2 armor from a nearby General:
| Attacking unit | Defender | Defender's orientation | Damage per hit |
| --- | --- | --- | --- |
| Militia | War Elephant | front | 1 |
| Militia | War Elephant | side | 1 |
| Militia | War Elephant | rear | 1 |
Looks like we’re reaching the minimum 1-damage-per-hit rule regardless of flanking bonus, although I suppose it’s more of a general guideline than a rule.
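If that floor does exist, it reduces to a one-line rule. This sketch is my own guess at it, and the attack/armor numbers below are placeholders rather than the units' real stats:

```python
def melee_damage(attack, armor):
    # Assumed floor rule: damage never drops below 1 per hit,
    # however far the target's armor exceeds the attack value.
    return max(1, attack - armor)

print(melee_damage(3, 6))   # heavily out-armored -> 1
print(melee_damage(10, 4))  # normal case -> 6
```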
Here are the same tests but with no nearby General for the War Elephant (so no armor bonus):
| Attacking unit | Defender | Defender's orientation | Damage per hit |
| --- | --- | --- | --- |
| Militia | War Elephant | front | 2 |
| Militia | War Elephant | side | 3 |
| Militia | War Elephant | rear | 3 |
These damage numbers are.. interesting. Flanking damage bonuses are normally +50% damage for rear hits and +100% damage for side hits, heavily rewarding good flanking in combat. However, the interaction between Militia and Elephants implies that they don’t follow the expected rules for flanking damage bonuses.
I was curious if this was simply a limitation of Militia as a unit – maybe they just didn’t receive normal flanking bonuses? However in further testing they appeared to follow the normal +50% and +100% damage rules when attacking Age 2 Persian HI. That means it’s not just a Militia thing.
I did some brief followup testing because this was so intriguing. It looks like this is actually a cavalry-wide difference – instead of taking 50% and 100% extra damage for rear and side hits respectively, it looks like cavalry instead suffer a much smaller penalty – tested with Militia and Bowmen against age 2 light, heavy, and ranged cavalry. I might look into this further at a later date to see if I can figure out what exactly the formula is for each unit type.. but back on-topic for now.
Let’s look at the results for Militia against Mahouts, which are Medieval Age (III) units. I’d also like to note here for the sake of completeness that Militia are Classical Age (II) units. The first test gave the Mahout a General providing +2 armor, and each result used a sample of at least 10 hits:
| Attacking unit | Defender | Defender's orientation | Damage per hit |
| --- | --- | --- | --- |
| Militia | Mahout | front | 10 |
| Militia | Mahout | side | 13 |
| Militia | Mahout | rear | 11 |
That’s a marked improvement over what the Militia accomplished against War Elephants.
I repeated the same test but with no nearby General for the Mahout (and therefore no armor bonus). I only took 4-hit samples here because the numbers looked so straightforward:
| Attacking unit | Defender | Defender's orientation | Damage per hit |
| --- | --- | --- | --- |
| Militia | Mahout | front | 12 |
| Militia | Mahout | side | 15 |
| Militia | Mahout | rear | 13 |
Just +2 damage per hit when removing the +2 armor bonus, regardless of flanking bonus.
War Elephants and Mahouts hitting Militia and Citizens
For these tests the Militia / Citizen never had a nearby General or Patriot (so no armor bonus), since that’s the most likely scenario for such units. The four tests in this section have very small hit-samples (1-2 hits) because more than that would’ve killed the defender. Low hit-samples are pretty trivial for such high damage numbers though, so the margin of error is still quite low despite the reduced sample.
Starting off with War Elephants hitting Citizens:
| Attacking unit | Defender | Defender's orientation | Damage per hit |
| --- | --- | --- | --- |
| War Elephant | Citizen | front | 23 |
| War Elephant | Citizen | side/rear | 23 |
It appears Citizens do not receive any kind of damage penalty for being flanked, which seems reasonable in order to make damage taken more consistent when raiding.
What about if the Citizens are converted to Militia?
| Attacking unit | Defender | Defender's orientation | Damage per hit |
| --- | --- | --- | --- |
| War Elephant | Militia | front | 19 |
| War Elephant | Militia | side | 39 |
| War Elephant | Militia | rear | 29 |
That would be a yes – you can see that excluding the Militia’s one armor (reducing damage by the same amount), the War Elephant is dealing 100% more damage on side hits and 50% more damage on rear hits.
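That arithmetic can be sanity-checked in a few lines of Python. The assumption that flat armor is subtracted after the flanking multiplier is mine, but it is the only ordering that reproduces the numbers above:

```python
FLANK_MULT = {"front": 1.0, "rear": 1.5, "side": 2.0}

def flanked_damage(base, facing, armor=0):
    # Standard flanking rule: +50% damage on rear hits, +100% on side
    # hits, with the target's flat armor subtracted from the result.
    return round(base * FLANK_MULT[facing]) - armor

# Reproducing the War Elephant vs. Militia numbers (base 20, 1 armor):
for facing in ("front", "side", "rear"):
    print(facing, flanked_damage(20, facing, armor=1))
# front 19, side 39, rear 29
```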
Let’s switch over to Mahouts and do the same two tests:
| Attacking unit | Defender | Defender's orientation | Damage per hit |
| --- | --- | --- | --- |
| Mahout | Citizen | front | 25 |
| Mahout | Citizen | side/rear | 25 |
| Attacking unit | Defender | Defender's orientation | Damage per hit |
| --- | --- | --- | --- |
| Mahout | Militia | front | 24 |
| Mahout | Militia | side | 49 |
| Mahout | Militia | rear | 36 |
Compared to their predecessor, looks like Mahouts get a small damage bump across the board when hitting Citizens and Militia.
Minutemen and War Elephants
In Enlightenment Age (V) you have the option to switch from the Militia (armed with pitchforks and axes) to the Minuteman (armed with a musket). I don’t often see Minutemen actually get used with much regularity, but let’s see how they fare against the Classical Age War Elephant. The first two tests were taken with a sample of 7-8 hits, depending on how many I could get off without outright killing the defender.
Improvised-musketmen hitting a War Elephant with a nearby General providing +2 armor:
| Attacking unit | Defender | Defender's orientation | Damage per hit |
| --- | --- | --- | --- |
| Minuteman | War Elephant | front | 19 |
| Minuteman | War Elephant | side | 22 |
| Minuteman | War Elephant | rear | 20 |
Same test but no General (so no armor bonus):
| Attacking unit | Defender | Defender's orientation | Damage per hit |
| --- | --- | --- | --- |
| Minuteman | War Elephant | front | 21 |
| Minuteman | War Elephant | side | 24 |
| Minuteman | War Elephant | rear | 22 |
Following the expected pattern with armor so far, with each extra point of armor reducing damage taken by the same amount.
Now let’s flip the tables and get the War Elephant to do the prodding. Small samples here (1-4) due to impending death of the target unit. As before, no armor for the Minuteman:
(╯°□°)╯︵ ┻━┻
Attacking unit  Defender   Defender's orientation  Damage per hit
War Elephant    Minuteman  front                   16
War Elephant    Minuteman  side                    34
War Elephant    Minuteman  rear                    25
And one final test with a War Elephant just to see whether the Minutemen upgrade has any effect on Citizens other than increasing their max HP:
Attacking unit  Defender              Defender's orientation  Damage per hit
War Elephant    Citizen (MM upgrade)  front                   23
War Elephant    Citizen (MM upgrade)  side/rear               23
Negative on hidden effect – phew. Damage taken is stable at 23 just like it was with only the Militia upgrade.
Minutemen and Culverin Mahouts
War Elephants fighting Minutemen is an unusually large tech-mismatch, so let’s see what happens when we bump the War Elephant up to the age-appropriate Culverin Mahout (Age V).
Starting with Culverin Mahouts having a General providing +2 armor:
Attacking unit  Defender         Defender's orientation  Damage per hit
Minuteman       Culverin Mahout  front                   28
Minuteman       Culverin Mahout  side                    33
Minuteman       Culverin Mahout  rear                    31
And now the same test but without the General (so no armor bonus):
Attacking unit  Defender         Defender's orientation  Damage per hit
Minuteman       Culverin Mahout  front                   30
Minuteman       Culverin Mahout  side                    35
Minuteman       Culverin Mahout  rear                    33
It looks like there’s no tricky stuff going on with armor on Elephants; each test just shows a damage reduction equal to the armor bonus provided. Given how weird elephants are in some circumstances it’s good to have that confirmed though.
Once again we’ll reverse the situation and get the Culverin Mahout to hit the Minuteman (1-3 hit sample; no extra armor bonus):
Attacking unit   Defender   Defender's orientation  Damage per hit
Culverin Mahout  Minuteman  front                   19
Culverin Mahout  Minuteman  side                    39
Culverin Mahout  Minuteman  rear                    29
This result is interesting, because Minutemen actually have two armor. I wonder if guns (or perhaps explosives?) have some kind of innate armor penetration effect, since these results look like what you’d expect against a target with just one armor. Tests for another day perhaps.
One thing that has been consistent is that Militia-type units follow the normal rule with flanking damage – 50% extra for hits from the rear and 100% extra for hits taken from the side.
Let’s see how effective a Culverin Mahout is against a Citizen. 1-hit samples and no armor bonus. These have the Minuteman upgrade, not that it should matter:
Attacking unit   Defender              Defender's orientation  Damage per hit
Culverin Mahout  Citizen (MM upgrade)  front                   37
Culverin Mahout  Citizen (MM upgrade)  side/rear               37
The answer: very effective. Looks like it’d be quite a dangerous raider, especially given how difficult it would be to actually kill.
That wraps up the Militia / Minuteman / Citizen tests, but there are still a few more questions to answer.
Mahouts: Melee / Ranged attack?
So what about the melee / ranged attack of Mahouts? How did I factor that in when saying how much damage the Mahout / Culverin Mahout did in the tests above?
As it turns out, Mahouts — including Gun Mahouts and Culverin Mahouts — do not really have both a melee and ranged attack.
Confused? So was I, so I did a whole bunch of tests to figure this whole thing out. Turns out that the melee attack which these units have is not a functional attack – it doesn’t actually deal any damage. It will show an attack animation — and it will even have accompanying SFX — but the target (unit or building, military or civilian) will not die faster compared to just the Mahout exclusively attacking from range.
This is not an Extended Edition exclusive issue either since I tested this fully on T&P 2905 as well for all three Mahout variants and had the same results. I’m also not convinced it’s a bug (or at least, not one unknown to the original developers, who then balanced the unit with its deficiency in mind). I actually think this was done intentionally, or at least left in intentionally (if it was discovered that a simultaneous melee and ranged attack was not able to be implemented) so that players treated the Mahout and its upgrades as front-line units, rather than as glorified ranged cavalry (which they more closely resemble in a functional sense).
An alternative explanation with the same outcome is that in melee range the unit toggles damage off on its ranged attack, and toggles it on on its melee attack. This might be the case with the Age 3 Mahout, as in the first of the two tests above I can’t see any arrow coming out from the left Mahout. Either way the result is the same: when using both attacks, the damage does not increase compared to just using the ranged attack.
As a quirky bit of trivia, Mahouts set to the Hold Ground stance will evidently not use their fake melee attack automatically, even if there are targets adjacent to them. However, once you switch them to a different stance (e.g. Aggressive or Defensive), or tell them manually to attack something, they’ll charge at it and look all strong and mighty while not actually doing any extra damage. This is likely a side effect of whatever black magic is being used to create the false melee attack.
In a way this revelation of a fake melee attack shouldn’t be that surprising – after all, the Mahout units do not have the melee object mask (flag) in unitrules.xml, but do have the ranged object mask. At least one person elsewhere on the web has alluded to the supposed dual ranged-and-melee masks of these units, but in actuality these are what they have:
Unit             Masks  Referring to
Mahout           OMR    O = "Horse Archer", M = "Mounted" (not melee), R = "Archery"
Gun Mahout       OMG    O = "Horse Archer", M = "Mounted", G = "Gun"
Culverin Mahout  OMGX   O = "Horse Archer", M = "Mounted", G = "Gun", X = "Explosive"
These object masks are the same between T&P 2905 and EE. Since elephants (as well as their respective nations in the Indians and Persians) were only introduced in the Thrones and Patriots expansion, it appears that Mahouts never had a true melee attack in any released version of the game.
As an aside, the Culverin Mahout doesn’t appear to have any splash damage on its ranged attack despite the explosion effect that the attack produces. This is supported by its stats in unitrules.xml, where it has a splash damage percentage declared but a splash range of zero.
Closing Remarks
Those were quite a few Militia / Minuteman / Citizen tests, so hopefully that covers everything that TripleAAA was looking for.
With regard to the false melee attack of Mahouts, I would’ve expected that somebody somewhere before me would have noticed something fishy and made this discovery already, but I haven’t been able to find any mention of it anywhere else. It’s possible that there are those who did uncover it in past years, but then they either didn’t share it publicly, or they did but their posts have been lost or buried with time.
In any case, that wraps up the loose leads on elephants for now. I think the only question still in the air is when the huge buffs covered in the previous article were introduced. At the very least it would be helpful to test what the situation is on T&P 0800 (the patched version of the game), but I only have EE and 2905 on hand and I think I’m all elephanted-out after all the research and testing of both these articles. If anyone else would like to pick up that mantle then I would be happy to edit your findings in.
I’ll leave you with this very short clip of a Culverin Mahout getting a quick triple kill. RoN attack timings sometimes have a high level of variance, but I think this one takes the cake.
Testing custom scenarios
Militia testing was done on v13 and v13b. Note that I found v13 to be rather unstable (multiple game crashes during testing), which is why I eventually made 13b to finish the tests.
Minuteman testing was done on 13c and 13d. The name says Classical because that was the original scenario, but the relevant players are appropriately aged (you’ll still need to research red’s Minuteman upgrade at the tower in the south). That also means you can technically use just 13d for all of the tests.
T&P 2905 testing was done on v2 of the previous article’s 2905 map.
Update May 2020: All EE maps used here have now been uploaded to the Steam Workshop. | {
"pile_set_name": "OpenWebText2"
} |
Du Cong
Du Cong (杜悰) (794?-873?), courtesy name Yongyu (永裕), formally the Duke of Bin (邠公), was an official of the Tang dynasty of China, serving two terms as chancellor during the reigns of Emperor Wuzong and Emperor Wuzong's cousin Emperor Yizong. He was traditionally considered a skilled politician who maintained his high position throughout his lengthy career, but not a capable chancellor.
Background and early career
Du Cong came from a prominent aristocratic family, with his grandfather Du You having served as a chancellor during the reigns of Emperor Dezong, Emperor Dezong's son Emperor Shunzong, and Emperor Shunzong's son Emperor Xianzong. Du Cong's father Du Shifang (杜式方) was Du You's second son, and served several terms as minister or regional governor. The famed poet Du Mu was his cousin (son of Du Shifang's brother Du Congyu (杜從郁)).
Because of Du Cong's heritage, he entered civil service early, and as his third assignment he served as a staff member of the Crown Prince. When the imperial scholar Dugu Yu (獨孤郁) offered to resign on account of the fact that his father-in-law Quan Deyu had just been made chancellor, Emperor Xianzong, who was impressed with Dugu's talent, stated, "How is it that Quan Deyu gets a son-in-law like Dugu Yu and I do not?" Therefore, for his own daughters, he turned away from the tradition of selecting their husbands from the households of the nobles and the accomplished generals, instead requesting the officials in charge to select their husbands from scholarly officials whose sons had literary talents. Most of the candidates declined, but Du Cong did not. In 814, Emperor Xianzong therefore had him marry Emperor Xianzong's daughter Princess Qiyang, the oldest daughter of Emperor Xianzong's wife Consort Guo. It was said that Princess Qiyang was humble, unlike many princesses of the day, and, to avoid a situation where her servants would look down on the Du household, she declined to take them with her. Little is known about Du's career during the rest of Emperor Xianzong's reign, or the reigns of his son Emperor Muzong and Emperor Muzong's son Emperor Jingzong, other than that he eventually became minister of agriculture (司農卿, Sinong Qing).
During Emperor Wenzong's reign
In 832, during the reign of Emperor Jingzong's younger brother Emperor Wenzong, Du Cong was made the mayor of Jingzhao Municipality (京兆, i.e., the region of the Tang capital Chang'an). At that time, he was considered a close associate of the chancellor Li Zongmin, a leader of the faction later known as the Niu Faction (named after Li Zongmin's ally Niu Sengru) in the Niu-Li Factional Struggles. He tried to broker a peace between Li Zongmin and Li Deyu, a leader of the rival Li Faction (after whom the Li Faction was named), by suggesting that Li Zongmin offer to recommend Li Deyu to oversee the imperial examinations. Li Zongmin rejected the idea, but agreed to Du's alternate proposal of recommending Li Deyu as chief imperial censor; Li Deyu was pleased, but when Li Zongmin subsequently reneged, the possibility of peace between Li Zongmin and Li Deyu was broken.
In 833, Du was sent out of Chang'an to serve as the military governor (Jiedushi) of Fengxiang Circuit (鳳翔, headquartered in modern Baoji, Shaanxi), as well as the mayor of its capital Fengxiang Municipality. Thereafter, he briefly left government service to observe a mourning period when his mother died. In 834, he was recalled to government service as the military governor of Zhongwu Circuit (忠武, headquartered in Xuchang, Henan). In 835, there was a time when Emperor Wenzong was set to replace him with the general Li Ting (李聽), but Li Ting's commission was cancelled when Emperor Wenzong's close associate Zheng Zhu falsely accused Li Ting of corruption, and Du thus remained at Zhongwu.
Around the new year 838, Du was recalled to Chang'an to serve as the minister of public works (工部尚書, Gongbu Shangshu) and acting director of finances. At that time, Princess Qiyang died; as a result of observing a mourning period for her — as it was customary for princesses' husbands to observe a three-year mourning period for them, although that was not required of ordinary widowers — he did not meet Emperor Wenzong to thank him for the commission, which surprised Emperor Wenzong. The chancellor Li Jue explained the reason why Du was not meeting him and commented, "This is half of the reason why prominent clans' members do not want to engage in marriages with the imperial household." Emperor Wenzong commented that he did not know of this custom, and subsequently issued an edict abolishing it. In 838, Du was made minister of census (戶部尚書, Hubu Shangshu) and continued to act as the director of finances.
During Emperor Wuzong's reign
Emperor Wenzong died in 840 and was succeeded by his younger brother Emperor Wuzong, supported by the powerful eunuchs Qiu Shiliang and Yu Hongzhi (魚弘志), against the wishes of the chancellors Li Jue and Yang Sifu. Therefore, after Emperor Wuzong took the throne, he had Yang and Li Jue removed from their chancellor positions and sent out of the capital. In 841, after further accusations by Qiu against Yang, Li Jue, as well as two eunuchs that Emperor Wenzong had favored, Liu Hongyi (劉弘逸) and Xue Jileng (薛季稜), Emperor Wuzong ordered Liu and Xue to commit suicide, and sent messengers to Tan Prefecture (潭州, in modern Changsha, Hunan), where Yang was serving as the governor of Hunan Circuit (湖南), and Gui Prefecture (桂州, in modern Guilin, Guangxi), where Li Jue was serving as the governor of Gui District (桂管), to order Yang and Li Jue to commit suicide as well. When Du Cong heard of this, he met Li Deyu (who had become the lead chancellor by this point) and warned Li Deyu that Emperor Wuzong, being still a young emperor, should not become accustomed to killing high-level officials. Li Deyu and his fellow chancellors Cui Gong, Cui Dan, and Chen Yixing thus interceded on Yang's and Li Jue's behalf. Emperor Wuzong relented and spared Yang's and Li Jue's lives, although they were further demoted.
As of 844, Du was serving as the military governor of Huainan Circuit (淮南, headquartered in modern Yangzhou, Jiangsu), when Emperor Wuzong issued an order to the eunuch monitor of Huainan Circuit that he should select 17 prostitutes who were capable in drinking games and send them to the palace. The eunuch monitor asked Du to be involved in the selection process, and further contemplated training regular women to learn the drinking games and then submitting them. Du refused to be involved. In anger, the eunuch monitor submitted an accusation against Du. When Emperor Wuzong received the report, however, he reconsidered and came to believe that his original order was inappropriate, and cancelled it. Later in the year, he recalled Du to serve as chancellor with the designation Tong Zhongshu Menxia Pingzhangshi (同中書門下平章事), and also to serve as the director of finances and the director of the salt and iron monopolies. When Du met with him to thank him, he praised Du and compared Du to the early Tang chancellor Wei Zheng. Later in the year, after the imperial campaign against the warlord Liu Zhen resulted in Liu's officer Guo Yi (郭誼) killing Liu and surrendering Liu's Zhaoyi Circuit (昭義, headquartered in modern Changzhi, Shanxi) to the imperial government, Li Deyu argued that Guo was treacherous and should be put to death as well. Emperor Wuzong agreed with Li Deyu. Du, pointing out that at that time the imperial treasury was exhausted, argued for Guo to be tolerated, thus drawing Emperor Wuzong's displeasure. In 845, he was thus removed from his chancellor post. He was soon sent out of the capital to serve as the military governor of Dongchuan Circuit (東川, headquartered in modern Mianyang, Sichuan), and later was transferred to Xichuan Circuit (西川, headquartered in modern Chengdu, Sichuan).
During Emperor Xuānzong's reign
As of 849, by which time Emperor Wuzong had died and been succeeded by his uncle Emperor Xuānzong, Du Cong was at Xichuan. That year, with Tang's rival to the west Tufan in internal turmoil and various Tang circuit armies set out to recover territory that Tang had previously lost to Tufan, Du's Xichuan Circuit recovered Wei Prefecture (維州, in modern Ngawa Tibetan and Qiang Autonomous Prefecture, Sichuan).
Later, Du was transferred back to Huainan Circuit. In 855, Huainan was suffering from a severe famine, but it was said that Du was spending his time in feasting and gaming, not managing the famine relief. When Emperor Xuānzong received report of this, he sent the chancellor Cui Xuan to Huainan to serve as its military governor, and made Du a senior advisor to the Crown Prince, but with his office at the eastern capital Luoyang. A year or so later, he was made the defender of Luoyang. Sometime after, he was returned to Xichuan to serve as its military governor.
During Emperor Yizong's reign
As of 861, by which time Emperor Xuānzong had died and been succeeded by his son Emperor Yizong, Du Cong was back at Chang'an and serving as Zuo Pushe (左僕射, one of the heads of the executive bureau of government (尚書省, Shangshu Sheng)) and the director of finances, when he was made Menxia Shilang (門下侍郎), the deputy head of the examination bureau (門下省, Menxia Sheng) and chancellor again with the designation Tong Zhongshu Menxia Pingzhangshi. It was said that there was a time when Emperor Yizong issued a secret order to him through the eunuch Yang Gongqing (楊公慶) that the other chancellors at the time, Bi Xian, Du Shenquan, and Jiang Shen should be punished for having failed to suggest Emperor Yizong's succession late in Emperor Xuānzong's reign. Du argued against it, pointing out to Yang and the other eunuchs that getting the emperor accustomed to killing would also hurt them in the future. As a result, nothing was eventually done against Bi, Du Shenquan, or Jiang. While serving as chancellor, he was also given the honorific title of Taifu (太傅) and created the Duke of Bin.
At that time, Tang was engaged in a war with Nanzhao over Tang's refusal to bestow imperial sanction on the succession of Nanzhao's new king Qiulong (酋龍) over Qiulong's name being violative of the naming taboo for Emperor Xuanzong (who was named Li Longji). Du suggested that new Tang emissaries be sent to Nanzhao to mourn the death of Qiulong's father Fengyou (豐祐) and inform Qiulong that as soon as he changed his name, Tang would sanction his succession. Emperor Yizong agreed, but before the emissaries could be sent, Nanzhao launched an attack on Xi Prefecture (巂州, in modern Liangshan Yi Autonomous Prefecture, Sichuan) and Qionglai Pass (邛崍關, in modern Ya'an, Sichuan), and so the mission was cancelled.
In 863, Du was sent out of Chang'an to serve as the military governor of Fengxiang, continuing to carry the Tong Zhongshu Menxia Pingzhangshi title as an honorary title. He was eventually transferred to Jingnan Circuit (荊南, headquartered in modern Jingzhou, Hubei). In 873, when Nanzhao attacked both Xichuan and Qianzhong Circuits (黔中, headquartered in modern Chongqing), the defender of Qianzhong, Qin Kuangmou (秦匡謀) had too weak of an army to defend against the Nanzhao attack, and he abandoned it and fled to Jingnan. Du arrested Qin and submitted an accusation against Qin. Emperor Yizong, in response, issued an edict ordering that Qin be executed and that his assets and family be forfeited. This was not a response that Du expected, and, in shock, he suffered an illness and died. He was given posthumous honors.
The traditional accounts of Du's career indicated that he was not talented—that while he served as general and chancellor, he only cared about protecting himself and did not advance the careers of talented people.
Notes and references
Old Book of Tang, vol. 147.
New Book of Tang, vol. 166.
Zizhi Tongjian, vols. 239, 244, 245, 246, 247, 248, 249, 250, 252.
Category:794 births
Category:873 deaths
Category:Chancellors under Emperor Wuzong of Tang
Category:Chancellors under Emperor Yizong of Tang
Category:Mayors of Xi'an
Category:Tang dynasty jiedushi of Fengxiang Circuit
Category:Tang dynasty jiedushi of Xuanwu Circuit
Category:Tang dynasty jiedushi of Huainan Circuit
Category:Tang dynasty jiedushi of Dongchuan Circuit
Category:Tang dynasty jiedushi of Xichuan Circuit
Category:Tang dynasty jiedushi of Jingnan Circuit
Category:Du clan of Jingzhao | {
"pile_set_name": "Wikipedia (en)"
} |
Goal projections for GW1 of the 2017/18 EPL season are put under the microscope
The summer has flown by and the start of the 2017/18 English Premier League season is almost upon us.
Season-long fantasy football managers will be busy compiling their squads, while Gameweek 1 slates are also available with many of the operators that focus exclusively on daily fantasy football. Therefore, we have run the latest Premier League betting odds through the computer to see what the Old Enemy are making of the opening set of fixtures, in terms of expected goals and the chances of a home victory in each match.
Arsenal get the new season under way when they entertain dethroned champions Leicester City at the Emirates and it will be fascinating to see whether Alexis Sánchez, the 2016/17 Fantasy Player of the Season, is still at the club or not.
Fantasy managers will be keen to get a first look at some of the big summer moves as well, so the performances of the likes of Romelu Lukaku, Alexandre Lacazette, Álvaro Morata and Mohamed Salah are sure to come in for plenty of scrutiny.
However, it’s at the other end of the table where fantasy contests can be won and lost so don’t neglect the value of those precious clean sheets.
Champions Chelsea look to have a great chance of securing their first win of the season against a Burnley side that did not travel well in 2016/17, while no side secured more shutouts than Manchester United in the last campaign.
However, former United favourite Javier Hernández might have something to say about that if he lines up for the Hammers at Old Trafford, having now completed the switch to the London Stadium!
The bookmakers seem to be predicting a low scoring affair at the Hawthorns so that could be a game to keep an eye on.
Only 4 teams conceded more goals than Bournemouth last season, including 2 of those that were relegated and this is highlighted by their WhoScored team characteristics data.
The summer acquisitions of Nathan Aké and Asmir Begovic from Chelsea would suggest that manager Eddie Howe is keen to address those shortcomings, so it will be fascinating to see how the Cherries fare against an Albion side that scored 20 of its 43 Premier League goals from set pieces last time around.
The big Kick-Off is almost upon us – will you be focusing on defence or the big-hitters in GW1? | {
"pile_set_name": "OpenWebText2"
} |
Lipolytic remnants of human VLDL produced in vitro. Effect of HDL levels in the lipolysis mixtures on the apoCs to apoE ratio and metabolic properties of VLDL core remnants.
To determine the role of high-density lipoprotein (HDL) as an acceptor of lipolytic surface remnants of very low density lipoprotein (VLDL) in the metabolism of VLDL core remnants, we examined the effect of HDL levels in the VLDL lipolysis mixture on 1) the morphology and the apoCs to E ratio in VLDL core remnants and 2) the metabolic properties of VLDL core remnants in human hepatoma cell line HepG2 and human hepatocytes in the primary culture. Normolipidemic VLDL was lipolyzed in vitro by purified bovine milk lipoprotein lipase (LpL) in a lipolysis mixture containing a physiologic level of VLDL and albumin (30 mg VLDL-cholesterol (CH)/dl and 6% albumin) in the absence and presence of either a low HDL level (VLDL-CH:HDL-CH = 3:1) or a high HDL level (VLDL-CH:HDL-CH = 1:4). Lipolysis of VLDL in either the absence or presence of HDL resulted in the hydrolysis of >85% of VLDL-triglycerides (TG) and the conversion of VLDL into smaller and denser particles. In the absence of HDL, heterogeneous spherical particles with numerous surface vesicular materials were produced. In the presence of low or high HDL, spherical particles containing some or no detectable vesicular surface components were produced. The apoCs to apoE ratios, as determined by densitometric scanning of the SDS polyacrylamide gradient gel, were 2.89 in control VLDL and 2.27, 0.91, and 0.22 in VLDL core remnants produced in the absence and in the presence of low and high HDL levels, respectively. In vitro lipolysis of VLDL markedly increased binding to HepG2 cells at 4 degrees C and internalization and degradation by human hepatocytes in primary culture at 37 degrees C. However, the HDL-mediated decrease in the apoCs to apoE ratio had a minimal effect on binding, internalization, and degradation of VLDL core remnants by HepG2 cells and human hepatocytes in primary culture. 
In order to determine whether HepG2 bound VLDL and VLDL core remnants are deficient in apoCs, (125)I-labeled VLDL and VLDL core remnants were added to HepG2 culture medium at 4 degrees C. The bound particles were released by heparin, and the levels of (125)I-labeled apoCs and apoE, relative to apoB, in the released particles were examined. When compared with those initially added to culture medium, the VLDL and VLDL core remnants released from HepG2 cells had a markedly increased (113%) level of apoE and a reduced (30-39%), but not absent, level of apoCs. We conclude that apoCs, as a minimum structural and/or functional component of VLDL and VLDL core remnants, may not have an inhibitory effect on the binding of VLDL or VLDL core remnants to hepatic apoE receptors. | {
"pile_set_name": "PubMed Abstracts"
} |
/* Copyright 2019-2020 Canaan Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <iostream>
#include <xtl/xspan.hpp>
namespace nncase
{
namespace runtime
{
// Lightweight helper for reading primitive values and arrays from a
// binary std::istream.
class binary_reader
{
public:
binary_reader(std::istream &stream)
: stream_(stream)
{
}
template <class T>
T read()
{
T value;
read(value);
return value;
}
template <class T>
void read(T &value)
{
stream_.read(reinterpret_cast<char *>(&value), sizeof(value));
}
template <class T>
void read_array(xtl::span<T> value)
{
stream_.read(reinterpret_cast<char *>(value.data()), value.size_bytes());
}
std::streampos position() const
{
return stream_.tellg();
}
void position(std::streampos pos)
{
stream_.seekg(pos);
}
void skip(std::streamoff off)
{
stream_.seekg(off, std::ios::cur);
}
size_t avail()
{
auto pos = stream_.tellg();
stream_.seekg(0, std::ios::end);
auto end = stream_.tellg();
stream_.seekg(pos);
return size_t(end - pos);
}
private:
std::istream &stream_;
};
}
}
| {
"pile_set_name": "Github"
} |
n-- means use n, then subtract one, and --n means subtract one from n, then use n. but in the third clause of a for loop that distinction doesn't matter: the expression runs after each iteration and its value is thrown away, so for (n = 1; n >= 1; --n) behaves the same with either form. it runs the body once with n = 1, then drops n to 0 and the condition fails. the inner loop for (count = 1; count <= n; ++count) only skips entirely if it's reached once n has already hit 0, because count starts at 1, which is greater than 0
"pile_set_name": "Pile-CC"
} |
Sugar Free (song)
"Sugar Free" is a song from Australian pop group Wa Wa Nee. The song was released in December 1986 as the third single from their self-titled debut studio album. The song peaked at number 10 on the Australian singles chart, and number 35 in the US on the Billboard Hot 100.
The song is featured in the film, Cassandra.
Track listing
7" (CBS – BA3516)
Side A "Sugar Free" – 4:27
Side B "Wild Days and Windy Nights" – 2:59
12" (CBS – BA12233)
Side A "Sugar Free" (Dance Mix) – 7:08
Side B "Sugar Free" (The Spanking Dub Mix) – 3:50
Side B "Wild Days and Windy Nights" – 2:59
Charts
Weekly charts
Year-end charts
References
Category:1985 songs
Category:1986 singles
Category:Wa Wa Nee songs
Category:Songs written by Paul Gray (songwriter)
Category:CBS Records singles | {
"pile_set_name": "Wikipedia (en)"
} |
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Applies a deformation field (vector field) to a given image with different
// interpolation, extrapolation and conversion options.
#ifndef MULTIDIM_IMAGE_AUGMENTATION_KERNELS_APPLY_DEFORMATION_H_
#define MULTIDIM_IMAGE_AUGMENTATION_KERNELS_APPLY_DEFORMATION_H_
#include <cmath>
#include <vector>
#include "multidim_image_augmentation/platform/types.h"
#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "tensorflow/core/platform/logging.h"
namespace deepmind {
namespace multidim_image_augmentation {
enum InterpolationStyle {
// Nearest neighbour interpolation.
kNearest,
// Linear interpolation.
kLinear,
// Nearest neighbour interpolation in x0-direction, and linear interpolation
// in (x1, x2)-direction. This is useful, if there is a jitter between the
// slices, and you apply an non-integer scaling in the x0-direction.
kMixedNearestLinear,
kNumInterpolationStyles // Add new styles above here.
};
enum ExtrapolationStyle {
// Extrapolation by mirroring.
kMirror,
// Extrapolation by zero padding.
kZeroPadding,
// Extrapolation by padding with a given constant value.
kConstPadding,
kNumExtrapolatoinStyles // Add new styles above here.
};
enum ConversionStyle {
// No conversion of values (e.g. 5 channel input --> 5 channel output).
kNoConversion,
// Convert the indexed input segmentation map (1 channel with values like 3
// for class 3) to a one-hot-encoded output segmentation map (e.g. 8 channels
// with values like (0, 0, 0, 1, 0, 0, 0, 0) for class 3). The one-hot values
// will be collected from the neighbouring pixels. I.e. I.e. the result would
// be identical when first applying the one-hot mapping to the input image
// and then applying a deformation with linear interpolation to the resulting
// multi-channel image.
kIndexedToOneHot,
kNumConversionStyles // Add new styles above here.
};
// Helper function for extrapolation by mirroring. Maps a position outside of
// the valid interval to the corresponding position within the valid
// interval. The interval is [0, width). This function requires width >= 1.
//
// Example for N = 5 |<----valid---->|
// x: -7 -6 -5 -4 -3 -2 -1 | 0 1 2 3 4 | 5 6 7 8 9 10 11 12
// mapped_x: 1 2 3 4 3 2 1 | 0 1 2 3 4 | 3 2 1 0 1 2 3 4
inline int MirrorAtBoundary(int64 x, int64 width) {
// If x is within the interval, everything is fine.
if (0 <= x && x < width) return x;
// If the interval has only one element, all mapped positions point to that
// element.
if (width == 1) return 0;
// Map positions outside the boundaries to the corresponding position within
// the valid interval.
int64 mapped_x = std::abs(x) % (width * 2 - 2);
if (mapped_x >= width) {
mapped_x = (width * 2 - 2) - mapped_x;
}
return mapped_x;
}
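As a sanity check of the mapping table in the comment above, the following standalone sketch reproduces the width = 5 example. The copy is renamed `MirrorAtBoundaryDemo` and uses `int64_t` in place of the library's `int64` alias; both are illustrative assumptions, not part of this header.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Standalone copy of the mirroring rule above, for illustration only.
inline int64_t MirrorAtBoundaryDemo(int64_t x, int64_t width) {
  if (0 <= x && x < width) return x;  // Inside the valid interval.
  if (width == 1) return 0;           // Degenerate one-element interval.
  // The reflected pattern repeats with period 2 * width - 2.
  int64_t mapped_x = std::abs(x) % (width * 2 - 2);
  if (mapped_x >= width) {
    mapped_x = (width * 2 - 2) - mapped_x;  // Reflect back into range.
  }
  return mapped_x;
}
```

Plugging in the out-of-range positions from the table (e.g. x = -7 maps to 1, x = 5 maps to 3, x = 8 maps to 0) confirms the reflection period of 2 * width - 2.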
// Interpolates a value in a 2D multi-channel array using nearest neighbor
// interpolation. The input array has the order (x0, x1, channel).
//
// Parameters:
// in Pointer to the input array.
// extent_x0 Extents of the array.
// extent_x1 -"-
// num_channels -"-
// x0, x1 Position for interpolation.
// pad_element Pointer to padding element (num_channels components). For
// zero padding the caller is responsible to provide a vector
// with zeros here.
// out Pointer to output element (num_channels components).
//
// The meaning of the different options for ExtrapolationStyle and
// ConversionStyle are described above at their definitions.
template <typename InType, typename OutType,
ExtrapolationStyle extrapolation_style,
ConversionStyle conversion_style>
void Interpolate2DNearest(const InType* in, int64 extent_x0, int64 extent_x1,
int64 num_channels, float x0, float x1,
const InType* pad_element, OutType* out) {
// Round coordinates.
int64 int_x0 = std::floor(x0 + 0.5f);
int64 int_x1 = std::floor(x1 + 0.5f);
// Pointer to source pixel.
const InType* p;
switch (extrapolation_style) {
case kMirror: {
// Mirror at boundaries.
int_x0 = MirrorAtBoundary(int_x0, extent_x0);
int_x1 = MirrorAtBoundary(int_x1, extent_x1);
const int64 stride0 = extent_x1 * num_channels;
const int64 stride1 = num_channels;
p = in + int_x0 * stride0 + int_x1 * stride1;
break;
}
case kZeroPadding:
case kConstPadding: {
if (int_x0 >= 0 && int_x0 < extent_x0 && int_x1 >= 0 &&
int_x1 < extent_x1) {
const int64 stride0 = extent_x1 * num_channels;
const int64 stride1 = num_channels;
p = in + int_x0 * stride0 + int_x1 * stride1;
} else {
p = pad_element;
}
}
}
// Iterate over all channels and copy the values. If requested, apply
// on-the-fly conversion from indexed to one-hot-encoding.
switch (conversion_style) {
case kNoConversion: {
std::copy_n(p, num_channels, out);
break;
}
case kIndexedToOneHot: {
out[static_cast<int64>(*p)] = 1;
}
}
}
// Interpolates a value in a 2D multi-channel array using linear interpolation.
// For documentation of the parameters, see Interpolate2DNearest above.
template <typename InType, typename OutType,
ExtrapolationStyle extrapolation_style,
ConversionStyle conversion_style>
void Interpolate2DLinear(const InType* in, int64 extent_x0, int64 extent_x1,
int64 num_channels, float x0, float x1,
const InType* pad_element, OutType* out) {
// Compute the floor and the residual part of each coordinate.
const int64 int_x0 = std::floor(x0);
const int64 int_x1 = std::floor(x1);
const float res_x0 = x0 - int_x0;
const float res_x1 = x1 - int_x1;
// Compute weights for the 4 neighbour elements.
const float w00 = (1.f - res_x0) * (1.f - res_x1);
const float w01 = (1.f - res_x0) * (res_x1);
const float w10 = (res_x0) * (1.f - res_x1);
const float w11 = (res_x0) * (res_x1);
// Set up pointers to the 4 neighbour elements,
// according to the extrapolation style.
const InType* p00;
const InType* p01;
const InType* p10;
const InType* p11;
const int64 stride0 = extent_x1 * num_channels;
const int64 stride1 = num_channels;
switch (extrapolation_style) {
case kMirror: {
// Compute valid positions in all 4 neighbour directions.
const int64 x0_0 = MirrorAtBoundary(int_x0, extent_x0);
const int64 x1_0 = MirrorAtBoundary(int_x1, extent_x1);
const int64 x0_1 = MirrorAtBoundary(int_x0 + 1, extent_x0);
const int64 x1_1 = MirrorAtBoundary(int_x1 + 1, extent_x1);
// Pointers to the 4 neighbour elements in the first channel.
p00 = in + x0_0 * stride0 + x1_0 * stride1;
p01 = in + x0_0 * stride0 + x1_1 * stride1;
p10 = in + x0_1 * stride0 + x1_0 * stride1;
p11 = in + x0_1 * stride0 + x1_1 * stride1;
break;
}
case kZeroPadding:
case kConstPadding: {
// Check which of the 4 neighbour directions are within bounds.
const bool valid_x0_0 = (0 <= int_x0 && int_x0 < extent_x0);
const bool valid_x1_0 = (0 <= int_x1 && int_x1 < extent_x1);
const bool valid_x0_1 = (0 <= int_x0 + 1 && int_x0 + 1 < extent_x0);
const bool valid_x1_1 = (0 <= int_x1 + 1 && int_x1 + 1 < extent_x1);
// Pointers to the 4 neighbour elements, or to pad_element if out of bounds.
const InType* p = in + int_x0 * stride0 + int_x1 * stride1;
p00 = (valid_x0_0 && valid_x1_0) ? p + 0 * stride0 + 0 * stride1
: pad_element;
p01 = (valid_x0_0 && valid_x1_1) ? p + 0 * stride0 + 1 * stride1
: pad_element;
p10 = (valid_x0_1 && valid_x1_0) ? p + 1 * stride0 + 0 * stride1
: pad_element;
p11 = (valid_x0_1 && valid_x1_1) ? p + 1 * stride0 + 1 * stride1
: pad_element;
}
}
// Iterate over all channels and do the interpolation. If requested, apply
// on-the-fly conversion from indexed to one-hot-encoding.
switch (conversion_style) {
case kNoConversion: {
for (int64 i = 0; i < num_channels; ++i) {
out[i] = w00 * p00[i] + w01 * p01[i] + w10 * p10[i] + w11 * p11[i];
}
break;
}
case kIndexedToOneHot: {
// Distribute the contributions of all neighbouring pixels to
// the respective channel, i.e. do one-hot encoding on-the-fly.
out[static_cast<int64>(*p00)] += w00;
out[static_cast<int64>(*p01)] += w01;
out[static_cast<int64>(*p10)] += w10;
out[static_cast<int64>(*p11)] += w11;
}
}
}
// Interpolates a value in a 3D multi-channel array using nearest neighbour
// interpolation. The input array has the order (x0, x1, x2, channel).
//
// Parameters:
// in Pointer to the input array.
// extent_x0 Extents of the array.
// extent_x1 -"-
// extent_x2 -"-
// num_channels -"-
// x0, x1, x2 Position for interpolation.
// pad_element Pointer to padding element (num_channels components). For
// zero padding the caller is responsible to provide a vector
// with zeros here.
// out Pointer to output element (num_channels components).
//
// The meaning of the different options for ExtrapolationStyle and
// ConversionStyle are described above at their definitions.
//
template <typename InType, typename OutType,
ExtrapolationStyle extrapolation_style,
ConversionStyle conversion_style>
void Interpolate3DNearest(const InType* in, int64 extent_x0, int64 extent_x1,
int64 extent_x2, int64 num_channels, float x0,
float x1, float x2, const InType* pad_element,
OutType* out) {
// Round coordinates.
int64 int_x0 = std::floor(x0 + 0.5f);
int64 int_x1 = std::floor(x1 + 0.5f);
int64 int_x2 = std::floor(x2 + 0.5f);
// Pointer to source pixel.
const InType* p;
switch (extrapolation_style) {
case kMirror: {
// Mirror at boundaries.
int_x0 = MirrorAtBoundary(int_x0, extent_x0);
int_x1 = MirrorAtBoundary(int_x1, extent_x1);
int_x2 = MirrorAtBoundary(int_x2, extent_x2);
const int64 stride0 = extent_x1 * extent_x2 * num_channels;
const int64 stride1 = extent_x2 * num_channels;
const int64 stride2 = num_channels;
p = in + int_x0 * stride0 + int_x1 * stride1 + int_x2 * stride2;
break;
}
case kZeroPadding:
case kConstPadding: {
if (int_x0 >= 0 && int_x0 < extent_x0 && int_x1 >= 0 &&
int_x1 < extent_x1 && int_x2 >= 0 && int_x2 < extent_x2) {
const int64 stride0 = extent_x1 * extent_x2 * num_channels;
const int64 stride1 = extent_x2 * num_channels;
const int64 stride2 = num_channels;
p = in + int_x0 * stride0 + int_x1 * stride1 + int_x2 * stride2;
} else {
p = pad_element;
}
}
}
// Iterate over all channels and copy the values. If requested, apply
// on-the-fly conversion from indexed to one-hot-encoding.
switch (conversion_style) {
case kNoConversion: {
std::copy_n(p, num_channels, out);
break;
}
case kIndexedToOneHot: {
out[static_cast<int64>(*p)] = 1;
}
}
}
// Interpolates a value in a 3D multi-channel array using linear interpolation.
// For documentation of the parameters, see Interpolate3DNearest above.
template <typename InType, typename OutType,
ExtrapolationStyle extrapolation_style,
ConversionStyle conversion_style>
void Interpolate3DLinear(const InType* in, int64 extent_x0, int64 extent_x1,
int64 extent_x2, int64 num_channels, float x0,
float x1, float x2, const InType* pad_element,
OutType* out) {
// Compute the floor and the residual part of each coordinate.
const int64 int_x0 = std::floor(x0);
const int64 int_x1 = std::floor(x1);
const int64 int_x2 = std::floor(x2);
const float res_x0 = x0 - int_x0;
const float res_x1 = x1 - int_x1;
const float res_x2 = x2 - int_x2;
// Compute weights for the 8 neighbour elements.
const float w000 = (1.f - res_x0) * (1.f - res_x1) * (1.f - res_x2);
const float w001 = (1.f - res_x0) * (1.f - res_x1) * (res_x2);
const float w010 = (1.f - res_x0) * (res_x1) * (1.f - res_x2);
const float w011 = (1.f - res_x0) * (res_x1) * (res_x2);
const float w100 = (res_x0) * (1.f - res_x1) * (1.f - res_x2);
const float w101 = (res_x0) * (1.f - res_x1) * (res_x2);
const float w110 = (res_x0) * (res_x1) * (1.f - res_x2);
const float w111 = (res_x0) * (res_x1) * (res_x2);
// Set up pointers to the 8 neighbour elements,
// according to the extrapolation style.
const InType* p000;
const InType* p001;
const InType* p010;
const InType* p011;
const InType* p100;
const InType* p101;
const InType* p110;
const InType* p111;
const int64 stride0 = extent_x1 * extent_x2 * num_channels;
const int64 stride1 = extent_x2 * num_channels;
const int64 stride2 = num_channels;
switch (extrapolation_style) {
case kMirror: {
// Compute valid positions in all 6 neighbour directions.
const int64 x0_0 = MirrorAtBoundary(int_x0, extent_x0);
const int64 x1_0 = MirrorAtBoundary(int_x1, extent_x1);
const int64 x2_0 = MirrorAtBoundary(int_x2, extent_x2);
const int64 x0_1 = MirrorAtBoundary(int_x0 + 1, extent_x0);
const int64 x1_1 = MirrorAtBoundary(int_x1 + 1, extent_x1);
const int64 x2_1 = MirrorAtBoundary(int_x2 + 1, extent_x2);
// Pointers to the 8 neighbour elements in the first channel.
p000 = in + x0_0 * stride0 + x1_0 * stride1 + x2_0 * stride2;
p001 = in + x0_0 * stride0 + x1_0 * stride1 + x2_1 * stride2;
p010 = in + x0_0 * stride0 + x1_1 * stride1 + x2_0 * stride2;
p011 = in + x0_0 * stride0 + x1_1 * stride1 + x2_1 * stride2;
p100 = in + x0_1 * stride0 + x1_0 * stride1 + x2_0 * stride2;
p101 = in + x0_1 * stride0 + x1_0 * stride1 + x2_1 * stride2;
p110 = in + x0_1 * stride0 + x1_1 * stride1 + x2_0 * stride2;
p111 = in + x0_1 * stride0 + x1_1 * stride1 + x2_1 * stride2;
break;
}
case kZeroPadding:
case kConstPadding: {
// Check which of the 6 neighbour directions are within bounds.
const bool valid_x0_0 = (0 <= int_x0 && int_x0 < extent_x0);
const bool valid_x1_0 = (0 <= int_x1 && int_x1 < extent_x1);
const bool valid_x2_0 = (0 <= int_x2 && int_x2 < extent_x2);
const bool valid_x0_1 = (0 <= int_x0 + 1 && int_x0 + 1 < extent_x0);
const bool valid_x1_1 = (0 <= int_x1 + 1 && int_x1 + 1 < extent_x1);
const bool valid_x2_1 = (0 <= int_x2 + 1 && int_x2 + 1 < extent_x2);
// Pointers to 8 neighbour elements, or to pad_element if out of bounds.
const InType* p =
in + int_x0 * stride0 + int_x1 * stride1 + int_x2 * stride2;
p000 = (valid_x0_0 && valid_x1_0 && valid_x2_0)
? p + 0 * stride0 + 0 * stride1 + 0 * stride2
: pad_element;
p001 = (valid_x0_0 && valid_x1_0 && valid_x2_1)
? p + 0 * stride0 + 0 * stride1 + 1 * stride2
: pad_element;
p010 = (valid_x0_0 && valid_x1_1 && valid_x2_0)
? p + 0 * stride0 + 1 * stride1 + 0 * stride2
: pad_element;
p011 = (valid_x0_0 && valid_x1_1 && valid_x2_1)
? p + 0 * stride0 + 1 * stride1 + 1 * stride2
: pad_element;
p100 = (valid_x0_1 && valid_x1_0 && valid_x2_0)
? p + 1 * stride0 + 0 * stride1 + 0 * stride2
: pad_element;
p101 = (valid_x0_1 && valid_x1_0 && valid_x2_1)
? p + 1 * stride0 + 0 * stride1 + 1 * stride2
: pad_element;
p110 = (valid_x0_1 && valid_x1_1 && valid_x2_0)
? p + 1 * stride0 + 1 * stride1 + 0 * stride2
: pad_element;
p111 = (valid_x0_1 && valid_x1_1 && valid_x2_1)
? p + 1 * stride0 + 1 * stride1 + 1 * stride2
: pad_element;
}
}
// Iterate over all channels and do the interpolation. If requested, apply
// on-the-fly conversion from indexed to one-hot-encoding.
switch (conversion_style) {
case kNoConversion: {
for (int64 i = 0; i < num_channels; ++i) {
out[i] = w000 * p000[i] + w001 * p001[i] + w010 * p010[i] +
w011 * p011[i] + w100 * p100[i] + w101 * p101[i] +
w110 * p110[i] + w111 * p111[i];
}
break;
}
case kIndexedToOneHot: {
// Distribute the contributions of all neighbouring pixels to
// the respective channel, i.e. do one-hot encoding on-the-fly.
out[static_cast<int64>(*p000)] += w000;
out[static_cast<int64>(*p001)] += w001;
out[static_cast<int64>(*p010)] += w010;
out[static_cast<int64>(*p011)] += w011;
out[static_cast<int64>(*p100)] += w100;
out[static_cast<int64>(*p101)] += w101;
out[static_cast<int64>(*p110)] += w110;
out[static_cast<int64>(*p111)] += w111;
}
}
}
// Interpolates a value in a 3D multi-channel array using mixed interpolation:
// Nearest neighbor in x0-direction and linear interpolation in x1,x2-direction.
template <typename InType, typename OutType,
ExtrapolationStyle extrapolation_style,
ConversionStyle conversion_style>
void Interpolate3DMixedNearestLinear(const InType* in, int64 extent_x0,
int64 extent_x1, int64 extent_x2,
int64 num_channels, float x0, float x1,
float x2, const InType* pad_element,
OutType* out) {
// Round coordinate in x0 direction.
int64 int_x0 = std::floor(x0 + 0.5f);
// Pointer to source slice.
const InType* slice;
switch (extrapolation_style) {
case kMirror: {
// Mirror at boundaries.
int_x0 = MirrorAtBoundary(int_x0, extent_x0);
const int64 stride0 = extent_x1 * extent_x2 * num_channels;
slice = in + int_x0 * stride0;
break;
}
case kZeroPadding:
case kConstPadding: {
if (int_x0 >= 0 && int_x0 < extent_x0) {
const int64 stride0 = extent_x1 * extent_x2 * num_channels;
slice = in + int_x0 * stride0;
} else {
slice = pad_element;
}
}
}
// If we are on a valid slice, do 2D linear interpolation there.
if (slice != pad_element) {
Interpolate2DLinear<InType, OutType, extrapolation_style, conversion_style>(
slice, extent_x1, extent_x2, num_channels, x1, x2, pad_element, out);
} else {
// Copy pad_element to the output. If requested, apply
// on-the-fly conversion from indexed to one-hot-encoding.
switch (conversion_style) {
case kNoConversion: {
std::copy_n(pad_element, num_channels, out);
break;
}
case kIndexedToOneHot: {
out[static_cast<int64>(*pad_element)] = 1;
}
}
}
}
// Perform an optimized Transform2D. The default implementation returns false to
// indicate it did not run.
template <typename InTensor, typename DeformTensor, typename OutTensor,
InterpolationStyle interpolation_style,
ExtrapolationStyle extrapolation_style,
ConversionStyle conversion_style>
class OptimizedTransform2D {
public:
static bool Run(const InTensor& in, const DeformTensor& deform,
const typename InTensor::Scalar* padding_constant,
OutTensor* out) {
return false;
}
};
// Performs the 2D deformation. Helper function for ApplyDeformation::Deform2D.
template <typename InTensor, typename DeformTensor, typename OutTensor,
typename Functor>
static void Transform2D(const InTensor& in, const DeformTensor& deform,
Functor Interpolator,
const typename InTensor::Scalar* padding_constant,
OutTensor* out_p) {
using InType = typename InTensor::Scalar;
using DeformType = typename DeformTensor::Scalar;
using OutType = typename OutTensor::Scalar;
OutTensor& out = *out_p;
const int64 in_extent_x0 = in.dimension(0);
const int64 in_extent_x1 = in.dimension(1);
const int64 num_channels = in.dimension(2);
const int64 out_extent_x0 = out.dimension(0);
const int64 out_extent_x1 = out.dimension(1);
// Use central part of deformation map if target image is smaller.
const int64 offset0 = (deform.dimension(0) - out_extent_x0) / 2;
const int64 offset1 = (deform.dimension(1) - out_extent_x1) / 2;
// Create a zero-padding vector, if necessary.
const InType* padding_constant_p;
std::vector<InType> zero_padding;
if (padding_constant != nullptr) {
padding_constant_p = padding_constant;
} else {
zero_padding.resize(num_channels, 0);
padding_constant_p = zero_padding.data();
}
const InType* in_p = &in(0, 0, 0);
for (int64 x0 = 0; x0 < out_extent_x0; ++x0) {
const DeformType* deform_iter = &deform(offset0 + x0, offset1, 0);
OutType* out_iter = &out(x0, 0, 0);
for (int64 x1 = 0; x1 < out_extent_x1; ++x1) {
Interpolator(in_p, in_extent_x0, in_extent_x1, num_channels,
deform_iter[0], deform_iter[1], padding_constant_p,
out_iter);
deform_iter += 2;
out_iter += out.dimension(2);
}
}
}
// Performs the 3D deformation. Helper function for ApplyDeformation::Deform3D.
template <typename InTensor, typename DeformTensor, typename OutTensor,
typename Functor>
static void Transform3D(const InTensor& in, const DeformTensor& deform,
Functor Interpolator,
const typename InTensor::Scalar* padding_constant,
OutTensor* out_p) {
using InType = typename InTensor::Scalar;
using DeformType = typename DeformTensor::Scalar;
using OutType = typename OutTensor::Scalar;
OutTensor& out = *out_p;
const int64 in_extent_x0 = in.dimension(0);
const int64 in_extent_x1 = in.dimension(1);
const int64 in_extent_x2 = in.dimension(2);
const int64 num_channels = in.dimension(3);
const int64 out_extent_x0 = out.dimension(0);
const int64 out_extent_x1 = out.dimension(1);
const int64 out_extent_x2 = out.dimension(2);
// Use central part of deformation map if target image is smaller.
const int64 offset0 = (deform.dimension(0) - out_extent_x0) / 2;
const int64 offset1 = (deform.dimension(1) - out_extent_x1) / 2;
const int64 offset2 = (deform.dimension(2) - out_extent_x2) / 2;
// Create a zero-padding vector, if necessary.
const InType* padding_constant_p;
std::vector<InType> zero_padding;
if (padding_constant != nullptr) {
padding_constant_p = padding_constant;
} else {
zero_padding.resize(num_channels, 0);
padding_constant_p = zero_padding.data();
}
const InType* in_p = &in(0, 0, 0, 0);
for (int64 x0 = 0; x0 < out_extent_x0; ++x0) {
for (int64 x1 = 0; x1 < out_extent_x1; ++x1) {
const DeformType* deform_iter =
&deform(offset0 + x0, offset1 + x1, offset2, 0);
OutType* out_iter = &out(x0, x1, 0, 0);
for (int64 x2 = 0; x2 < out_extent_x2; ++x2) {
Interpolator(in_p, in_extent_x0, in_extent_x1, in_extent_x2,
num_channels, deform_iter[0], deform_iter[1],
deform_iter[2], padding_constant_p, out_iter);
deform_iter += 3;
out_iter += out.dimension(3);
}
}
}
}
// Applies a deformation field (vector field) to a given image. The deformation
// field describes the backward transformation, i.e. for each position in the
// _output_ image it specifies the corresponding position in the _input_ image:
//
// O(x) = I(D(x))
//
// where (in the case of 3D single-channel images):
//
// x in R^3
// I: R^3 --> R (input image)
// O: R^3 --> R (output image)
// D: R^3 --> R^3 (deformation field)
//
// The implementation iterates over all positions in the output image. For each
// output position it fetches the corresponding input position from the
// deformation field, interpolates (or extrapolates) the value at this position
// in the input image and stores the resulting value in the output image. The
// vectors in the deformation field must be provided as raw pixel coordinates,
// i.e. (x0, x1, x2) relative to the upper-left-front corner of the array.
// Example usage:
//
// // 3D images with 2 channels.
// Eigen::Tensor<float, 4, Eigen::RowMajor> in(4, 10, 7, 2);
// Eigen::Tensor<float, 4, Eigen::RowMajor> out(4, 10, 7, 2);
// Eigen::Tensor<float, 4, Eigen::RowMajor> deform(4, 10, 7, 3);
//
// // Initialize images.
// in.setRandom();
// out.setZero();
//
// // Initialize deformation field as identity transformation.
// for (int x0 = 0; x0 < deform.dimension(0); ++x0) {
// for (int x1 = 0; x1 < deform.dimension(1); ++x1) {
// for (int x2 = 0; x2 < deform.dimension(2); ++x2) {
// deform(x0, x1, x2, 0) = x0;
// deform(x0, x1, x2, 1) = x1;
// deform(x0, x1, x2, 2) = x2;
// }
// }
// }
//
// // Apply deformation with linear interpolation, zero padding extrapolation
// // and no conversion of the intensities.
// ApplyDeformation<kLinear, kZeroPadding, kNoConversion>::Deform3D(in,
// deform,
// &out);
//
// The meaning of the different options for InterpolationStyle,
// ExtrapolationStyle and ConversionStyle are described above at their
// definitions.
//
// All tensors (in, out, deform) must be 3-D or 4-D (for Deform2D and Deform3D
// respectively) and have a RowMajor layout with shape `[extent_x0, extent_x1,
// (extent_x2,) num_channels]` . `in` and `out` can be single-channel
// (num_channels = 1) or multi-channel images. In case of `kNoConversion` the
// number of channels must be identical. For `kIndexedToOneHot` the input image
// (usually a segmentation map) must be single-channel, and the output image
// must have enough channels to store the one-hot-encoding.
//
// ATTENTION (for `kIndexedToOneHot`): If the input segmentation map contains a
// value outside the interval [0, number of output channels), this function
// will die with an error.
//
// The input image and the output image can have arbitrary spatial extents. The
// deformation field must be as large as the output image or larger. If it is
// larger, the central part of the deformation field is used. This is
// especially useful when a segmentation network takes a larger input image
// than the output segmentation map (e.g. a u-net that uses valid convolutions
// only), but both need to be deformed with the same deformation field.
//
template <InterpolationStyle interpolation_style,
ExtrapolationStyle extrapolation_style,
ConversionStyle conversion_style,
bool use_avx_optimizations = true>
class ApplyDeformation {
public:
// Deforms a 2-D multi-channel array (3-D Tensor). See class documentation for
// details.
template <typename InTensor, typename DeformTensor, typename OutTensor>
static void Deform2D(
const InTensor& in, const DeformTensor& deform, OutTensor* out_p,
const typename InTensor::Scalar* padding_constant = nullptr) {
OutTensor& out = *out_p;
static_assert(interpolation_style != kMixedNearestLinear,
"`kMixedNearestLinear` can not be used for 2D deformation.");
static_assert(static_cast<int>(InTensor::Layout) == Eigen::RowMajor,
"Input Tensor must have row major layout.");
static_assert(InTensor::NumIndices == 3, "Input Tensor must be 3-D.");
static_assert(static_cast<int>(DeformTensor::Layout) == Eigen::RowMajor,
"Deform Tensor must have row major layout.");
static_assert(DeformTensor::NumIndices == 3, "Deform Tensor must be 3-D.");
static_assert(static_cast<int>(OutTensor::Layout) == Eigen::RowMajor,
"Output Tensor must have row major layout.");
static_assert(OutTensor::NumIndices == 3, "Output Tensor must be 3-D.");
if (conversion_style == kIndexedToOneHot) {
DCHECK_EQ(in.dimension(2), 1) << "Input image must have 1 channel for "
"indexed-to-one-hot conversion.";
// Check if all values in the input segmentation map are in the allowed
// interval [0, number of output channels)
for (int64 i = 0; i < in.size(); ++i) {
DCHECK_GE(in.data()[i], 0)
<< "Input image (segmentation map) must only contain "
"non-negative values. Value at index "
<< i << " failed.";
DCHECK_LT(in.data()[i], out.dimension(2))
<< "Value " << in.data()[i]
<< " in input segmentation map at position " << i
<< " cannot be represented as one-hot-encoding in a vector with "
"only "
<< out.dimension(2) << " elements.";
}
} else {
DCHECK_EQ(in.dimension(2), out.dimension(2))
<< "`in` and `out` must have same number of channels, if no "
"conversion is selected.";
}
DCHECK_EQ(deform.dimension(2), 2)
<< "Deformation field must have 2 channels.";
DCHECK_GE(deform.dimension(0), out.dimension(0))
<< "Deformation field size in x0 direction must be greater than or "
"equal to the output image size.";
DCHECK_GE(deform.dimension(1), out.dimension(1))
<< "Deformation field size in x1 direction must be greater than or "
"equal to the output image size.";
DCHECK_EQ((deform.dimension(0) - out.dimension(0)) % 2, 0)
<< "Difference between deformation field size and output image size "
"in x0 direction must be even.";
DCHECK_EQ((deform.dimension(1) - out.dimension(1)) % 2, 0)
<< "Difference between deformation field size and output image size "
"in x1 direction must be even.";
using InType = typename InTensor::Scalar;
using OutType = typename OutTensor::Scalar;
// For one-hot-encoding, initialise the output tensor to zero.
if (conversion_style == kIndexedToOneHot) {
out_p->setZero();
}
if (use_avx_optimizations &&
OptimizedTransform2D<InTensor, DeformTensor, OutTensor,
interpolation_style, extrapolation_style,
conversion_style>::Run(in, deform,
padding_constant, out_p)) {
return;
}
switch (interpolation_style) {
case kNearest: {
Transform2D(in, deform,
Interpolate2DNearest<InType, OutType, extrapolation_style,
conversion_style>,
padding_constant, out_p);
break;
}
case kLinear: {
Transform2D(in, deform,
Interpolate2DLinear<InType, OutType, extrapolation_style,
conversion_style>,
padding_constant, out_p);
break;
}
default: {
LOG(ERROR) << "Unsupported interpolation style.";
break;
}
}
}
// Deforms a 3-D multi-channel array (4-D Tensor). See class documentation for
// details.
template <typename InTensor, typename DeformTensor, typename OutTensor>
static void Deform3D(
const InTensor& in, const DeformTensor& deform, OutTensor* out_p,
const typename InTensor::Scalar* padding_constant = nullptr) {
OutTensor& out = *out_p;
static_assert(static_cast<int>(InTensor::Layout) == Eigen::RowMajor,
"Input Tensor must have row major layout.");
static_assert(InTensor::NumIndices == 4, "Input Tensor must be 4-D.");
static_assert(static_cast<int>(DeformTensor::Layout) == Eigen::RowMajor,
"Deform Tensor must have row major layout.");
static_assert(DeformTensor::NumIndices == 4, "Deform Tensor must be 4-D.");
static_assert(static_cast<int>(OutTensor::Layout) == Eigen::RowMajor,
"Output Tensor must have row major layout.");
static_assert(OutTensor::NumIndices == 4, "Output Tensor must be 4-D.");
if (conversion_style == kIndexedToOneHot) {
DCHECK_EQ(in.dimension(3), 1) << "Input image must have 1 channel for "
"indexed-to-one-hot conversion.";
// Check if all values in the input segmentation map are in the allowed
// interval [0, number of output channels)
for (int64 i = 0; i < in.size(); ++i) {
DCHECK_GE(in.data()[i], 0)
<< "Input image (segmentation map) must only contain "
"non-negative values. Value at index "
<< i << " failed.";
DCHECK_LT(in.data()[i], out.dimension(3))
<< "Value " << in.data()[i]
<< " in input segmentation map at position " << i
<< " cannot be represented as one-hot-encoding in a vector with "
"only "
<< out.dimension(3) << " elements.";
}
} else {
DCHECK_EQ(in.dimension(3), out.dimension(3))
<< "`in` and `out` must have same number of channels, if no "
"conversion is selected.";
}
DCHECK_EQ(deform.dimension(3), 3)
<< "Deformation field must have 3 channels.";
DCHECK_GE(deform.dimension(0), out.dimension(0))
<< "Deformation field size in x0 direction must be greater than or "
"equal to the output image size.";
DCHECK_GE(deform.dimension(1), out.dimension(1))
<< "Deformation field size in x1 direction must be greater than or "
"equal to the output image size.";
DCHECK_GE(deform.dimension(2), out.dimension(2))
<< "Deformation field size in x2 direction must be greater than or "
"equal to the output image size.";
DCHECK_EQ((deform.dimension(0) - out.dimension(0)) % 2, 0)
<< "Difference between deformation field size and output image size "
"in x0 direction must be even.";
DCHECK_EQ((deform.dimension(1) - out.dimension(1)) % 2, 0)
<< "Difference between deformation field size and output image size "
"in x1 direction must be even.";
DCHECK_EQ((deform.dimension(2) - out.dimension(2)) % 2, 0)
<< "Difference between deformation field size and output image size "
"in x2 direction must be even.";
using InType = typename InTensor::Scalar;
using OutType = typename OutTensor::Scalar;
// For one-hot-encoding, initialise the output tensor to zero.
if (conversion_style == kIndexedToOneHot) {
out_p->setZero();
}
switch (interpolation_style) {
case kNearest: {
Transform3D(in, deform,
Interpolate3DNearest<InType, OutType, extrapolation_style,
conversion_style>,
padding_constant, out_p);
break;
}
case kLinear: {
Transform3D(in, deform,
Interpolate3DLinear<InType, OutType, extrapolation_style,
conversion_style>,
padding_constant, out_p);
break;
}
case kMixedNearestLinear: {
Transform3D(in, deform,
Interpolate3DMixedNearestLinear<
InType, OutType, extrapolation_style, conversion_style>,
padding_constant, out_p);
break;
}
}
}
};
} // namespace multidim_image_augmentation
} // namespace deepmind
#endif // MULTIDIM_IMAGE_AUGMENTATION_KERNELS_APPLY_DEFORMATION_H_
More people in Britain attend mosques each week than attend Church of England services. It is the first time that Muslims have overtaken Anglicans: according to the figures, 930,000 Muslims attend a place of worship at least once a week, whereas only 916,000 Anglicans do the same. Muslim leaders are now claiming that, given such a rise of Islam in Britain, Muslims should receive a share of the privileged status of the Church of England.

A spokesman for David Hope, the Archbishop of York, second in the church hierarchy, said the archbishop had conceded defeat, but added: "He believes that many more people have an affinity to the church than the number recorded as having attended once on a Sunday." The figures were compiled from government and academic sources.

According to the 2001 census, three-quarters of the British population regards itself as Christian. Although there are no attendance registers kept at mosques, the census included a question about religious adherence. Those figures have been further supported by surveys to give the first assessment of worshipping Muslims.

Although the census recorded 1.59 million Muslims, Ceri Peach, professor of social geography at Oxford University, said it could not record the correct balance because the question was voluntary. Academics believe the figure to be at least 1.8 million.

Tariq Modood, a professor of sociology at Bristol University, has found that 62 per cent of Muslims pray in places of worship. The figure, after excluding young children, most of whom do not worship in mosques, is about 930,000. It is said to underestimate the number of practising Muslims: many, it is said, pray at home.

Immigration from Eastern Europe and conversions are believed to be adding to the number of Muslims. Lord Ahmad Patel, a Labour peer, said 10 extra seats should be allocated to other religions; the Church of England has 26 seats in the House of Lords. However, the recent figures do not include Catholics. The Catholic church has 1.5 million British worshippers.
Perhaps the tensest moment in Saturday's Republican presidential debate came when Donald Trump finally said something so outrageous that the other candidates onstage and even the debate audience closed ranks against him. Here is what Trump did: he accused George W. Bush of launching the Iraq War based on a lie:

"You do whatever you want. You call it whatever you want. I want to tell you. They lied. They said there were weapons of mass destruction, there were none. And they knew there were none. There were no weapons of mass destruction."

Trump's 10-second history of the war articulated it as many Americans, who largely consider that war a mistake, now understand it. And, indeed, Bush did justify the war as a quest for Iraqi weapons of mass destruction, which turned out not to exist. The other Republican candidates, who have had this fight with Trump before, did not defend the war as their party has in the past, but rather offered the party's standard line of the moment, which is that Bush had been innocently misled by "faulty intelligence."

But neither version of history is really correct. The US primarily invaded Iraq not because of lies or because of bad intelligence, though both featured. In fact, it invaded because of an ideology. A movement of high-minded ideologues had, throughout the 1990s, become obsessed with deposing Saddam Hussein. When they assumed positions of power under Bush in 2001, they did not seek to trick America into that war, but rather tricked themselves. In 9/11, and in fragments of intelligence that more objective minds would have rejected, they could see only validation for their abstract and untested theories about the world — theories whose inevitable and obvious conclusion was an American invasion of Iraq.

This is perhaps not as satisfying as the "Bush lied, people died" bumper-sticker history that has since taken hold on much of the left and elements of the Tea Party right. Nor is it as convenient as the Republican establishment's polite fiction that Bush was misled by "faulty intelligence." If the problem were merely that Bush lied, then the solution would be straightforward: check the administration's facts. But how do you fact-check an ideology, particularly when that ideology is partially concealed from public view? How do you guard against that ideology, which still dominates much of the GOP, and some of whose ideas are shared by more hawkish Democrats, leading us astray again?

The moment at Saturday's debate should highlight the degree to which many Americans, from voters right up to presidential candidates, still misunderstand — and have failed to learn from — the story of how America came to expend 4,500 of its citizens' lives in a war that would kill well over 100,000 Iraqis, destroy an entire nation, and help send the Middle East spiraling into chaos.

Why did the United States invade Iraq?

To understand the American decision to invade Iraq, and to learn the lessons of that mistake, one must begin not with George W. Bush's claims of Iraqi WMDs or with the 9/11 attacks, but rather with a series of initially obscure ideological debates on elements of the American right. Those debates, which played out throughout the 1990s, had their roots in disagreements within the Republican Party over American power — and in the evolution of a right-leaning but surprisingly heterodox intellectual movement known as neoconservatism. Neoconservatism, which had been around for decades, mixed humanitarian impulses with an almost messianic faith in the transformative virtue of American military force, as well as a deep fear of an outside world seen as threatening and morally compromised.

This ideology stated that authoritarian states were inherently destabilizing and dangerous; that it was both a moral good and a strategic necessity for America to replace those dictatorships with democracy — and to dominate the world as the unquestioned moral and military leader. Neoconservatism's proponents, for strategic as well as political reasons, would develop an obsession with Saddam Hussein's Iraq. That obsession would, by the end of the decade, congeal into a policy, explicitly stated: regime change.

Their case was always grandly ideological, rooted in highly abstract and untested theories about the nature of the world and America's rightful place in it. Their beliefs were so deeply held that when 9/11 shook the foundations of American foreign policy, they were able to see only validation of their worldview, including their belief in the urgent need to bring democracy to Iraq. It was this ideological conviction, more than any piece of intelligence or lie told about it, that primarily led America into Iraq. Weapons of mass destruction were the stated justification, but they were never the real reason, nor was bad intelligence. The lesson of the Iraq mistake is not the dangers of lying or of anything as narrow as faulty intelligence, but rather of sweeping ideologies and ambitions that can take on a momentum all their own.

That particular ideology, neoconservatism, remains a major force in the Republican Party, and a number of its tenets are held by some Democrats as well. A mandate for war, and a faith in the power of American military force, still animate that ideology, particularly toward the Middle East. It is remarkable and alarming that more than a decade and thousands of lives later, neither Republicans nor Americans more broadly have fully confronted how that ideology developed to lead us into a catastrophic war — and the dangers that it, or any other blindly fervent ideology on the right or the left, could still pose.
The radical ideas that led to the neoconservative obsession with Iraq The story of neoconservatism's evolution in the 1990s begins and ends with Iraq, but at its start it was a disagreement among Republicans. In late 1990, Saddam Hussein's Iraq invaded the oil-rich neighboring kingdom of Kuwait, and a few months later President George H.W. Bush led a brief military intervention to expel Saddam. But where many Americans saw a rousing success, and the start of a decade that they would experience as overwhelmingly peaceful, a dissident faction of Republicans in and outside of the administration experienced it as a formative moment of national disgrace. As the American-led mission wound down, the elder Bush urged Iraqis to rise up. But Bush had stopped the war short of destroying Saddam's Republican Guard or his helicopter units, which were able to quickly crush the short-lived Iraqi uprising. "A decision was not made — a decision happened and you can't say when or how" Some administration officials, particularly then-Under Secretary of Defense Paul Wolfowitz, argued that the US should intervene against Saddam's crackdown — if not to aid in regime change, then at least to stop the slaughter. Wolfowitz "wanted to finish Saddam's regime, and not only did he want to finish it, he believed there was a strong basis for doing so," Richard Perle, another major neoconservative figure, told the journalist George Packer for his book The Assassins' Gate. Wolfowitz, an idealist and humanitarian, had long believed in America's responsibility to promote democracy abroad. In the mid-1980s, as Ronald Reagan's assistant secretary of state for East Asia, Wolfowitz successfully pushed for the US to abandon Filipino dictator Ferdinand Marcos, who, though a reliable anti-communist, was violent and corrupt. 
For Wolfowitz and other neoconservatives in the elder Bush administration, the 1991 Gulf War embodied everything that was morally wrong — and indeed dangerous — with America's practice of tolerating dictators. Throughout the 1990s, Saddam Hussein only became more defiant and disobedient, ignoring United Nations mandates on weapons inspections and issuing increasingly anti-American rhetoric. While many Middle East analysts suspected Saddam's actions were primarily designed to help him save face at home after his humiliating 1991 defeat against the Americans, neoconservatives saw not just American humiliation but alarming evidence of American decline. This played into a growing school of thought among the dissident Republicans, which went far beyond Iraq. It said that America had a special responsibility to spread democracy for the betterment of humanity, that Republicans had forgotten the world-changing idealism of Ronald Reagan, and that the end of the Cold War was not an excuse for America to retreat from its military adventurism but rather the moment when it was needed most. A historian and scholar named Robert Kagan helped lead this charge. He argued that America's unilateral assertion of power — the mere fact of American military action — was not just strategically but morally necessary. It would spread democracy and thus human rights, but also deter rogue states and thus promote peace. In 1996, Kagan co-authored, along with Weekly Standard editor Bill Kristol, a seminal essay in Foreign Affairs calling on America to bring about an era of "global benevolent hegemony." They predicted that the world would welcome American military dominance as a force for stability and for the promotion of values such as democracy and human rights. In this view, nearly any expression of American military dominance was an act of moral good, whereas the absence of US dominance would invite chaos and, ultimately, threats against the US. 
The neoconservatives' attention would inevitably return, over and over, to Iraq and to the anti-American dictator who had wrongly escaped justice. Iraq was a perfect example of their criticisms of Democrats and Republicans alike, its defiance a seemingly undeniable argument for their worldview. Building the case for war In 1997, the year after their Foreign Affairs essay, Kagan and Kristol helped found a group called the Project for a New American Century, meant to instill these foreign policy ambitions in a Republican Party that had tilted away from Reagan-style idealism. PNAC included in its members Wolfowitz and Perle, as well as other senior Reagan administration officials and neoconservatives such as Elliott Abrams, James Woolsey, and Donald Rumsfeld. From the start, it made Iraq its central issue. In January 1998, PNAC published an open letter to the Clinton administration warning that "we may soon face a threat in the Middle East more serious than any we have known since the end of the Cold War." It urged a new strategy that "should aim, above all, at the removal of Saddam Hussein’s regime from power." "Fuck Saddam. We're taking him out." Partly this was specific to Iraq. The world was generally pliant to American will in the 1990s, but the defiantly anti-American Iraq stood out as a glaring exception; neoconservatives simply had few other examples to justify their view of a dangerous world that had to be subjugated by American power. Perhaps just as importantly, Iraq was seen in Washington as a policy failure for Bill Clinton — tempting many Republicans, whether they were particularly invested in neoconservatism or not, to take hard-line positions from which to attack him. But more than that, this was about using Iraq as a proving ground for the neoconservatives' larger and more ideological mission. "They saw Iraq as the test case for their ideals about American power and world leadership," Packer writes. 
"Iraq represented the worst failure of the nineties and the first opportunity of the new American century." As it happened, PNAC and its allies had an unprecedented opening to harden their radical proposal into mainstream Washington consensus. In 1998 came the Monica Lewinsky scandal, in which congressional Republicans, sensing Clinton's political weakness, sought opportunities to both embarrass him on other fronts and win concessions he might have otherwise resisted. Iraq gave them both: That October, seizing on PNAC's call for regime change, congressional Republicans passed the Iraq Liberation Act, which stated that regime change was US policy. Clinton caved to the pressure, signing the Iraq Liberation Act and thus announcing to Saddam Hussein, and to the world, that America was bent on his removal. Saddam, in retaliation, expelled UN weapons inspectors that same day. These two acts would prove crucial in laying the groundwork for the US invasion five years later. In Washington, regime change had suddenly and with little thought become a comfortably bipartisan policy position. And the George W. Bush administration would later argue that Saddam had expelled the inspectors not as political retaliation, but rather to restart his 1980s chemical and biological weapons programs. In the final year of Clinton's presidency, Kristol and Kagan co-edited a book of essays titled Present Dangers, meant to argue for a new era of neoconservative Republican foreign policy. It included an essay by Richard Perle that argued the US should not just promote an Iraqi uprising but also provide US ground troops to assist them. Perle also urged installing in Saddam's place an exile group known as the Iraqi National Congress, which was headed by Ahmed Chalabi — the very man the US would try to install three years later. A few months later, Texas Gov. George W. Bush became president. 
Moved by neoconservatism's idealistic faith in democracy and perhaps sympathetic to its fixation on Iraq — Saddam had attempted to assassinate Bush's father — Bush filled several top positions with members of PNAC and other neoconservative adherents, including Rumsfeld as defense secretary and Wolfowitz as deputy secretary of defense. Richard Perle chaired the Pentagon's defense policy advisory board.
What 9/11 really had to do with the Iraq War Despite longstanding conspiracy theory to the contrary, it is not the case that Bush came into office secretly plotting to invade Iraq or that he seized on the 9/11 attacks as cynical justification. While there is a line between the attacks and the invasion of Iraq, that line is not as direct as many Americans might think. The attacks left Bush, a foreign policy neophyte, adrift. He had little experience with the Middle East or the complex social and political forces that had culminated, seemingly out of nowhere, in the deaths of some 3,000 Americans. He grasped for an answer; the neoconservatives in his administration just happened to have one ready. Since long before 9/11, these officials had argued that terrorism like that of al-Qaeda had to be understood as a symptom of the Middle East's real problems as they saw it: an absence of democracy and of American-dominated "benevolent hegemony." This worldview did not necessarily require that Saddam Hussein had been behind the 9/11 attacks or that he had sheltered Osama bin Laden. Nonetheless, the neoconservatives, so steeped in abstract ideological convictions that put Saddam at the center of the Middle East's problems, were unable to resist the temptation to see the 9/11 attacks as validating their grand theories about the world. And those theories inevitably culminated, as they always had, in the need for America to go to war with Iraq. On 9/11 itself, Packer recounts in his book, "Within minutes of fleeing his office at the devastated Pentagon, Wolfowitz told aides that he suspected Iraqi involvement in the attacks." On September 12, 2001, as rescue workers still swarmed the downed Twin Towers, Bush asked his counterterrorism team to investigate Iraqi links. "See if Saddam did this. See if he's linked in any way. ... I want to know any shred," he said, according to then-counterterrorism chief Richard A. Clarke's recollection to Packer. 
On September 15, at a high-level Camp David meeting to discuss the US response to the attacks, Wolfowitz repeatedly raised Saddam Hussein as not just a possible link but the most important target for retaliation. On September 17, according to Packer's account, Bush told his war council, "I believe Iraq was involved." In subsequent months, the Bush administration would gesture at a case for Iraqi involvement in 9/11, but would ultimately settle on a very different argument that Saddam possessed WMD programs that threatened the US. Bush's flexibility in how he justified the war was telling. It was not any particular issue, whether terrorism or WMDs, that prompted the war; rather, it was always about ideological convictions. Those convictions took on a momentum of their own. The administration's neoconservatives argued not just for possible links between Saddam and Osama bin Laden, but that al-Qaeda was an outgrowth of the Middle East's larger problems as they had long identified them. Toppling Saddam would not just solve these root problems — it would transform the Middle East for the better, and begin an era of welcomed American dominance over the region. These arguments relied increasingly on a small circle of Middle East scholars such as Fouad Ajami, whose 1998 book Dream Palace of the Arabs had rooted the region's problems in a self-perpetuating social and political rot. Only a major jolt could end the cycle and awaken the once-proud Arabs. This jolt, Ajami argued, would be best delivered by an American invasion to topple Saddam and "liberate" Iraqis with democracy — thus surely inspiring a regional awakening. By that December, long before the Bush administration would produce any of the so-called smoking guns proving Iraqi WMDs, it had already begun preparing to sell the public on a war with Iraq. David Frum, the Bush-era speechwriter who would later coin the term "axis of evil," described this moment in his memoir, The Right Man: "Here's an assignment. 
Can you sum up in a sentence or two our best case for going after Iraq?" It was late December 2001, and Mike Gerson was parceling out the components of the forthcoming State of the Union speech. His request to me could not have been simpler: I was to provide a justification for war. Frum clarifies that other speechwriters were working on alternate drafts that were to be less "hawkish"; his assignment, he believes, did not indicate that the administration was yet dead set on war. But Frum's anecdote, like so many others from that time, shows the building momentum, within the administration, for war — a momentum, propelled by ideological conviction, that would ultimately overtake reason and critical thinking in the White House. In March 2002, Bush dropped into a meeting between National Security Adviser Condoleezza Rice and three senators to tell them, "Fuck Saddam. We're taking him out." That June, Richard Haass, the State Department director of policy planning, visited Rice's office for their regular meeting. When he raised the State Department's misgivings about the "bureaucratic chatter" of a possible war, Rice cut him off. "Save your breath," she told him. "The president has already made up his mind." "It was an accretion, a tipping point," Haass told Packer, recounting the incident. "A decision was not made — a decision happened and you can't say when or how." How the Bush administration fooled even itself The neoconservative ideological convictions — a preoccupation with Saddam Hussein, a radical ambition to remake the Middle East from within, an almost blind faith in American military power as a force for positive transformation — led them to desire a war with Iraq as the solution to not just terrorism but a litany of problems, and to see validation for that desire even in the obviously flawed intelligence that would be their justification. 
The White House inserted itself directly into an intelligence dissemination and vetting process that is typically handled by the agencies themselves. After 9/11, Bush and Vice President Dick Cheney instituted a new system known as "Top Secret Codeword/Threat Matrix," under which they demanded to personally review raw intelligence. "The mistake was not to have proper analysis of the intelligence before giving to the president," Roger Cressey, who served in Bush's National Security Council, told Jane Mayer for her book The Dark Side. "There was no filter. Most of it was garbage. None of it had been corroborated or screened. But it went directly to the president and his advisers, who are not intelligence experts. That's when mistakes got made." In the months after the attacks, US intelligence agencies came under heavy pressure to investigate the administration's suspicions of links between Saddam Hussein and 9/11, or of ongoing Iraqi WMD programs. It does not appear that the administration encouraged them to lie, but rather that deep-rooted biases led top officials to dismiss the mountains of intelligence that undercut their theories and to favor deeply problematic intelligence that supported it. In 2001, for example, a man named Ibn al-Sheikh al-Libi, whom the US had picked up in Afghanistan and then shipped to Egypt to be tortured, claimed that Saddam had provided al-Qaeda with chemical and biological weapons training. The Defense Intelligence Agency warned that Libi's information could not be trusted. But Bush treated it as credible, and repeated Libi's claim as established fact in his case for war. The US also relied heavily on claims by an Iraqi exile living in Germany named Rafid Ahmed Alwan, code-named "curveball," who claimed to have direct knowledge of secret Iraqi WMD programs. Though both German and UK intelligence said Alwan was unstable and his information unreliable, the US embraced his claims, which provided the basis of much of its case for war. 
Years later, Alwan admitted he had made it all up to help instigate the American invasion of Iraq. But the White House believed him for the simple reason that it badly wanted to. Within months, the momentum for war within the administration had overtaken the normal processes of decision-making — and certainly had overtaken the public case for war. By all appearances, administration officials believed their allegations of Iraqi WMDs were true and that this was indeed sufficient justification. Why else would the US launch a desperate, high-profile search for WMDs after invading — which only ended up drawing more attention to how false those allegations had been? Rather, they had deceived themselves into seeing half-baked intelligence as affirming their desire for war, and then had sold this to the American people as their casus belli, when in fact it was secondary to their more high-minded and ideological mission that would have been too difficult to explain. That, more than overstating intelligence on WMDs, was the really egregious lie. The lie bigger than WMDs: claiming the war was because of WMDs "We know they have weapons of mass destruction. We know they have active programs. There isn't any debate about it," Rumsfeld said in September 2002. "Saddam Hussein still has chemical and biological weapons, and is increasing his capabilities to make more. And he is moving ever closer to developing a nuclear weapon," Bush said the next month, warning that Saddam would "threaten America and the world with horrible poisons, and diseases, and gases, and atomic weapons." Then-National Security Adviser Condoleezza Rice claimed that Saddam was running a clandestine nuclear program that was only "six months from a crude nuclear device." In fact, none of this was true. Iraq had discontinued its chemical and biological weapons programs in the 1980s. A 1998 US-led bombing campaign had destroyed much of the remains. 
But even if Bush's allegations had been true, they would not have accurately described his administration's real reasons for invading Iraq. The neoconservative mission of upending a tyrant and bringing democracy to the Middle East was mentioned only as a secondary benefit, or deployed as a later justification when no WMDs materialized. This was, in part, how the Bush administration backed itself into such shoddy intelligence — shutting down Iraqi WMDs was never really the point, so Bush officials had little reason to fully vet the intelligence suggesting those programs were already gone. At the same time, in keeping their actual reasons for war from the public, the Bush administration lost the opportunity for those reasons to be openly debated, at which point more grounded Middle East or military scholars might have revealed them as dangerously misguided. America needs to finally confront the lessons of Iraq — before we repeat them As Donald Trump's stunt showed, America's public debate over Iraq, now 13 years later, still turns largely on Bush's claims and their truth. But even if Saddam had turned out to possess weapons of mass destruction, if Bush had been right, what would it really change? The war would still have cost some 4,500 American lives and well over 100,000 Iraqi lives. It would still have destabilized Iraq, opened up the country for violent extremism, and contributed directly to the rise of ISIS. And it would still have been launched in pursuit of an ideological mission that turned out to be dangerously misguided. Abstract and radical neoconservative ideas that had developed during the Clinton years, bouncing around a tiny echo chamber of like-minded idealists who had little desire to challenge one another, had suddenly and with no real public debate become the basis of a war that would quickly cost many thousands of lives. 
But those ideas are still very much a part of America's foreign policy discourse, and some day, even as soon as this January, their adherents could return to the White House. Americans have rightly litigated the question of Bush's honesty on WMDs. But we have still not interrogated the deeper force behind the catastrophic war: the radical convictions of a neoconservative ideology that remains central to the Republican Party's foreign policy — particularly among establishment-backed presidential candidates such as Marco Rubio and Jeb Bush. These candidates, in how they discuss hostile nations such as Iran, Russia, and Syria, do not sound so different from the neoconservatives of the 1990s. You hear this in their belief in the power and virtue of unilateral American force, in the need to express hegemonic American dominance over the Middle East, and in the apparently earnest fear that any challenge to American power, no matter how slight, is just the start of a potentially global unraveling. You see it in Marco Rubio's highly ideological but analytically groundless belief that dismantling the Iran nuclear deal and adopting a policy of maximal belligerence toward Tehran would advance freedom and peace in the Middle East. This is not to say that neoconservative candidates are secretly plotting, or would necessarily execute, another war in the Middle East — although it is concerning to see them so focused on Iran as an implacable and grave threat that can only be addressed by subjugating the regime or bringing about its downfall. It is concerning to see Rubio advocating forceful regime change in Syria and hiring a foreign policy adviser who advocates it in Iran, all along similar high-minded ideological lines as the neoconservative obsession with Iraq 20 years ago. It is worrying to hear hawks like Sen. 
Tom Cotton, embraced by neoconservative luminaries, explicitly advocate that the US abandon the nuclear deal to instead force regime change or even launch military strikes. To be clear, the ideas of neoconservatism are not all exclusive to the Republican Party; Democrats such as Hillary Clinton and Samantha Power have pursued some, though far from all, similarly high-minded policies, particularly a belief in humanitarian interventions. (Indeed, Clinton voted for the Iraq War.) And many Republicans do oppose neoconservatism, instead advocating a return to the hard-nosed realism of George H.W. Bush. The lesson is not that neoconservatism should be a disqualification from the presidency. Indeed, the ideology has made important and undervalued contributions to American foreign policy, such as its focus on human rights and its warning that supporting friendly dictatorships is both morally wrong and, in the long term, strategically unviable. But these ideas, like neoconservatives' more dangerous faith in the transformative power of American military force, deserve to be evaluated and then either embraced or rejected on their merits. In the Iraq War, we had the purest possible test of many of this ideology's core beliefs about the inherent virtue of American military power, about the supposedly transformative power of regime change, and about the supposed demand for American hegemony. These ideas all proved not just false but disastrously so. We have not taken those lessons into account, preferring instead to litigate the narrower and politically easier question of Bush's personal honesty. The lesson, which extends to both parties, is that a potential president's ideological views are just as important to examine and vet as are his or her policy proposals; that the line between obscure policy journals and American military action can be much shorter than we'd like to think. 
That is true of any ideology, but it is especially true of neoconservatism, which we have still not chosen to vet, remarkably, even after we invested billions of dollars and thousands of lives in testing it directly in Iraq, to results apparently so damning we have still not fully absorbed them.
Q:
Javascript objects vs setInterval
While I was looking for a good example of an analog clock implemented only using Javascript, I found this interesting clock written by Emanuele Feronato, using a very robust Javascript library called Raphaël.
I was playing with it for a while and then I wanted to have multiple clocks with different times on those, maybe according to different timezones but that's not the case here.
So what I did was create separate clock objects and set different times. It worked, but the problem comes when the script hits the setInterval() function: the clocks' hands are not rotating.
I'm not so good with Javascript's built-in functions and I couldn't find a solution to this issue, so anyway I'm posting my code here.
function createClocks(){
/* for the time being assume these Date objects are unique */
var diffDate_01 = new Date();
var diffDate_02 = new Date();
/* create separate clock Objects */
var clok_01 = new clock(diffDate_01);
var clok_02 = new clock(diffDate_02);
/* calling update_clock function wrapped within setInterval to make the clock's hand rotatable */
setInterval("clok_01.update_clock(diffDate_01)", 1000);
setInterval("clok_02.update_clock(diffDate_02)", 1000);
}
function clock(diffDate){
/* this is the place where the base implementation of the clock stands; I removed setInterval("update_clock()", 1000) because I want to call it from outside, separately per object */
function update_clock(diffDate){
var now = diffDate;
var hours = now.getHours();
var minutes = now.getMinutes();
var seconds = now.getSeconds();
hour_hand.rotate(30*hours+(minutes/2.5), 100, 100);
minute_hand.rotate(6*minutes, 100, 100);
second_hand.rotate(6*seconds, 100, 100);
}
}
For the HTML part I'm creating dynamic clock <div> tags and appending all of those to a <div> tag that exists in the body of the HTML document.
Thanks.
A:
Please, please don't use strings with setInterval() ever. That causes scope problems and potentially other problems.
When you use a string, that string is evaluated with eval() at the global scope. As such it has NO access to any of your local variables. There were also a number of other problems, including the fact that you didn't make update_clock a method of the clock object.
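To illustrate the scope point, here is a minimal, self-contained sketch (the makeCounter helper is hypothetical, not part of the clock code): a function passed to setInterval is a closure that keeps access to its own local variables, which a string evaluated at global scope never has.

```javascript
// Hypothetical helper: each call returns a closure with its own private
// state, just as each clock should own its own update function.
function makeCounter(step) {
  var value = 0;              // local state, invisible to a global eval()
  return function tick() {    // the closure keeps `step` and `value` alive
    value += step;
    return value;
  };
}

var tickA = makeCounter(1);
var tickB = makeCounter(5);

// In a real page these closures would be handed to setInterval directly:
//   setInterval(tickA, 1000);     // works: function reference
//   setInterval("tickA()", 1000); // avoid: the string is eval'd globally
// Calling them directly here shows each keeps independent state:
console.log(tickA()); // 1
console.log(tickA()); // 2
console.log(tickB()); // 5
```

The same pattern is what the rewrite below relies on: the interval callback is a plain function that closes over the clock instances it needs.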
Here's a working, rewritten and cleaned up version of the code that is much more object oriented and supports several new methods: http://jsfiddle.net/jfriend00/wKVC7/
And, here's the code:
function clock(id, initialTime) {
// we store each clock in global map clock.clocks
// create global clock map if it doesn't already exist
clock.clocks = clock.clocks || {};
// store this newly created clock in the map
clock.clocks[id] = this;
this.id = id;
// canvas for this clock (remembered as an instance variable)
this.canvas = Raphael(id, 200, 200);
// draw clock face
var clockFace = this.canvas.circle(100,100,95);
clockFace.attr({"fill":"#f5f5f5","stroke":"#444444","stroke-width":"5"})
// draw clock tick marks
var start_x, start_y, end_x, end_y;
for(var i=0;i<12;i++){
start_x = 100+Math.round(80*Math.cos(30*i*Math.PI/180));
start_y = 100+Math.round(80*Math.sin(30*i*Math.PI/180));
end_x = 100+Math.round(90*Math.cos(30*i*Math.PI/180));
end_y = 100+Math.round(90*Math.sin(30*i*Math.PI/180));
this.canvas.path("M"+start_x+" "+start_y+"L"+end_x+" "+end_y);
}
// draw the three hands (hour, minutes, seconds)
// save each path as an instance variable
this.hour_hand = this.canvas.path("M100 100L100 50");
this.hour_hand.attr({stroke: "#444444", "stroke-width": 6});
this.minute_hand = this.canvas.path("M100 100L100 40");
this.minute_hand.attr({stroke: "#444444", "stroke-width": 4});
this.second_hand = this.canvas.path("M100 110L100 25");
this.second_hand.attr({stroke: "#444444", "stroke-width": 2});
// draw center pin
var pin = this.canvas.circle(100, 100, 5);
pin.attr("fill", "#000000");
// update with the actual time
this.drawTime(initialTime);
}
clock.prototype = {
// start the clock running automatically
start: function() {
// we have just one global timer running
// check to see if it is going - if not start it
if (!clock.timer) {
clock.timer = setInterval(function() {
var clocks = clock.clocks; // get global map
for (var i in clocks) {
if (clocks.hasOwnProperty(i)) {
if (clocks[i].running) {
clocks[i].update();
}
}
}
}, 1000);
}
// if we weren't already running, start this clock
if (!this.running) {
var now = new Date();
this.timeOffset = now - this.currentTime;
this.update();
this.running = true;
}
return(this);
},
// stop the clock
stop: function() {
this.running = false;
},
destroy: function() {
this.stop();
delete clock.clocks[this.id];
},
// update the clock according to time of day
update: function() {
var now = new Date();
this.drawTime(new Date(now - this.timeOffset));
},
// update the clock - if no time is passed in, then it will use the current time
drawTime: function(customDate) {
var now = customDate || new Date();
var hours = now.getHours();
var minutes = now.getMinutes();
var seconds = now.getSeconds();
    this.hour_hand.rotate(30*(hours%12)+(minutes/2), 100, 100); // 30 degrees per hour plus 0.5 per minute
this.minute_hand.rotate(6*minutes, 100, 100);
this.second_hand.rotate(6*seconds, 100, 100);
this.currentTime = now;
}
};
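The rotation calls in drawTime encode the standard analog-clock hand angles, and that arithmetic can be checked in isolation. A minimal sketch in plain Node (no Raphael needed; the function names here are my own):

```javascript
// Analog-clock hand angles, in degrees clockwise from the 12 o'clock mark.
function hourAngle(hours, minutes) {
  // 30 degrees per hour on a 12-hour dial, plus half a degree per elapsed minute
  return 30 * (hours % 12) + minutes / 2;
}
function minuteAngle(minutes) {
  return 6 * minutes; // 360 degrees / 60 minutes
}
function secondAngle(seconds) {
  return 6 * seconds; // 360 degrees / 60 seconds
}

console.log(hourAngle(15, 30)); // 105 (3:30 puts the hour hand halfway between 3 and 4)
console.log(minuteAngle(30));   // 180
console.log(secondAngle(45));   // 270
```

Feeding such angles to Raphael's rotate(angle, cx, cy) around the clock center (100, 100) is exactly what the code above does each tick.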
| {
"pile_set_name": "StackExchange"
} |
Q:
Intermediate results in a recursive function
I'm trying to write a function which will find all prime numbers from 2 to 100. I can do so by testing if a number is prime by looking at remainder when the number n is divided by all numbers in 2..n-1.
I would however like only to test against prime numbers I have already found. This is how far I got writing my recursive function, but I don't know how to replace [2..t-1] with the prime numbers I have already worked out (the intermediate result of my recursive function I guess). How can I do that?
primes = go [2..100]
where
go l@(t:ts)
| all (\x -> t `rem` x /= 0) [2..t-1] = t:(go ts)
| otherwise = go ts
go [] = []
main = print primes
A:
Here's one possible way:
primes = 2 : go [3..]
where
go (t:ts)
| all (\x -> t `rem` x /= 0) (takeWhile (\x->x*x<=t) primes) = t:(go ts)
| otherwise = go ts
Here we use the already-calculated part of primes up to the square root of t. Note that we need not to specify the upper bound in primes, it will simply produce an infinite list that you can later chop:
print $ take 1000 primes
Note also that we need to bootstrap primes such that the very first prime is not calculated from previous primes, so that takeWhile could work.
A:
You could do that with tail recursion. Where you carry the already calculated primes ps along.
primes = go [2..100] []
where go (t:ts) ps
| all (\x -> (t `rem` x) /= 0) ps = go ts (t:ps)
| otherwise = go ts ps
go [] ps = ps
Note that primes are now reversed: prepending an element (t:ps) is O(1), whereas appending to the end of the list would be linear in its length.
You can also limit the number of divisions:
| all (\x -> (t `rem` x) /= 0) (takeWhile (\x -> x*x <= t) ps) = go ts (t:ps)
Also use Int if speed matters because it is unboxed.
| {
"pile_set_name": "StackExchange"
} |
I'll have you know, I saw my ex on r/gonewild with another man And I only cried for a few hours
| {
"pile_set_name": "OpenWebText2"
} |
SLEEPING ON YOUR SIDE MIGHT BE GOOD FOR YOUR BRAIN
The position in which you sleep might have an impact on more than just your posture; it could also affect your mental health. New research suggests that it might be related to how the brain removes waste chemicals, and that some positions might be better for this than others.
The researchers found that sleeping on one’s side, compared to sleeping on one’s back or stomach, appeared to allow the body to more efficiently clear waste chemicals from the brain. “It is interesting that the lateral [side] sleep position is already the most popular in humans and most animals – even in the wild – and it appears that we have adapted the lateral sleep position to most efficiently clear our brain of the metabolic waste products that build up while we are awake,” explained Maiken Nedergaard from the University of Rochester in New York, who was involved with the study.
The scientists used “dynamic contrast” magnetic resonance imaging (MRI), which uses a special chemical to improve visibility of internal structures, to image what’s called the brain’s “glymphatic pathway.” This is the system whereby cerebrospinal fluid (CSF), the clear liquid found in the brain, filters through the brain and exchanges with interstitial fluid (ISF), the liquid found around all other cells in the body. This allows chemicals and waste that build up in the brain to be removed, such as amyloid beta and tau proteins, which are associated with Alzheimer’s and Parkinson’s.
It’s been known that this process happens more when we’re sleeping, with clinical studies showing that sleep drives the removal of amyloid beta from the brain, but this study shows that apparently the position in which we sleep might also influence this clearance. The team anesthetized rats, and then tracked the efficiency of the glymphatic pathway when the rodents were sleeping in one of three positions, either lateral (on their side), prone (on their bellies), or supine (on their backs).
It’s interesting in that many mammals naturally tend to sleep on their sides, from dogs to cats and even elephants, although the authors do note that a wild animal’s sleeping behavior is also probably influenced by survival, and thus might be different when compared to humans. As this study was done on rats, it’s not yet known whether the same conclusions can be drawn for humans, but considering it’s been shown that both the rodents and people tend to favor sleeping on their side, it’s not such a wild idea.
“Many types of dementia are linked to sleep disturbances, including difficulties in falling asleep,” concludes Nedergaard. “It is increasingly acknowledged that these sleep disturbances may accelerate memory loss in Alzheimer’s disease. Our finding brings new insight into this topic by showing it is also important what position you sleep in.” | {
"pile_set_name": "Pile-CC"
} |
Matt Damon Remembers Robin Williams in 'Good Will Hunting' on Second Anniversary of His Death
Robin Williams had a celebrated career, which was sadly cut short when he took his own life on August 11, 2014. But aside from his own personal success, he helped launch the careers of other stars he worked with, too. One of them, Williams's "Good Will Hunting" costar Matt Damon, recalled the late actor's greatness in that flick in a new interview commemorating the second anniversary of his passing.
Damon was still a fresh-faced presence in Hollywood when he and best friend Ben Affleck penned the screenplay for "Hunting," which wound up winning them the Oscar for Best Original Screenplay, and Williams the Oscar for Best Supporting Actor. But before all of that, they were just a group of actors on set, and as Damon recalled in a new interview with JOE.ie, he was absolutely in awe of Williams's performance.
Damon told JOE.ie that when they were shooting the famous scene on the bench in Boston Common, Williams "was just crushing it on the first take."
"I just went, 'This is gonna be really good,'" Damon recalled of the electrifying moment.
The actor added that he recently visited that bench again with his family -- though his children are too young to have seen the movie yet -- and remembered his former costar.
"It was nice to go back and think about him back there," Damon told JOE.ie.
That bench had become a makeshift memorial to the actor after his passing, and we imagine it will remain an important spot for his fans to reflect for quite some time. RIP, Robin. | {
"pile_set_name": "Pile-CC"
} |
Quantitation of myocardial infarct size from thallium-201 images: validation of a new approach in an experimental model.
A new computer-based method has been developed to quantitate myocardial infarct size from the size of the regional thallium-201 deficit. The operator outlines the left ventricular myocardial activity with an ellipse. The program then plots the background-corrected activities of the highest mean value in a 3 pixel myocardial band perpendicular to and within the ellipse. The approach uses a new interpolative background correction. To determine the accuracy of this approach in assessing regional thallium deficit size, acute myocardial infarction was produced in six dogs by 24 hour occlusion of the proximal left anterior descending coronary artery. Infarct size was assessed from planar thallium images of the dog heart in three views, each with the chest opened and closed and with the heart excised and placed in a cradle. Before removal of the heart, triphenyltetrazolium chloride was infused to delineate normal from infarct tissue. Transverse slices of left ventricle were made and thallium images of the slices acquired. Infarct size delineated by triphenyltetrazolium chloride staining was expressed as a percent of the total left ventricular slice surface area (planimetric infarct size). Infarct size from whole heart and left ventricular slice thallium images was expressed as a percent of the total length of the left ventricular perimeter (perimetric infarct size). This was determined from points below a certain percent of normalized peak thallium activity in the computer-generated thallium activity curve.(ABSTRACT TRUNCATED AT 250 WORDS) | {
"pile_set_name": "PubMed Abstracts"
} |
Tag: javier baez
What have we learned about Noah Syndergaard (1-0) after Sunday night’s NLCS Game 2? His changeup has really improved over the course of the year for one. At the beginning of the year, Syndergaard was all fastballs and balls that broke for dirt. In June, the pitch was considered in his repertoire in name only … | {
"pile_set_name": "Pile-CC"
} |
About Lextre
Our Story
Lextre is one of the most exciting and fastest growing international games development studios. Its first release, drag racing game Perfect Shift, launched at the end of 2014, and in just two months it has shot to the top of the charts amassing over 2 million downloads around the world.
Lextre specialises in producing visually stunning and engaging action and adventure games that both experienced gamers and non-gamers love to play. Lextre’s young and vibrant team put ‘fun’ at the forefront of everything that they do, as is reflected in all of Lextre’s games. The team have over 10 years of combined industry experience, and have grown up living and breathing mobile games.
The Lextre team are committed to delivering excellence. They saw the shift from consoles to mobiles and tablets, and in the process were frustrated to see that too many studios focused their efforts on producing hero games for hero devices, meaning that people who did not have the best handsets often suffered from a worse gaming experience. That’s why Lextre was founded with one mission in mind: to deliver the most superior gaming experience across any device.
Lextre’s team are as passionate about pursuing action and adventure offline as they are in the games that they develop. Lextre strives to:
Put quality at the forefront of everything that they do. From the games that they develop, to the way that they conduct business with partners
Always be visionary. Lextre strives to always lead the mobile gaming industry by taking risks, experimenting and exploring. Above all, Lextre strives to be highly creative in everything that they do
Be real, both in terms of company’s values and in the games developed. All of Lextre’s games have extremely realistic graphics with a strong focus on character-driven stories. But Lextre as a company also prides itself on its honesty and transparency to remaining true to its core values
Deliver happiness and enjoyment to the people playing Lextre’s games. Lextre’s games are always fun
Grow and retain the best team – because Lextre’s team is at the forefront of everything that it does
Specialising in the development, graphic design, marketing and management of games, the Lextre team also pride themselves on their professional approach, which ensures that their games are always of the highest quality for users to enjoy.
Lextre is a small studio with big ambitions. In just two years Lextre has grown rapidly and now boasts a team of over 35 people, with development, core publishing and marketing services partners' offices in London, Nicosia and Moscow.
About Perfect Shift
Lextre released Perfect Shift, its first game to market, in late 2014. Perfect Shift is a slick and authentic drag racing game with professional soundtrack and 3D graphics. Easy to pick up but hard to put down, the high speed racing game has multiple levels and an immersive storyline with plenty of twists. The team behind Perfect Shift are all petrol heads and love the thrill of the race. The cult hit has already seen over 2 million downloads, resulting in over 10 million miles of track raced, and over 150,000 customisations to its cars.
Perfect Shift is rated one of the top free games on the Windows Phone Apps + Games Store. As well as seeing millions of downloads in a short period since launch, the game has also received media acclaim and positive reviews in major outlets including:
Auto Express: “We found it hard to put down”
Web User: “Simple yet addictive”
Win Beta: “A visually stunning drag racing game developed by Lextre”
Examiner.com: “The high octane drag racing game is now going on with its launch on the Apple App Store and Google Play”
iDownload Blog: “It’s time to put the pedal to the metal with a new racing game featuring highly stylized realistic 3D graphics” | {
"pile_set_name": "Pile-CC"
} |
Q:
How to end() a file stream
I am having a weird issue with a piece of sample code that I got here, the central part being this:
server.on('request', function(request, response) {
var file = fs.createWriteStream('copy.csv');
var fileSize = request.headers['content-length'];
var uploadedSize = 0;
request.on('data', function (chunk) {
uploadedSize += chunk.length;
      var uploadProgress = (uploadedSize/fileSize) * 100;
response.write(Math.round(uploadProgress) + "%" + " uploaded\n" );
var bufferStore = file.write(chunk);
console.log(bufferStore);
console.log(chunk);
if(!bufferStore)
{
request.pause();
}
});
file.on('drain', function() {
request.resume();
});
request.on('end', function() {
response.write('Upload done!');
response.end();
});
});
The problem is, the file copy.csv does not contain anything after the process is finished.
I tried to add file.end(); in the request.on('end'-callback, but it did not do the trick. However, if I add faulty code in said callback that causes an exception, the file is being written just fine (although this ofc can't be the final solution).
A:
To notify a readable stream that there are no more chunks to be read, you can simply call your_stream.push(null). You can read more about streams and push(null) in substack's excellent stream guide.
| {
"pile_set_name": "StackExchange"
} |
Q:
Using groupBy on a List of Tuples in Scala
I tried to group a list of tuples in Scala.
The input:
val a = List((1,"a"), (2,"b"), (3,"c"), (1,"A"), (2,"B"))
I applied:
a.groupBy(e => e._1)
The output I get is:
Map[Int,List[(Int, String)]] = Map(2 -> List((2,b), (2,B)), 1 -> List((1,a), (1,A)), 3 -> List((3,c)))
This is slightly different with what I expect:
Map[Int,List[(Int, String)]] = Map(2 -> List(b, B), 1 -> List(a, A)), 3 -> List(c))
What can I do to get the expected output?
A:
You probably meant something like this:
a.groupBy(_._1).mapValues(_.map(_._2))
or:
a.groupBy(_._1).mapValues(_.unzip._2)
Result:
Map(2 -> List(b, B), 1 -> List(a, A), 3 -> List(c))
| {
"pile_set_name": "StackExchange"
} |
Special Early Bird price $444. $100 Deposit, or pay in full, by 26 July. After 26 July 2013 the Full Price is $495. For the Deposit please use the Pay Now button.
Essence of Angels
INSTRUCTIONS: Please submit your Deposit to secure your Early Bird rate of $444 by using the 'Pay Now' button above. Once your payment has been processed through Paypal you will be returned to this page to complete your booking using the Registration Form below. Upon receipt of your Deposit and submission you will receive an email containing your Workshop Registration Letter and Certification Form. The Essence of Angels® Workshop represents a very special time for me to share all I have learnt from being an Essence of Angels student through to Practitioner training and now as a Certified Essence of Angels® Practitioner and Teacher, and I feel very honoured to share this divine workshop with you all. Participants receive the following gifts at each Essence of Angels Workshop:
♠ Comprehensive bound seminar workbook (over 170 pages)
♠ Delicious morning and afternoon teas
♠ Free use of all Essence of Angels® vibrational remedies, sprays and products throughout the workshop
♠ All participants receive a very special Angel gift to support their journey and healing in the days and weeks after the workshop
** There are local shops 2 minutes away to purchase your lunch/beverages. Free street parking available all day in John St or Church Avenue | {
"pile_set_name": "Pile-CC"
} |
Community members held a prayer vigil Sunday night to show their support for a missing Colorado Springs woman.
No one has heard from Jepsy Amaga Kallungi since March 20. Kallungi's mother lives thousands of miles away in Hong Kong and told 11 News she usually speaks to her daughter three times a week over Facebook messenger. To go more than two months is unheard of.
On April 4, the Colorado Springs Police Department announced the homicide and assault units were investigating the case.
People attending the vigil in Colorado Springs Sunday night expressed their support for Kallungi's mother and their frustration that detectives have released so little from the investigation.
"We want to shout out that hey, we really care for this mother and hopefully you will answer us. But I know that it is a case that they keep it private and I understand that," Calexan Tschappett said.
"We are just praying and hopeful that soon enough we will hear another update if it is not the end of it. We still want an update, to help with people speculating, questioning ... it leads to more misunderstanding."
As 11 News has previously reported, Kallungi reportedly moved to Colorado to marry a man she met on an internet dating site. Her mother says the husband told her Kallungi left with a friend to go to the Philippines or Chicago with no phone or ID.
Anyone with information that could help police is asked to call CSPD at 719-444-7000. | {
"pile_set_name": "OpenWebText2"
} |
excel master series-Chi square-variance-test-in-excel
How To Find Out If Your Customers Are Becoming More or Less Predictable In Their Spending With the Chi-Square Variance Test in Excel
http://blog.excelmasterseries.com/
What Is the Chi-Square Variance Test?
● The Chi-Square Variance Test Determines Whether the Variance of a Population Has Changed
● Marketers Use the Chi-Square Variance Test To Find Out If the Range of Customer Spending Has Changed, Indicating That Something Has Affected the Customers’ Buying Habits
The 5 Steps of the Chi-Square Variance Test in Excel
1) Determine the Required Level of Certainty and Alpha
2) Take a Large (>30), Representative, Random Sample and Measure Its Standard Deviation
3) Calculate the Chi-Square Statistic
4) Calculate the Curve Area Outside the Chi-Square Statistic
5) Analyze Using the Chi-Square Statistic Rule
Curve Area Outside Chi-Square Statistic in Outer Left Tail in Red Is Larger Than the 5% Yellow Alpha Region
5) Analyze With the Chi-Square Statistic Rule
The Population Standard Deviation (σ) Has Moved in the Direction of the Sample Standard Deviation (s) If the Curve Area Outside the Chi-Square Statistic (the Red Area) Is Smaller Than the Alpha Region (the Yellow Area). In Other Words, If the Red Region Fits Inside the Yellow Region, the Population Standard Deviation (σ) Has Moved; Otherwise σ Hasn’t Moved.
Population Standard Deviation Has Shifted to the Right (Increased) Because the Curve Area Outside the Chi-Square Statistic in Red in the Outer Right Tail Is Smaller Than the Yellow 5% Alpha Region
Population Standard Deviation Has NOT Shifted to the Left (Decreased) Because the Curve Area Outside the Chi-Square Statistic in the Outer Left Tail in Red Is Larger Than the 5% Yellow Alpha Region
The Test We Ran
An Internet marketing manager wanted to determine if the number of items purchased on individual orders had become more spread out (order size standard deviation had increased). She took a representative, random sample of 50 recent orders and measured the sample’s standard deviation in the number of items purchased per order to be 1.9 (n = 50 and s = 1.9). The standard deviation in number of items purchased per order had remained at 1.6 for a long time (σ = 1.6). The Internet marketing manager wanted to determine within 95% certainty whether the population order size standard deviation had increased (Alpha = 1 – 95% = 0.05).
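The arithmetic behind that example is easy to check outside Excel. A minimal sketch in JavaScript: the test statistic is (n - 1)s²/σ², and the 66.34 critical value below is an assumed lookup from a standard chi-square table for 49 degrees of freedom at the 0.05 upper tail:

```javascript
// One-tailed chi-square variance test for the example above:
// H0: sigma = 1.6 vs H1: sigma > 1.6, with n = 50, s = 1.9, alpha = 0.05.
function chiSquareStatistic(n, sampleSd, popSd) {
  // (n - 1) * s^2 / sigma^2
  return (n - 1) * sampleSd * sampleSd / (popSd * popSd);
}

var stat = chiSquareStatistic(50, 1.9, 1.6);

// Upper-tail critical value for 49 degrees of freedom at alpha = 0.05,
// assumed from a standard chi-square table rather than computed here.
var critical = 66.34;

console.log(stat.toFixed(2)); // "69.10"
console.log(stat > critical); // true: reject H0
```

Because 69.10 exceeds the critical value, the sample supports the conclusion above: the population standard deviation of order sizes appears to have increased.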
"pile_set_name": "Pile-CC"
} |
BUSINESS OPPORTUNITIES FOR NORWEGIAN COMPANIES IN MALAYSIA
The Renewables Sector
As an initiative to reduce CO2 emissions, the Malaysian government has decided to increase the use of renewable energy by building more large-scale grid-connected solar PV power plants. 450 MW of large-scale solar PV plants are planned for development in Malaysia in 2017-2018.
To hasten the implementation of large-scale solar PV plants, the Energy Commission of Malaysia is currently conducting a Request for Proposals in 2017 for another 460 MW to be commissioned in 2019/2020, of which 360 MW is in Peninsular Malaysia and 100 MW in Sabah and Labuan. The closing date of the bid is 1 August 2017.
"pile_set_name": "Pile-CC"
} |
Monthly Archives: March 2012
“You always tell the same story, or the same kind of story,” the student complained to the old master. “Your heroes are all the same.”
“History repeats itself,” the old man quoted: “Historians repeat each other. What kind of hero do you want?”
“Anything different. Not another brave, strong young man of humble origins and mysterious birth, anticipated by prophecy who overthrows an empire and brings peace to the land.”
The historian squinted at his student’s face. “What part of that tires you?”
“I’m tired of being preached to,” the student said, looking around the alewife’s house. Besides the two of them and the alewife, the only other occupant was a goose who had wandered through the open door, perhaps seeking shade from the hot September day. “I’m not a child to be bribed or threatened to good behavior with a fairy tale. –How about a story where the hero wins by being evil? Do you know a story like that?”
A kind of smile had crept onto the historian’s face. “You would be amazed the stories I know. History is the study of truth. It is truth in application. The human heart in the loner and the group, in the leader and in his followers. What people do and why. There is no situation you can imagine so screwed up I cannot give you a real story for it, and the real story will beat a fiction every time.
“You want to hear about a bad, weak man who goes from bad to good fortune. A hero thoroughly without virtue, who wins even so. I know such a man. You you will hear of him shortly. But while I call the details to mind, tell me — why do you want to hear such a story?”
“I don’t know,” the student replied, and thought a moment. “I suppose because it will be reassuring.”
“Reassuring!” the historian exclaimed. “How is that reassuring?”
“Well, if he’s able to keep what he doesn’t deserve then maybe I can keep what I have, too. Who can live up to a heroic standard? — Oh, are you just going to tell me a story where the Emperor wins? I know history has a lot of those.”
As the historian made to reply, a young man burst into the room, scaring the goose which went honking and flapping into the midst of the teacher and student. “Where’s your husband?” the intruder demanded of the alewife. “I need him right now.”
The alewife disappeared into the inner space of the small house. The man looked around impatiently.
The man did not answer immediately, but the question seemed to move him to a kind of rage. Just then the little town’s superintendant bustled in.
“Nothing I can do for you, Justin,” the superintendant said. “An investigation’s an investigation. Your father could have been killed. You will probably be named.”
“But surely I can pay my father’s debts with my father’s estate,” Justin replied. “That shark is demanding repayment on time, and control of the estate is locked away from me. He wants to steal the farm!”
The superintendent took on the pained expression of a man unable to make everyone happy. “You can sign control over to the clerk–”
“–and then I’ve lost,” Justin said, “because Terry the shark controls the clerk and they can keep the estate tied up for years while he sucks it dry. This is not justice!”
“Law doesn’t have anything to do with justice, Justin,” the superintendent said quietly. “It’s just a means of resolving conflict.”
“If that were true, no one would tolerate the law,” Justin snapped. “Do not screw me, Bobbie.”
“I’ll do what I can when I’m called on,” the village superintendent Bobbie told him. “But remember – I owe Terry too.”
Justin stepped outside the alewife’s house into the hot September day. He stood about five and a half feet tall, a young man who was husky without being heavy. He had brown hair and brown eyes. His eyes tended to move slowly, but constantly around, as if he were in the habit of looking for something, except when he spoke to someone. When he spoke to someone he tended to look at their face constantly, without the polite breaking off that most people learn to do, and this caused people, until they got to know him, to think he had a fiercely combative nature.
Justin was still standing outside the alewife’s house a moment later when his good friend Melinda approached hesitantly. “How’d it go?” she asked.
Melinda was a village girl who wore a long rough cotton dress. Her hair was blonde and back in a bun. She had never broken the childhood habit of walking on her toes, and this gave her the appearance of being strangely ready, as if she might need suddenly to run in any direction. As she waited for Justin to reply she kept her head cocked, and her expression had an openness about it that seemed to say she could not keep secrets.
Justin looked at her awhile before replying. “No help at all,” he told her. “Terry’s just too powerful.”
Melinda laid her hand on his arm. “You’ll find something.”
Justin’s gaze slowly moved around the village they both had lived in all their lives, as if he were trying to find something new there. “I don’t think so, Melly. I think the same thing will happen that always does. Terry will win. The question is how much I throw away fighting him.”
“Don’t think like that!” Melly urged. “It can work. It has to work! What will you do without the farm? You’ll have to sign on as a hired hand, and even if anyone nearby had enough work that they needed someone, there are so many workers you’d never find a living wage.”
Justin nodded. “I’d have to work for Terry.”
Melinda started back, showing suddenly the fiery side of her character. “You would never do that!” she said.
“I can fight his grab and get him angry, or I can try to cut a deal while he’s still friendly,” Justin told her. “Those are my choices.”
They began walking, falling easily in step together, the way good friends on well-known paths do. They passed the old mill at the stream with the abandoned half-built wall. The huge rotting waterwheel creaked ominously on its axel. They cut diagonal across the fallow field of the Jenson widow and helped one another up the crumbling retaining wall along the south road. In this way they entered the orchard of Justin’s dead father and silently walked among the long untended trees until they reached the farmhouse.
Melly and Justin went inside. It was early afternoon and many things on the farm needed doing, but the animals would not need tending for a few hours. Justin sat in the big cushioned armchair that had been his fathers, put his booted feet up on the wooden footstool, and shielded his eyes with his hands.
Melly regarded him from the doorway a moment, and went into the kitchen where lit a fire and began making stew. “Are you going to take it to court?” she asked.
“It’s not like that,” Justin replied, not moving. “They’re calling his death suspicious so they hold execution of the will. It’s not until they rule he died accidentally or find someone guilty that the property is freed up. But they’re not holding up the debt payment schedule.”
Melly turned her head. “Would it clear things up if you were put on trial?”
“I would rather not have it said I was tried for killing my father,” Justin replied. “And Terry doesn’t want that either because then I could pay him. They just want it immobilized.”
A hard persistent knock came at the door. Melly came to the kitchen doorway as Justin reached the outside door. The door was unbolted. Justin opened it and stared at the hulking man with the broken-toothed grin who stood outside.
Emperor Jeremiah rode along the mountain road with his best general, Elizabeth Pierce, and a small security contingent charged with their safety. A few guards who had asked for danger pay had ridden ahead disguised as rich and carefree nobles to draw out ambushers, and by the time the contingent reached the small cascade they believed the area secure enough to dismount and let the horses drink.
The Emperor was a small, neat man with straight black hair and a slightly nasal voice. He was handsome, almost pretty, and this in combination with his matter of fact manner often caused people who met him to make the dreadful mistake of doubting his ruthlessness.
He and General Pierce made an odd couple. There was nothing romantic between them, and never had been, but long professional association and kindred thinking had lent them a kind of family bond, which neither were aware of, but which had caused many observers to believe they were having an affair. They were aware of each other in that careless way that does not require being focused on one another, which we see between family members and in the closest friends.
General Pierce was a big, well-shaped woman with red hair and an easy physical presence like that often seen in successful military men. Next to her the Emperor seemed fussy, and when they bickered he nit-picked and she laughed him off. Emperor Jeremiah was at eye level to her chest, which filled out her uniform nicely, but this fact, which was so striking to others – even those in the Emperor’s inner circle sometimes confided they had never gotten used to it – had faded from the awareness of these two long ago.
“Eliza, I’ve been thinking a lot about Adriana,” the Emperor said. General Pierce did not reply immediately and he looked at her sharply. “What is it? Military problem?”
General Pierce shook off her thoughts. “I’m just thinking about troop strengths and travel times. Do you mean Adriana the clone or Adriana your wife?”
“The clone, Eliza,” the Emperor told her, as if she should have known better. “You’ve been teaching her chess. How is that going?”
“She brings out her queen too early. I can’t break her of it.”
“That’s not what I want to know,” the Emperor told her.
“She plays for fun and not to win,” Pierce said.
The Emperor grinned briefly. “That’s still not what I’m looking for.”
“What’s the problem, Jerry?” Pierce asked quietly.
“She likes you and she trusts you. She’s – what? – fifteen? It’s time for someone to tell her she’s a clone.”
The general laughed suddenly, once, and stopped short. “Sir,” she said, “there is no way in hell I’m getting involved with that.”
The Emperor looked at his general sidelong.
“Family stuff,” the general told him: “I don’t get into it. Not with my subordinates, either. Strict policy. Someone tells me they have something going on, outside pressure and –” she held up both hands. “I say hey, do what you need to and don’t tell me. I treat it like hostile magic. No-touchie.”
Jeremiah considered that for a moment. “Well, do you have any advice?”
The general gave him a kind look. “Jerry, did you not just hear me?”
The Emperor gave it up. “Fair enough. Isn’t it too early to get this dark?”
“It’s because we’re among the mountains. Locals call them the Giants. We get sundown early here.”
A guard brought them each their canteens, filled from the cascade. “We are ready to move on,” he reported.
It was quite dark by the time the Emperor’s contingent reached the small mountain fortress carved into the mountain face. They could not see the surrounding territory at all, but only the fortress walls and then the courtyard, which stretched out into shadow under the fiercely bright mountain stars.
A few people, both the local King’s and his own, were still awake in the torchlit main hall, but as a whispering attendant told him the King had retired, the Emperor went to the chambers assigned him without making a public appearance.
His own chamber, at least, was properly lit with magical lighting. Unclasping his cape and still in his riding boots, he made for the bedroom.
“Sir, the princess Adriana is waiting for you in there,” his manservant said.
Jeremiah wheeled around. “What’s the kid doing in my bedroom?” he asked. His manservant made an ‘I’m only a servant’ shrug, and Jeremiah continued on, more slowly. “’Princess,’” he muttered.
It was dark in his bedroom, and he brought up the magical light. The room was apparently empty, but there was a conspicuous lump under the covers of the canopy bed. Jeremiah weighed his cloak in one hand before tossing it over the wooden chair at his traveling desk. He sat on the edge of the bed.
“Hmm, what is this,” the Emperor said in a loud voice. “Clearly some assassin hiding in my boudoir.” The lump began to move slowly, and he prodded it with a finger. “Out, assassin! Confess or I will stab you through the blanket!” He propped himself on one elbow and prodded her again.
The blanket was pulled aside from beneath so the girl, who had tangled black hair and striking green eyes, could glare at him. “I was sleeping,” she accused.
“In my bed,” the Emperor said, and began tickling her through the blanket. This had a convulsive effect on the girl, who seemed almost to explode into a frenzy of thrashing and yelling.
“No!” she yelled. “No– Stop!”
“What is this?” Jeremiah demanded: “Armpit hair?”
The yelling and struggling finally reached its loudest when the girl shouted, “Dad-dy! Will you STOP!” And Jeremiah did stop.
The Emperor settled back on his elbow again and regarded the young girl. His whole demeanor became quite still.
“You shouldn’t be sneaking into strange men’s beds late at night,” he told her. “It’s not decent.”
Looking up at him, Adriana moved her head on the mattress from side to side. “You are,” she said. “I want you to be. Even though I’m adopted. I always wanted you to be my father, so that makes you my father.”
“Bah, get out of here,” Jeremiah told her. “You smell like that horse you ride. You know I’m allergic.” Indeed, his eyes had started watering.
“Why did you have to wake me up?” the girl demanded. “The bed is big enough for the two of us.”
“Because you stink!” Jeremiah said, laughing as the tears rolled down his cheeks. “I’ll have to get new sheets–Out!”
“I’m afraid of my room,” the girl complained. “And I do not smell like horse!”
“Then I must be allergic to YOU!” Jeremiah told her, and grabbed her by the collar of her loose-fitting nightgown as she started talking about the bath she had taken. “You’re a big girl and you’re sleeping in your own bed.”
He hauled her out of bed by main force and dragged her protesting to the doorway. At the doorway, though, she stopped him when a note of real panic came into her voice. “Daddy, PLEASE!”
“What is it, kid?” he asked, his face wet with tears, his expression still one of quieting laughter.
Jeremiah let her go as if she had stung him. “She wasn’t really your mother,” he said reflexively. “And there’s no way that could happen here. I wasn’t Emperor yet when that happened. People weren’t afraid.”
“I know, but – I think about her a lot. And I’ve never been this far from home.”
Jeremiah looked around. His manservant was standing far enough away not to be involved, but close enough to be addressed.
“Adriana is going to sleep in my bed tonight,” he told the man. “Bring a cot for me and place it alongside. Close enough the kid can touch me if she wants to in the night.”
“Sir,” the manservant said and, with a half-bow, left.
“I used to do that when I was a baby,” Adriana murmured, hugging him closely. “From my crib. You remember.”
“Yeah, sure,” the Emperor said, patting her. And he said, “This damn allergy. I wish I knew what it was.”
* * *
The document the unknown man with the broken-toothed smile had shown Justin and Melly, signed by the local baron, entitled him to sleep in the Wingate farmhouse and to eat at their table. They took him for a soldier, but he had explained he was a mercenary, a soldier for hire, and he told them he had further business with them when they were done with their farmwork. He had, he said, a second document.
Justin had not permitted Melly to stay in the house alone with the strange man, and the two of them returned when the sky was dark.
They found the mercenary, whose name was Dennis, had lit a candle from his own pack and had already eaten half Melly’s stew.
He was sitting in the big armchair.
Melly and Justin ladled stew into bowls for themselves. Justin sat at the rough wooden table while Melly dug candles out of the room’s chest and lit them from Dennis’s.
“You said you had further business,” Justin told Dennis.
“We’re looking for a guide,” the big, hairy man told Justin in his strangely cheery manner. “Your name came up.”
“A guide to where?” Justin asked. “Why would my name come up?”
“We’re headed to West Rock,” the mercenary told him. “A few days’ travel from here. You’ve been there a number of times.”
“Many of us have been to West Rock and the way is clearly marked–” Melly objected, but Dennis held out a hand to quiet her.
“It’s well-paid, and you have come to the baron’s attention as someone in particular need of money. And you know the baron’s daughter well. He thinks she would trust you. Is that true?”
“We used to play as children. Is she coming?”
“She is already there. She was kidnapped out of the big house last night. My crew is traveling through, on our way to the capitol, and the baron commissioned us for the rescue mission.
“But we can’t just show up and expect her to cooperate with strange men. We need someone she knows.” From the armchair, Dennis leaned forward to drop a sheaf of papers on the table in front of Justin. “The job’s yours if you say yes by tomorrow morning. Otherwise we’ll bring a servant from the big house.”
“We played when we were kids,” Justin said carefully. “I wasn’t her favorite.”
“Maybe her favorite was the one who kidnapped her,” Dennis answered, standing. The mercenary took up his pack and his candle.
“Good night. The bed’s in here?”
He went into the master bedroom.
Justin picked up and read the paperwork. “What is it?” Melly asked.
“Permission from the baron to leave his land. And the promise of enough money to keep Terry quiet through winter. –‘Until I return’ is the wording.”
“Do you think the baron is keeping his eye on you?” Melly asked.
“More likely he wants to take Terry down a peg.” Justin shook his head. “It’s a marvel.”
“Oh, Justin, don’t do it,” Melly pleaded. “It’s not worth it.”
Justin grinned at her. “You’re just nervous, like women are. These things need to be done with courage.”
“No, Justin,” Melly told him. “Women have intuitions in situations like these. If you go you’re not coming back.”
“Oh, I’ll come back,” Justin said, and laughed. “I promise.”
Hearing that laugh, Melly shuddered.
* * *
Late that night a man walked into a small, easily overlooked valley in the mountains. He wore a dark cloak and traveled light, with only a small pack on his back. The sentry, although alert, would easily have missed him if not for his boots crunching on the loose stone of the trail.
The man turned slowly. Even considering the deep darkness of the night, there was something about the traveler that was hard to see, giving rise in the sentry’s mind to the momentary thought that the man was a ghost.
“Harold,” the man said simply.
“Should I know you, Harold?” the sentry, who in the daytime was a farmer and a fool to no one, asked sharply.
“No, I don’t think so,” Harold replied. “No one knows me.”
“Step into the guard-house, Harold,” the sentry ordered. “I’d see your face.”
His hands folded in front of him, Harold meekly obeyed. Still gripping his pike, the sentry followed him into the guardhouse where a small fire burned.
Harold turned. He had a hook nose, but to call it a hook nose did not do it justice. Its bridge was so high and so thin it called to mind the prow of a boat. Harold’s narrow face was dirty and three or five days unshaven, with a scraggly beard in that unpleasing half-grown state that needs but defies trimming. His expression was mild and everything about him gave the impression of weakness.
The sentry relaxed. His first impression of this man, Harold, on the trail, had been of coolness, of composure. Now he saw the man was sweating heavily. The man’s cloak had been mended many times, and had dirt worked deeply into its fabric. A few small holes had worn through at the corners of his battered canvas pack.
“Harold. What’s your trade and where were you born?”
“I was born in Teshreville. I’m a teacher.”
“You’re a long way from your school, Harold. What do you teach?”
“Writing and figures. A little history, a little verse. I know many things some and a few things well. I wouldn’t have thought a small valley like this would need a night guard.”
“There are rebels in these mountains,” the sentry told him. “What are you doing here? Where is your school?”
“I don’t have a school, sir. I travel, teach a bit for food, and move on.”
“You mean you beg,” the sentry sneered.
“Yes, sir. I beg.”
“You’ll find no work in this valley, and less charity. Sleep in the guardhouse. You can cook at this fire if you have supplies. You’ll move on come morning.”
“Yes, sir–” Harold replied, but the sentry, shaking his head with contempt, had already left.
These were the ways, therefore, that these three men slept that moonless night. Emperor Jeremiah slept on a military cot beside the great canopy bed where the teenage clone of his dead wife slept, who once whispered “Daddy” and put her hand on his shoulder, which he grasped without really waking up. Justin slept in the old bed of his boyhood, which he had newly graduated out of on the death of his father, but had returned to because a mercenary named Dennis with a letter from the local baron had claimed the master bedroom. And Harold slept on the floor a little away from the small fire in a little guardhouse in an unknown valley of farmers who had started posting night watchmen against the possibility of rebels.
All across the Kingdoms of the Bowl, that collection of once-feuding kingdoms ringed by mountains on three sides and facing the sea on the fourth, brought together in the last generation by Emperor Jeremiah under one banner, other citizens of the Bowl slept as their circumstances allowed. Some, like Melly, felt for no known reason this would be the last untroubled night they would have for a long time. Others, like General Pierce, knew the reasons and forecast, with varying degrees of accuracy and with varied predictions, how events would unfold. Most slept in ignorance of what the future would bring.
War was coming to the Empire of the Kingdoms of the Bowl: a terrible civil war that was smelled on the wind by some already, and that would creep up on others unawares, but that would leave none of them untouched. During all the horrors that followed, this was the night that these three men would think back to and say: I slept well that night. I slept well because I did not know.
Minimizing underestimation rate of microcalcifications excised via vacuum-assisted breast biopsy: a blind study.
The main disadvantage of Vacuum Assisted Breast Biopsy (VABB) is the probability of underestimating atypical ductal hyperplasia (ADH) and ductal carcinoma in situ (DCIS). This study evaluates a modified way of performing VABB. 266 women with microcalcifications graded BI-RADS 3&4 underwent VABB (11G) on the Fischer's table. 133 women were allocated to the "standard" protocol and 24 cores were obtained (one offset, the main target, and one additional offset). 133 women were randomly allocated to the "extended" protocol and 96 cores were excised (one offset, the main target, and 7 peripheral offsets). A preoperative diagnosis was established, and the removed volume was calculated. When precursor or malignant lesions were diagnosed, open surgery was performed. A second pathologist, blind to the preoperative results and to the protocol, made the postoperative diagnosis. The discrepancy between preoperative and postoperative diagnoses was evaluated. When the standard protocol was applied, the underestimation rate for preoperative ADH, lobular neoplasia (LN), DCIS was 16.7%, 50% and 14.3% correspondingly. In the extended protocol, no underestimation was present in LN, ADH, but the underestimation rate for DCIS was 6.3%. In the extended protocol, no precursor/malignant tissue was left after VABB in all ADH cases, in 87.5% of LN cases, in 73.3% of DCIS, and in 50% of invasive carcinomas. The volume excised was 2.33 +/- 0.60 cc and 6.14 +/- 1.30 cc for the standard and the extended protocol, respectively. The rate of hematoma formation did not differ between the two protocols. This recently introduced, "extended" way of performing VABB in microcalcifications safely minimizes the underestimation rate, which may lead to a modified management of ADH lesions.
164 F.3d 1071
Barbara A. GAVONI, Angela K. Rosendale, and Lela Renee Jordan, Plaintiffs-Appellants/Cross-Appellees, v. DOBBS HOUSE, INC., Defendant-Appellee/Cross-Appellant.
Nos. 97-3806, 97-3875.
United States Court of Appeals, Seventh Circuit.
Argued Oct. 27, 1998. Decided Jan. 13, 1999.
John T. Moran, Jr. (argued), D. Seth Holliday, Chicago, IL, James R. Koby, Parke, O'Flaherty, Ltd., LaCrosse, WI, for Plaintiffs-Appellants in No. 97-3806.
John T. Moran, Jr. (argued), Chicago, IL, James R. Koby, Elizabeth A. Wright, Parke, O'Flaherty, Ltd., LaCrosse, WI, for Plaintiffs-Appellees in No. 97-3875.
Robert M. Chemers, Edward B. Ruff, Scott L. Howie (argued), Pretzel & Stouffer, Chicago, IL, for Dobbs Houses, Inc. in No. 97-3806.
Scott L. Howie (argued), Pretzel & Stouffer, Chicago, IL, for Dobbs Houses, Inc. in No. 97-3875.
Before CUMMINGS, CUDAHY and FLAUM, Circuit Judges.
CUDAHY, Circuit Judge.
1
The plaintiffs, Barbara Gavoni, Angela Rosendale and Lela Jordan, worked together as manicurists and hair dressers at a salon in La Crosse, Wisconsin and attended a cosmetology convention at a Chicago-area hotel owned and operated by the defendant, Dobbs House. On the morning of March 29, 1993, the plaintiffs and another convention attendee boarded an elevator on the eighth floor hoping and expecting to go down to the lobby. Their hopes were dashed--literally and figuratively. The elevator went up to the eleventh floor, stopped briefly, descended to the lower lobby at an uneven rate and abruptly stopped. When the doors did not open, Jordan pressed the alarm button and tried calling for help on the emergency telephone. The riders eventually made enough noise to alert the hotel staff. A maintenance employee, Edward Johnstone, arrived within minutes and rescued the riders. Johnstone testified that when he opened the elevator doors he found "three young ladies standing there laughing and joking around." He also claimed that each held a drinking glass with liquid in it. The plaintiffs admitted that Jordan made a sarcastic comment when asked if they wanted to be rescued but denied both that they were jocular and that they had drinks in hand. The plaintiffs told their rescuers that they were not injured, sat down for several minutes and then proceeded to the convention. That afternoon, Jordan and Rosendale competed in a fingernail decoration competition while Gavoni returned to their room. The plaintiffs later dined together and attended a cocktail party and hair show. Each had several alcoholic drinks; Jordan and Rosendale were dancing.
2
The next day, on the bus back to La Crosse the plaintiffs discussed the elevator incident and decided to go to the emergency room together that same night. Each complained of neck, shoulder and upper back pain. Over the next several years, each of the plaintiffs continued to experience pain and discomfort.
3
The plaintiffs brought this diversity suit against Dobbs House and the elevator manufacturer, Westinghouse. Pursuant to FED.R.CIV.P. 68, Dobbs House made an offer of $10,000 "to be divided among all three plaintiffs, with costs then accrued." The plaintiffs rejected the offer. Prior to trial, the plaintiffs settled their claims against Westinghouse for a total of $105,000: Gavoni received $17,850; Rosendale got $44,100; and Jordan received $43,050.
4
The case against Dobbs House proceeded to trial. Dobbs House never denied that the elevator had malfunctioned; experts from both sides testified that the incident was likely caused by a faulty electrical connection. Instead, Dobbs House argued that the plaintiffs had inflated their injuries and consequent damages, making a federal case out of a minor accident. The plaintiffs presented testimony from a single doctor that the elevator incident caused each plaintiff a variety of ailments, from sore knees to cracked teeth to chronic back pain. Dobbs House presented expert testimony contradicting this conclusion. At closing, the plaintiffs sought $825,000 in damages: $230,000 for Gavoni, $320,000 for Rosendale and $275,000 for Jordan. The jury found against Dobbs House on liability but awarded the plaintiffs a relatively paltry $6500--$2000 for Gavoni, $2000 for Rosendale and $2500 for Jordan.
5
Following the verdict, the plaintiffs moved for costs as the prevailing party under FED.R.CIV.P. 54(d). They also moved under FED.R.CIV.P. 59 for a new trial on various grounds. Dobbs House, for its part, moved for costs pursuant to FED.R.CIV.P. 68. The court denied all three motions.
6
The plaintiffs raise five separate issues on appeal. Their first four complaints were originally included in their Rule 59 motion, and the plaintiffs do little, if anything, to develop these arguments on appeal. Although we might be free to ignore these undeveloped arguments, cf. Indurante v. Local 705, Int'l Bhd. of Teamsters, AFL-CIO, 160 F.3d 364, 366-67 (7th Cir.1998) (citing cases), we will nonetheless briefly consider them. The plaintiffs also appeal the denial of their Rule 54(d) motion for costs and the defendant cross-appeals from the denial of its Rule 68 motion for costs.
7
The plaintiffs first allege that the district court erred by allowing the jury to view one side (the left side) of an expert witness's chart. Dobbs House's elevator expert, John Donnelly, referred to a chart which compared the G-forces of a normally functioning elevator (the right side of the chart) with the G-forces of everyday human activity (the left side of the chart). Donnelly had prepared only the right side, but was familiar with the left. After a prolonged argument at side bar and a foundation voir dire by the judge and defense counsel, the district court ruled that the chart was "expert data that is utilized in the field and, therefore, for the limited purpose of his testimony I will permit the inquiry." We will not disturb a district court ruling on expert testimony unless it is manifestly erroneous, see Deimer v. Cincinnati Sub-Zero Prod., Inc., 58 F.3d 341, 344 (7th Cir.1995), and there is no such error here. The district court made an extensive inquiry into the foundation for the expert's testimony and admitted it subject to being later stricken. Further, and directly relevant to their present argument, the plaintiffs did not ask that the left side of the chart (not prepared by the witness) be covered or redacted. Defense counsel later displayed the entire chart during closing argument and the plaintiffs now argue that this was error. Failure to demand redaction or that the chart be covered waives these arguments on appeal. See Miksis v. Howard, 106 F.3d 754, 761 (7th Cir.1997); Holmes v. Elgin, Joliet & Eastern Ry. Co., 18 F.3d 1393, 1398 (7th Cir.1994).
8
The plaintiffs next claim that the district court erred by allowing the defense to argue that the plaintiffs' presentation of only one doctor as a witness created an inference of collusion and faked injuries. The plaintiffs claim this argument was prejudicial and violated a motion granted in limine. The record on appeal, however, contains no evidence of such a motion. And in any event, trial courts have broad discretion to allow or prohibit argument on close, see, e.g., Miksis, 106 F.3d at 764, and we find no abuse here.
9
The plaintiffs' third complaint is convoluted but also concerns the defendant's closing argument. During the plaintiffs' case in chief, plaintiffs' counsel attempted to have Gavoni testify about an alleged conversation with a hotel employee, John Cusimano. Dobbs House objected on hearsay grounds. At side bar, the plaintiffs argued that Cusimano's statements were party opponent admissions. The court heard more argument, referred to depositions and sustained Dobbs House's objection. During his summation, defense counsel, over the plaintiffs' objection, which was denied, argued that the plaintiffs' failure to produce the fourth elevator rider supported the defense theory that the plaintiffs' injuries were illusory. On appeal, the plaintiffs attempt to link these rulings. We do not see the connection--the two rulings were distinct in both time and content. More, the district court was well within its broad discretion both in allowing the argument on close, see, e.g., id., and in refusing to admit the hearsay testimony. See, e.g., Cook v. Navistar Int'l Trans. Corp., 940 F.2d 207, 212-13 (7th Cir.1991); id. at 215.
10
The plaintiffs' fourth complaint is that the jury's award was so minuscule, so unsupported by the evidence that it should have offended the conscience of the court. The district court was not persuaded, and neither are we. We review a district court's decision whether to set aside an award for an abuse of discretion, and the plaintiffs must show that "there is no rational connection between [the award] and the evidence." Raybestos Prod. Co. v. Younger, 54 F.3d 1234, 1244 (7th Cir.1995) (internal quotations and citations omitted). This is a heavy burden. Here, because the record on appeal can support the jury's award, the district court did not abuse its discretion. See Blumenfeld v. Stuppi, 921 F.2d 116, 118 (7th Cir.1990).
11
The plaintiffs' fifth complaint--that the district court erred in failing to award their costs pursuant to FED.R.CIV.P. 54(d)--is their most substantial challenge, but it too fails. FED.R.CIV.P. 54(d) provides in pertinent part that "costs other than attorneys' fees shall be allowed as of course to the prevailing party unless the court otherwise directs." The plaintiffs argue that because they prevailed, the district court was obliged to award them costs. This argument ignores both the language of the Rule and well-settled law in this Circuit. Rule 54(d) expressly grants the trial court discretion in awarding costs--the prevailing party wins "unless the court otherwise directs." Further, courts have especially broad discretion to award or deny costs in mixed result cases, see, e.g., Testa v. Village of Mundelein, 89 F.3d 443, 447 (7th Cir.1996), including cases in which liability was established but recovery was nominal relative to what was sought. See Northbrook Excess & Surplus Ins. Co. v. Proctor & Gamble Co., 924 F.2d 633, 641-42 (7th Cir.1991). The jury here awarded each plaintiff less than one percent of what she requested. The district court, in denying the plaintiffs' Rule 54(d) motion for costs, therefore did not abuse its discretion. Thus, on each of the plaintiffs' appeals, we affirm the district court.
12
We also affirm the denial of Dobbs House's motion for FED.R.CIV.P. 68 costs. Under FED.R.CIV.P. 68, if a plaintiff rejects a defendant's settlement offer and "the judgment finally obtained by the offeree is not more favorable than the offer," then the plaintiff "must pay the costs incurred [by the defendant] after the making of the offer." Dobbs House claimed that its unapportioned $10,000 offer triggered this cost-shifting provision. The district court disagreed, holding that it was impossible to determine whether the offer was more favorable than the individual jury awards. We review the district court's underlying factual findings for clear error. See Arkla Energy Resources v. Roye Realty & Dev., 9 F.3d 855, 866-67 (10th Cir.1993). To the extent the entitlement to costs rests on an interpretation of the Rule, we review the district court's legal conclusions de novo. See Herrington v. County of Sonoma, 12 F.3d 901, 906 (9th Cir.1993).
13
The defendant must show that the offer was more favorable than the judgment and that the mandatory cost-shifting provision was therefore triggered. See generally 12 CHARLES ALAN WRIGHT, ARTHUR R. MILLER, & RICHARD L. MARCUS, FEDERAL PRACTICE AND PROCEDURE § 3006.1 (2d ed.1997). Defendants should bear this burden for two reasons. First, because Rule 68's cost-shifting provision is mandatory and applies to individual parties--the "offeree must pay the costs incurred" by the offeror, FED.R.CIV.P. 68 (emphasis added); see also Webb v. James, 147 F.3d 617, 621 (7th Cir.1998); Mallory v. Eyrich, 922 F.2d 1273, 1279 (6th Cir.1991)--plaintiffs face serious consequences in either accepting or rejecting a Rule 68 offer. See Webb, 147 F.3d at 621. A judgment less favorable than the offer requires that a plaintiff pay the defendant's usually substantial post-offer costs. There must therefore be a clear baseline from which plaintiffs may evaluate the merits of their case relative to the value of the offer. See id.; Arkla, 9 F.3d at 866 ("the offeree must know what is being offered in order to be responsible for refusing the offer"); Radecki v. Amoco Oil Co., 858 F.2d 397, 402-03 (8th Cir.1988). Cf. Gay v. Waiters' & Dairy Lunchmen's Union, Local 30, 86 F.R.D. 500, 502 (N.D.Cal.1980) (Rule 68 offers of judgment function by forcing "an individual offeree to weigh his own exposure to liability for the offeror's subsequent costs against his own expected recovery, thereby encouraging a close evaluation of the merits of his claim") (emphasis added). Further, plaintiffs should not have to speculate how courts will interpret an offer; "a defendant should state his intentions clearly, and any failure to do so will be at his peril." Chambers v. Manning, 169 F.R.D. 5, 8 (D.Conn.1996) (citation omitted).
14
Second, courts also need easily comparable sums. In applying Rule 68, courts have "no discretion to alter or modify the parties' agreement." Webb, 147 F.3d at 621. Courts have therefore consistently resisted efforts by either party to qualify or explicate the terms of an accepted offer, see, e.g., Herrington, 12 F.3d at 907 (rejecting the plaintiffs' efforts to "inject ambiguity into the settlement offer"); Shorter v. Valley Bank & Trust Co., 678 F.Supp. 714, 720 (N.D.Ill.1988) (refusing to consider the history of the settlement negotiations to support the defendant's interpretation of the offer because "[t]he offer was what the written offer said"); Sas v. Trintex, 709 F.Supp. 455, 458 (S.D.N.Y.1989) (rejecting the defendant's argument that the court should determine the parties' intentions), or the terms of a rejected offer. See Johnston v. Penrod Drilling Co., 803 F.2d 867, 870 (5th Cir.1986) (rejecting multiple defendants' arguments that a court should itself divide an unapportioned rejected joint offer because Rule 68 demands that courts compare only "two clearly defined figures").1 And there is good reason for this narrow approach: after-the-fact efforts to clarify offers "undermine the Rule's purpose of encouraging settlement and avoiding protracted litigation." Webb, 147 F.3d at 621 (citing Sas, 709 F.Supp. at 458). On the other hand, clarifying the terms of an offer before it has been accepted or rejected furthers the purpose of the Rule. See Radecki, 858 F.2d at 403.
15
Dobbs House failed to carry its burden. Dobbs House urges us either to compare its $10,000 offer with the total value of the jury's verdict, $6500, or to use $3333, one-third of its offer, as a point of comparison with the individual verdicts. Alternatively, Dobbs House correctly points out that, mathematically, its offer would have had to be more favorable than the individual judgment of at least one of the plaintiffs. These varied constructions of the single offer only underscore its fatal problem: imprecision. The plaintiffs simply could not have evaluated the individualized values of the offer. Similarly, without two precise figures to compare, the district court was in no position to resolve the lack of precision. And neither are we. Cf. Webb, 147 F.3d at 621; Johnston, 803 F.2d at 870. The "lump sum" case which Dobbs House cites, Blumel v. Mylander, 165 F.R.D. 113 (M.D.Fla.1996), and others like it, are not to the contrary. Those cases descend from Marek v. Chesny, 473 U.S. 1, 105 S.Ct. 3012, 87 L.Ed.2d 1 (1985), in which the Supreme Court held that Rule 68 did not require a defendant to "itemize the respective amounts being tendered for settlement" between damages for the substantive claim and costs. Id. at 6, 105 S.Ct. 3012. The Court reasoned that such a requirement would "not in any way help plaintiffs know in advance whether the judgment at trial will exceed a defendant's offer." Id. at 7, 105 S.Ct. 3012. In the instant situation, these predictive considerations--whether precision in the offer aids a plaintiff's evaluation of the offer--cut the other way; an offer itemized by individual plaintiff would have helped Gavoni, Rosendale and Jordan independently evaluate the offer.
16
Dobbs House also pins its hopes on a policy argument. It argues that Rule 68 should be interpreted to further the Rule's underlying purpose of promoting settlement. Rule 68 is designed to encourage parties to evaluate objectively the strength of their cases; financial and judicial economy are at its core. See Marek, 473 U.S. at 5, 105 S.Ct. 3012. These important policies do frequently animate Rule 68 decisions, see, e.g., Delta Air Lines, Inc. v. August, 450 U.S. 346, 352-56, 101 S.Ct. 1146, 67 L.Ed.2d 287 (1981), but they are not subverted here.
17
Requiring a defendant to apportion a settlement offer made to multiple plaintiffs would not undermine Rule 68's policy toward settlement. A requirement of plaintiff-specific offers would not discourage a defendant from tendering a Rule 68 offer. A defendant with a total figure in mind could simply apportion the figure to reflect the relative values of the plaintiffs' alleged injuries. For example, Dobbs House could have gauged the plaintiffs' individualized offers using either the Westinghouse settlement ("because you, Gavoni, got 17% of the Westinghouse settlement, we offer you $1700") or the plaintiffs' requested damages ("because you, Gavoni, are going to ask for 28% of the total requested damages, we offer you $2800"). Alternatively, if a defendant did not have such a readily accessible benchmark or was reluctant to specify the relative values of the individual claims, it could offer each plaintiff an equal share of the fixed figure. In the instant case, Dobbs House could have offered Gavoni, Rosendale and Jordan $3333 each. At the very least, the precisely apportioned offers might precipitate further negotiations between the defendant and all or some of the plaintiffs.
18
Similarly, a precision requirement would not deter plaintiffs from accepting individualized offers. One or more of a group of plaintiffs might accept an individualized offer. For every settling plaintiff, the pressure on the remaining plaintiffs to settle increases, since they become potentially responsible for a greater share of the post-offer costs should their individual judgment be less favorable than the precise offer. For example, if Dobbs House had offered $3333 to each plaintiff and Gavoni had accepted, then Rosendale and Jordan would have been responsible for one-half, rather than one-third, of Dobbs House's post-offer costs. They would have risked more by going to trial. Had Gavoni and Rosendale both accepted such an individualized offer, Jordan would have faced an even bigger gamble. In any event, agreement of some plaintiffs to settle would probably at least stimulate further negotiations by the nonsettling plaintiffs. Individualized offers, then, would probably increase the incentives for both sides to settle. In short, we can see no reason why divided offers to multiple plaintiffs are more likely to frustrate settlement than Rule 68 offers to a single plaintiff.
19
Finally, explicitly apportioned offers might avoid potential derivative litigation. In this case, had the district court granted the defendant's motion for costs under Rule 68 based on Dobbs House's unapportioned $10,000 offer, the plaintiffs (or perhaps only one or two of them) would likely have appealed that order. For example, if the district court had held each plaintiff responsible for one-third of Dobbs House's post-offer costs, Gavoni, who received only 17% of the Westinghouse settlement and asked for only 28% of the total requested damages, might have appealed the district court's apportionment. (And, again, we would have been in no better position than the district court to apportion after the plaintiffs' rejection of the offer.) Gavoni might have sued her fellow plaintiffs to recover the difference between the one-third portion that the district judge assigned her, in this hypothetical, and the 17% from the Westinghouse settlement or the 28% from the requested damages. She might even have accused her co-plaintiffs of scuttling a deal she was ready to accept and ask for full reimbursement for costs plus her portion (whatever it might have been) of Dobbs House's rejected unapportioned $10,000 offer. Individualized offers might limit the chances of such derivative collateral litigation. Thus, precision is efficient and promotes finality. Cf. Webb, 147 F.3d at 621; Arkla, 9 F.3d at 867; Radecki, 858 F.2d at 402-03; Johnston, 803 F.2d at 870; Chambers, 169 F.R.D. at 8.2
20
The district court did not err in any of its evidentiary decisions or its post-trial rulings on the parties' motions for costs. The case is therefore AFFIRMED.
21
FLAUM, Circuit Judge, dissenting in part.
22
I join in all but one part of Judge Cudahy's well reasoned opinion. Rule 68 allows a defendant to recover all costs incurred after a plaintiff rejects its settlement offer provided that "the judgment finally obtained by the [plaintiff is] not more favorable than the offer." This rule is designed to discourage wasteful litigation by forcing plaintiffs to soberly assess the value of their claims when a defendant makes a reasonable offer on the eve of trial. Marek v. Chesny, 473 U.S. 1, 5, 105 S.Ct. 3012, 87 L.Ed.2d 1 (1985). When a pre-trial offer is refused and the trial fails to produce a judgment more favorable to the plaintiff, all litigation subsequent to that rejected offer, and all costs associated with it, have produced no meaningful benefit. This is because both parties are left in a worse position than they would have been had the plaintiff accepted the offer. Id. at 11, 105 S.Ct. 3012. The rule seeks to hold responsible those parties who incur this expense by not realistically assessing the merits of their claims. Here, the plaintiffs rejected Dobbs House's reasonable, good faith offer, and the judgment they finally obtained was clearly less favorable than the offer had been. Thus, because the award of costs in this case is fully consistent with the purpose of Rule 68, I respectfully dissent from this part of the panel's decision.
23
Prior to trial, Dobbs House made a lump sum offer of judgment for $10,000 to the attorney representing all three plaintiffs. After trial, the plaintiffs were awarded a total of only $6,500. However, the panel is now denying the defendant the mandatory operation of Rule 68 because Dobbs House had not apportioned the offer among the three plaintiffs. I find nothing in the language of Rule 68, nor in the case law interpreting it, which compels this result. Id. at 6-7, 105 S.Ct. 3012. While a defendant must come forward with a clear offer under Rule 68, the question of who bears the burden of apportioning a lump sum offer is open. The policy goals underlying the rule suggest that the party best able to apportion it accurately should be required to do so. Dobbs House's single act of negligence was alleged to have caused the plaintiffs' injuries. No one was in a better position to assess the relative extent of the individual injuries than the plaintiffs themselves and they were capable of accurately dividing the offer. This is exactly what they had done with the earlier lump sum offer of $105,000 from Westinghouse. If a satisfactory agreement could not be reached among the plaintiffs, they could then have requested individual settlements with the defendant. They never made such a request. I suggest that requiring that individual settlement offers must in all cases originate from the defendant puts form over substance. The panel's decision relieves the parties best able to assess the relative degree of their own injuries from the responsibility to do so in the first instance. Instead it forces defendants to guess at how plaintiffs might divide a given settlement amount. Future plaintiffs can avoid Rule 68's cost shifting provision, and the sober assessment that it entails, by simply remaining silent anytime there is a lump sum offer.
24
I am also not convinced that, in a matter such as the one before us, an unapportioned offer makes it impossible to determine if that offer is more favorable than the final judgment. I read nothing in the statute which precludes comparing a lump sum offer with the sum of the judgments finally received where plaintiffs are represented by the same attorney and complain of similar injuries resulting from the defendant's undifferentiated act of negligence. In this case, Dobbs House offered $10,000 while Gavoni was awarded $2000, Rosendale $2000 and Jordan $2500. Because $10,000 is greater than $6,500, the sum of the plaintiffs' judgments, they should be required to pay the defendant's costs. Moreover, even if individual comparisons are required, the plaintiffs have provided an appropriate benchmark. After rejecting Dobbs House's offer, plaintiffs requested $825,000 in damages. Of this amount, Gavoni sought 28%, Rosendale 38% and Jordan 33%. We should assume that these percentages represented the plaintiffs' most accurate assessment of the relative merits of their different injury claims at the time of trial. I see nothing improper in holding the plaintiffs to the proportions of their own requests. Using these percentages, and applying them to the $10,000 Dobbs House offered, it is clear that each plaintiff's judgment was less favorable than their portion of the offer had been.1
25
Finally, I am not persuaded by the panel's warning that applying Rule 68 in cases like this would increase derivative litigation over how much of the costs each plaintiff should bear. This is essentially a situation created by the plaintiffs' decision to be represented by the same attorney. To the extent it imposes a burden on the plaintiffs by requiring them to be clear among themselves before accepting or rejecting an offer, this is certainly part of the responsibility they assumed by choosing to proceed together. The plaintiffs did not reject Dobbs House's offer because they could not figure out how to divide the lump sum among themselves. They rejected it because they thought they could do better before a jury. The plaintiffs' assessment proved to be incorrect and Dobbs House should not now be required to pay for their miscalculation.
1
Courts generally rely on contract principles to interpret Rule 68 offers. See, e.g., Webb, 147 F.3d at 620. The dissent appears to construe the rejected imprecise offer against the offerees. We believe that the offeror should bear the burden of precision; that is, we believe that the contract must be construed against the drafter
2
Here, there is a major difference between $10,000 and $6500, making it easy to argue, as does the dissent, that under a broad range of arguable apportionments, all the plaintiffs would have done better with the offer than with the judgment. In a slightly different case, however, where the judgment might more closely approximate the offer, the mode of apportionment would be quite critical in determining which plaintiffs could be penalized under Rule 68. For example, had the jury awarded $3000 to Gavoni (instead of $2000), the sum of the judgments would still be less favorable than Dobbs House's unapportioned offer ($7500 is less than $10,000). In this hypothetical, Gavoni gets: (1) a greater percentage of the total award than she requested (40% is greater than 28%); (2) more than her requested percentage (28%) as applied to the offer would have yielded ($3000 is greater than $2800); (3) a greater percentage than she received from the Westinghouse settlement (40% is greater than 17%) and; (4) a greater percentage of the award than an equal split (40% is greater than 33%); but still (5) less than what an equal split of the offer would have yielded ($3000 is less than $3333). In this scenario, was Gavoni's judgment more favorable than the offer? Gavoni, angling to get off the hook for costs, would answer in the affirmative, citing calculations (1) through (4); Rosendale and Jordan, potentially on the line for half rather than a third of the costs, would almost certainly disagree, relying on calculation (5). So, a minor deviation in the present numbers would complicate things greatly. Indeed, even as it now stands, an equitable division is not patently obvious; Gavoni's award of $2000 is about 31% of the total award, which is greater than the 17% of the Westinghouse settlement she received and, as we have indicated, only one of many possibilities a court assessing costs might choose. 
The dissent's assumption that the requested percentages best represent the relative merits of the plaintiffs' claims conflicts with the plaintiffs' different division of the Westinghouse settlement (as well as with the jury's award)
An appropriate rule here should not rest solely on the special facts of this case, but should be suitable for broader application. Requiring the offeror, always a defendant, to make an offer precise enough to enable each offeree, always a plaintiff, to assess the merits of her claim relative to the value of the offer is the better rule for this case and others.
1
Application of the requested percentages to Dobbs House's $10,000 offer, and a comparison of the result with each plaintiff's trial judgment, breaks down as follows: Gavoni: .28 X $10,000 = $2,800 > $2,000. Rosendale: .38 X $10,000 = $3,800 > $2,000. Jordan: .33 X $10,000 = $3,300 > $2,500
| {
"pile_set_name": "FreeLaw"
} |
1. Field
Embodiments may relate to a passively cooled electronic device, such as a laptop computer or a notebook computer.
2. Background
Notebook computers and/or laptop computers may generate heat when operating. A fan may be provided within the notebook computer and/or the laptop computer in order to dissipate the generated heat. | {
"pile_set_name": "USPTO Backgrounds"
} |
Q:
Is there an easier way to solve the linearization of this composite function?
Given $f(x)=x^4-4x^2-6x+4$ and $g(x) = 4x^3+3x^2-10x+2$
a. Approximate the change in $f(g(x))$ as x changes from 1 to 1.02.
b. Approximate $f(g(1.02))$
Is there an easier way to solve this problem without having to find what $f(g(x))$ is, finding the derivative of it, and also plugging in 1 and 1.02? Thanks.
Edit: answers are: a. $-0.32$ and b. $6.68$
A:
We don't have to find $f(g(x))$ explicitly.
We can use the chain rule to get the derivative:
$$\frac{d}{dx}f(g(x))= f'(g(x))g'(x)$$
and then linearize about $x=1$:
$$f(g(x)) \approx f(g(1)) + \frac{d}{dx}f(g(x))\Big|_{x=1}(x-1)$$
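Plugging in the numbers from the question (the derivative formulas below are just $f'$ and $g'$ computed by hand), the linearization can be checked numerically; a minimal sketch in Python:

```python
def f(x): return x**4 - 4*x**2 - 6*x + 4
def g(x): return 4*x**3 + 3*x**2 - 10*x + 2
def fp(x): return 4*x**3 - 8*x - 6       # f'(x)
def gp(x): return 12*x**2 + 6*x - 10     # g'(x)

# Chain rule at x = 1: f'(g(1)) * g'(1) = f'(-1) * 8 = (-2) * 8 = -16
slope = fp(g(1)) * gp(1)

# (a) approximate change in f(g(x)) as x goes from 1 to 1.02
change = slope * (1.02 - 1)   # -16 * 0.02 = -0.32

# (b) linear approximation of f(g(1.02))
approx = f(g(1)) + change     # 7 + (-0.32) = 6.68
print(change, approx)
```

This reproduces the posted answers of $-0.32$ and $6.68$ without ever expanding $f(g(x))$.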
| {
"pile_set_name": "StackExchange"
} |
Wes Tillott
Wes Tillott is an Australian former professional rugby league footballer who played in the 1990s and 2000s. He played for South Sydney and North Sydney in the NRL competition.
Playing career
Tillott made his first grade debut for North Sydney in round 4 1999 against North Queensland at Lang Park which ended in a 26–18 victory. The match is remembered for having one of the lowest ever attendances in the NRL era with only 3382 spectators showing up for the match. At the end of the 1999 NRL season, North Sydney were forced to merge with arch rivals Manly-Warringah to form the Northern Eagles as part of the NRL's rationalisation policy.
In 2004, Tillott signed for South Sydney and made his debut for the club in round 8 against the Newcastle Knights. In round 14 2004, Tillott scored 2 tries as Souths defeated Melbourne 28–26 at the Sydney Football Stadium. Tillott's final game in the top grade came in round 25 2004 against the Brisbane Broncos which finished in a 34–34 draw with Tillott scoring a try.
In 2009, it was revealed that Tillott was playing for the Wyong Roos in the Central Coast Division Rugby League competition.
References
Category:Australian rugby league players
Category:South Sydney Rabbitohs players
Category:North Sydney Bears players
Category:Wyong Roos players
Category:1979 births
Category:Living people
Category:Place of birth missing (living people)
Category:Rugby league wingers
Category:Rugby league fullbacks | {
"pile_set_name": "Wikipedia (en)"
} |
Whilst every care has been taken in the preparation of these particulars, their accuracy is not guaranteed. They are intended as guides only and as such do not constitute a part of any contract. A prospective purchaser is advised to check all particulars and where appropriate and at his own expense to employ a qualified marine surveyor to carry out a survey and/or to have an engine trial conducted, which if conducted by us shall not imply any liability on our part. All dimensions, performance figures & range notes given are approximate and should be treated as a guide only. Pictures may be from a brochure or of a similar vessel. Engine hours shown are correct at the time of preparation of the particulars, but will vary with usage. Availability of specific vessels may be subject to change, we therefore recommend you call for confirmation. N.B. Ribs, dinghies, water toys and out-board engines are not included in the sale unless otherwise stated. All negotiations and agreements are subject to exchange of final written contracts. | {
"pile_set_name": "Pile-CC"
} |
package com.tonyodev.fetch2fileserver.database
import android.arch.persistence.room.Database
import android.arch.persistence.room.RoomDatabase
import com.tonyodev.fetch2fileserver.database.FileResourceInfoDatabase.Companion.DATABASE_VERSION
/** Room database holding the cached [FileResourceInfo] metadata for the file server. */
@Database(entities = [FileResourceInfo::class], version = DATABASE_VERSION, exportSchema = false)
abstract class FileResourceInfoDatabase : RoomDatabase() {

    /** DAO used to query and mutate [FileResourceInfo] rows. */
    abstract fun fileResourceInfoDao(): FileResourceInfoDao

    companion object {
        // Table and column names for the fileResourceInfo table.
        const val TABLE_NAME = "fileResourceInfo"
        const val COLUMN_ID = "_id"
        const val COLUMN_LENGTH = "_length"
        const val COLUMN_FILE = "_file"
        const val COLUMN_NAME = "_name"
        const val COLUMN_EXTRAS = "_customData"
        const val COLUMN_MD5 = "_md5"

        // Schema versions and paging limit.
        const val OLD_DATABASE_VERSION = 0
        const val DATABASE_VERSION = 1
        const val MAX_PAGE_SIZE = 100
    }
} | {
"pile_set_name": "Github"
} |
******************************************************
The ‘‘officially released’’ date that appears near the beginning of each opinion is the date the opinion will be published in the Connecticut Law Journal or the date it was released as a slip opinion. The operative date for the beginning of all time periods for filing postopinion motions and petitions for certification is the ‘‘officially released’’ date appearing in the opinion. In no event will any such motions be accepted before the ‘‘officially released’’ date.

All opinions are subject to modification and technical correction prior to official publication in the Connecticut Reports and Connecticut Appellate Reports. In the event of discrepancies between the electronic version of an opinion and the print version appearing in the Connecticut Law Journal and subsequently in the Connecticut Reports or Connecticut Appellate Reports, the latest print version is to be considered authoritative.

The syllabus and procedural history accompanying the opinion as it appears on the Commission on Official Legal Publications Electronic Bulletin Board Service and in the Connecticut Law Journal and bound volumes of official reports are copyrighted by the Secretary of the State, State of Connecticut, and may not be reproduced and distributed without the express written permission of the Commission on Official Legal Publications, Judicial Branch, State of Connecticut.
******************************************************
STATE OF CONNECTICUT v. ANTONIO J. INGLIS
(AC 35750)
DiPentima, C. J., and Gruendel and Alvord, Js.
Argued March 20—officially released July 1, 2014
(Appeal from Superior Court, judicial district of Middlesex, Clifford, J.)

Conrad Ost Seifert, assigned counsel, for the appellant (defendant).

Melissa L. Streeto, senior assistant state’s attorney, with whom, on the brief, were Peter A. McShane, state’s attorney, and Timothy J. Liston, former state’s attorney, for the appellee (state).
Opinion
GRUENDEL, J. The defendant, Antonio J. Inglis, appeals from the judgment of conviction, rendered after a jury trial, of two counts of murder in violation of General Statutes § 53a-54a (a),1 and one count each of capital felony in violation of General Statutes § 53a-54b (7),2 assault in the first degree in violation of General Statutes § 53a-59 (a) (5),3 and carrying a pistol without a permit in violation of General Statutes § 29-35 (a).4 The defendant claims that the court improperly declined (1) to instruct the jury in accordance with two of his proposed eyewitness identification instructions, and (2) to provide a third party culpability instruction to the jury.5 We affirm the judgment of the trial court.

The jury reasonably could have found that, in the early hours of February 10, 2008, an altercation ensued at the Cocktails on the Green nightclub (club) in Cromwell that left two men dead and another wounded. The altercation began when the defendant repeatedly antagonized one of the victims, Tyrese Lockhart, a patron seated at the bar with friends. Lockhart and his friends eventually confronted the defendant and asked him to leave Lockhart alone. A group of the defendant’s friends that included his brother, Daren Walls, likewise encouraged the defendant to leave Lockhart alone. When Israel Dandrade, a disc jockey who was performing at the club that evening, announced ‘‘last call’’ soon thereafter, Lockhart headed toward an exit with friends. At that moment, the defendant brandished a chrome revolver and fired several shots in Lockhart’s direction. One shot struck Lockhart in the head, another struck Dandrade in the eye, and a third grazed the cheek of Kenneth Lewis, a cook at the club. Lockhart and Dandrade died as a result of their respective gunshot wounds.

The defendant subsequently was arrested and charged with the aforementioned offenses. A jury trial followed, at which the state presented eyewitness testimony from multiple individuals identifying the defendant as the shooter.6 The theory advanced by the defense was that, due to the facial similarity between Walls and the defendant, those witnesses could not distinguish between the two brothers to properly identify the shooter.7 At the conclusion of trial, the jury found the defendant guilty on all counts. The court rendered judgment in accordance with that verdict and sentenced the defendant to a total effective term of life imprisonment without the possibility of release, plus twenty five years.8 From that judgment, the defendant now appeals.
I

The defendant alleges instructional error on the issue of eyewitness identification. Specifically, he claims that the court improperly declined to instruct the jury in accordance with two of his proposed instructions regarding ‘‘identification based on own recollection’’ and ‘‘honest mistake.’’9

Practice Book § 42-18, which specifies the form and content requirements of requests to charge, provides in relevant part that ‘‘[w]hen there are several requests, they shall be in separate and numbered paragraphs, each containing a single proposition of law clearly and concisely stated with the citation of authority upon which it is based, and the evidence to which the proposition would apply. . . .’’ (Emphasis added.) As our Supreme Court repeatedly has explained, ‘‘[w]hile this court does not favor unyielding adherence to rules of procedure where the interests of justice are thereby disserved . . . the ever increasing refinement of our law justifies cooperation of counsel in stating requests for jury instruction. The minor burden of cooperation imposed by [Practice Book § 42-18] is neither unreasonable nor novel.’’ (Internal quotation marks omitted.) State v. Corbin, 260 Conn. 730, 747, 799 A.2d 1056 (2002).

It is undisputed that the defendant did not comply with the prerequisites of Practice Book § 42-18. His request to charge on eyewitness identification did not cite to any legal authority, nor did it specify any evidence to which the propositions allegedly applied. Significantly, this is not a case in which the record contains ‘‘substantial additional support . . . such as detailed colloquies with the court and opposing counsel and a postcharge exception [indicating that] . . . the trial court is informed adequately of the factual and legal bases for the instructional request.’’ State v. Smith, 262 Conn. 453, 466, 815 A.2d 1216 (2003). Rather, the record before us is bereft of any discussion of this specific issue; the defendant did not raise it during the charging conference or take a postcharge exception. The court, therefore, properly could have denied those requests to charge on the basis of the defendant’s noncompliance with § 42-18. See State v. Bettini, 11 Conn. App. 684, 690, 528 A.2d 1180 (‘‘[i]n the absence of compliance with the rules of practice, the trial court is entitled to deny a request to charge’’), cert. denied, 205 Conn. 804, 531 A.2d 937 (1987); accord State v. Tomasko, 238 Conn. 253, 262–63, 681 A.2d 922 (1996) (trial court properly denied request to charge that did not comply with rules of practice).

The defendant also argues that his claim is reviewable pursuant to State v. Golding, 213 Conn. 233, 239–40, 567 A.2d 823 (1989). He is mistaken. As this court has observed, ‘‘[n]ot every claim of instructional error is constitutional in nature. State v. LaBrec, 270 Conn. 548, 557, 854 A.2d 1 (2004). Our Supreme Court repeatedly has noted that it has recognized instructional claims as raising constitutional issues only in matters relating to the elements of an offense, burden of proof and the presumption of innocence. Id.; see also State v. Schiappa, 248 Conn. 132, 165, 728 A.2d 466, cert. denied, 528 U.S. 862, 120 S. Ct. 152, 145 L. Ed. 2d 129 (1999); State v. Dash, 242 Conn. 143, 151–52, 698 A.2d 297 (1997); State v. Walton, 227 Conn. 32, 64–65, 630 A.2d 990 (1993). The defendant’s claim does not pertain to the elements of the offenses in question, the state’s burden of proof or the presumption of innocence, nor does the defendant make such an argument. Accordingly, it does not merit Golding review.’’ State v. Antwon W., 118 Conn. App. 180, 201, 982 A.2d 1112 (2009), cert. denied, 295 Conn. 922, 991 A.2d 568 (2010). That logic applies equally in the present case.

Claims pertaining to the adequacy of a court’s instructions on misidentification are not constitutional in nature. See State v. Cerilli, 222 Conn. 556, 567, 610 A.2d 1130 (1992) (identification instruction not constitutionally required); State v. Tillman, 220 Conn. 487, 501, 600 A.2d 738 (1991) (‘‘[e]ven if the court’s instructions were less informative on the risks of misidentification than they might have been, the issue is at most one of instructional error rather than of constitutional error’’), cert. denied, 505 U.S. 1207, 112 S. Ct. 3000, 120 L. Ed. 2d 876 (1992); State v. Anderson, 20 Conn. App. 271, 281, 566 A.2d 436 (1989) (‘‘there is no constitutional right to an instruction on the fallibility of eyewitness identifications’’), cert. denied, 213 Conn. 813, 569 A.2d 549 (1990). As such, the defendant cannot satisfy the second prong of Golding.
II

The defendant also claims that the court committed reversible error when it declined to provide a third party culpability instruction to the jury. We disagree.

The following additional facts, which the jury reasonably could have found, are relevant to this claim. Walls was the defendant’s brother and bore a strong facial resemblance to him. He did not physically resemble the defendant. Unlike the defendant, who stood five feet, seven inches tall with a ‘‘husky’’ and ‘‘more muscular’’ build, Walls was five feet, ten inches tall and had a ‘‘slim’’ physique. At the time of the shooting, Walls’ hair was braided in cornrows, whereas the defendant’s hair was short and curly.10 The two also were dressed differently at that time. The defendant wore a black knit cap, a baggy grey jacket with yellow trim, jeans, and tan boots. By contrast, Walls had on a fitted and light-colored jacket with a large emblem on the upper left chest, jeans, and no cap.

Lockhart was seated at the bar when the defendant began antagonizing him. After several minutes, Lockhart turned around and said, ‘‘I don’t even know who you are, who are you, leave me alone . . . what is the problem?’’ As Lockhart turned back to the bar to finish his drink, Walls intervened and attempted to calm the defendant. Walls told the defendant to ‘‘let it go’’ and made a ‘‘calm down’’ gesture with his hands. The defendant nevertheless refused to ‘‘let it go’’ and remained agitated. Walls continued his efforts to calm the defendant, telling him to ‘‘chill, just let it go, back up . . . .’’11 Lockhart was fatally shot soon thereafter.

At trial, the defendant submitted a request to charge that sought, inter alia, an instruction on third party liability.12 During the charging conference, defense counsel explained why he thought that instruction was appropriate, stating: ‘‘There’s a lot of controversy as to—with respect to where the shooter was and who was shooting. . . . Certainly, based on the testimony that’s brought out [Walls] as being—looking similar to [the defendant with] one witness saying he looks exactly alike, we believe it’s more than appropriate for the court to give such an instruction . . . .’’ The court responded: ‘‘The only reason I disagree with that [is] your classic third party culpability is usually the defense . . . after some kind of evidentiary hearing or motion, is attempting to put in evidence of third party culpability that someone else had the motive and opportunity and there has to be corroboration, et cetera, that someone else may have committed the crime. It’s usually somebody totally independent. Many times it happens in those cases that are real ‘whodunit’ type of a case. A situation here which, to me, is fairly typical . . . there’s an issue where there is a shooting and there may be more than one group involved and there may be an issue [as] to who, in fact, pulled a trigger. And I don’t see that as classic third party culpability because, in this case, and I do feel it’s appropriate, I’m giving a more extensive charge on identification of the person who actually caused the death of the two individuals here. And I think that type of charge, basically, covers it. . . . I don’t think factually, there’s much of a question that two people died as a result of gunshot wounds, but the main issue for [the jury] is who, in fact, did that? So, I think that covers it adequately. I really don’t see it as a classic third party culpability [situation], and I think the instructions are adequate.’’

The charge ultimately provided to the jury contained detailed instructions on eyewitness identification, which the defendant concedes comport with the model instructions provided on the Judicial Branch website. In addition, the court specifically instructed the jury that ‘‘you must be satisfied beyond a reasonable doubt of the accuracy of the identification of the defendant before you may find him guilty on any charge. In short, you must consider the totality of the circumstances effecting the identification. Remember, the state has the burden to not only prove every element of the crime, but also the identity of the defendant as the perpetrator of the crime. You must be satisfied beyond a reasonable doubt of the identity of the defendant as the one who committed the crime or you must find the defendant not guilty.’’
Our Supreme Court outlined the standards applicable to claims concerning jury instructions on third party culpability in State v. Arroyo, 284 Conn. 597, 607–610, 935 A.2d 975 (2007). It stated in relevant part: ‘‘[A] defendant has a right to introduce evidence that indicates that someone other than the defendant committed the crime with which the defendant has been charged. . . . The defendant must, however, present evidence that directly connects a third party to the crime. . . . It is not enough to show that another had the motive to commit the crime . . . nor is it enough to raise a bare suspicion that some other person may have committed the crime of which the defendant is accused.’’ (Internal quotation marks omitted.) Id., 609. ‘‘Because the standards governing the admissibility of third party culpability evidence require that the trial court determine that such evidence be relevant to the jury’s determination of whether a reasonable doubt exists as to the defendant’s guilt, we conclude that those same standards should govern whether a trial court should give an appropriate instruction on third party culpability. Put another way, if the evidence pointing to a third party’s culpability, taken together and considered in the light most favorable to the defendant, establishes a direct connection between the third party and the charged offense, rather than merely raising a bare suspicion that another could have committed the crime, a trial court has a duty to submit an appropriate charge to the jury.’’13 Id., 610. A trial court’s determination as to whether the evidence in a given case establishes a direct connection between the third party and the criminal offense is subject to the abuse of discretion standard of review. State v. Jackson, 304 Conn. 383, 424, 40 A.3d 290 (2012).

The defendant maintains that his request for a third party culpability instruction was appropriate in light of the evidence that (1) ‘‘Walls and the defendant look alike, Walls was present when the shooting occurred, and Walls had a motive to shoot Lockhart, namely, that he was in a group that got in a confrontation with Lockhart and his group’’; and (2) at least one witness testified that the shooter’s hair was braided in cornrows.14 For the following reasons, that claim is untenable.
A
As discussed in part I of this opinion, Practice Book
§ 42-18 obligated the defendant to apprise the court of
the evidentiary basis of the proposed charge. His writ-
ten request to charge failed to do so, as it contained no
reference to any evidence whatsoever. At the charging
conference, defense counsel offered the following evi-
dentiary basis for his proposed third party culpability
instruction: ‘‘Certainly, based on the testimony that’s
brought out [Walls] as being—looking similar to the
[defendant with] one witness saying he looks exactly
alike, we believe it more than appropriate for the court
to give such an instruction . . . .’’ The mere fact that
Walls bore a facial resemblance to the defendant and
was present at the club does not establish ‘‘a direct
connection between the third party and the charged
offense, rather than merely raising a bare suspicion that
another could have committed the crime, [such that] a
trial court has a duty to submit an appropriate charge
to the jury’’; State v. Arroyo, supra, 284 Conn. 610;
particularly when the jury heard ample testimony that
Walls attempted to calm the defendant and to defuse
the situation immediately prior to the shooting. Accord-
ingly, we cannot say that the court improperly declined
to instruct the jury on the proposed charge when the
evidentiary basis proffered by the defendant plainly did
not meet that standard.
B
To the extent that the defendant’s claim on appeal
is predicated on witness testimony allegedly identifying
the shooter as one with cornrows; see footnote 14 of
this opinion; he cannot prevail. That distinct claim was
never presented to the trial court as a basis for the
request to charge. As a result, the court did not have
an opportunity to rule on this matter. ‘‘[I]t is well estab-
lished that [o]ur rules of procedure do not allow a
[party] to pursue one course of action at trial and later,
on appeal, argue that a path [the party] rejected should
now be open to him. . . . To rule otherwise would
permit trial by ambuscade.’’ (Internal quotation marks
omitted.) State v. Fourtin, 307 Conn. 186, 208, 52 A.3d
674, 688 (2012); see also Practice Book § 60-5 (appellate
courts ‘‘shall not be bound to consider a claim unless
it was distinctly raised at the trial’’). For that reason,
‘‘[o]nly in [the] most exceptional circumstances can
and will this court consider a claim, constitutional or
otherwise, that has not been raised and decided in the
trial court.’’ (Internal quotation marks omitted.) State
v. Canales, 281 Conn. 572, 579, 916 A.2d 767 (2007).
Furthermore, Golding review of this unpreserved
claim is unwarranted, as it ‘‘does not pertain to the
elements of the offenses in question, the state’s burden
of proof or the presumption of innocence . . . .’’ State
v. Antwon W., supra, 118 Conn. App. 201. The defendant
nevertheless relies on State v. Small, 242 Conn. 93, 104,
700 A.2d 617 (1997), which held that ‘‘a defendant who
has produced evidence supporting a legally recognized
defense is entitled, as a matter of law, to a theory of
defense instruction, and that the denial of such an
instruction is a violation of due process.’’ (Emphasis
added.) A fortiori, to establish a claim of constitutional
magnitude under Small, the requested charge must
implicate a legally recognized defense. See State v.
Rosado, 178 Conn. 704, 708, 425 A.2d 108 (1979) (‘‘only
when evidence indicating the availability of [a] legally
recognized [defense] is placed before a jury is a defen-
dant entitled as a matter of law to a theory of defense
instruction’’ [emphasis omitted]). Examples of legally
recognized defenses include entrapment; State v.
Golodner, 305 Conn. 330, 351, 46 A.3d 71 (2012); self
defense; State v. Havican, 213 Conn. 593, 603, 569 A.2d
1089 (1990); State v. Fletcher, 10 Conn. App. 697, 707,
525 A.2d 535 (1987), aff’d, 207 Conn. 191, 540 A.2d 370
(1988); duress; State v. Fuller, 199 Conn. 273, 277, 506
A.2d 556 (1986); and affirmative defenses. State v.
Small, supra, 242 Conn. 102. It is well established that
‘‘[a] claim of innocence or a denial of participation in
the crime charged is not a legally recognized defense
and does not entitle a defendant to a theory of defense
charge.’’ State v. Rosado, supra, 707; accord State v.
Golodner, supra, 352. A claim of third party culpability
is a denial of participation in the crime and, hence, not
a legally recognized defense. The defendant therefore
is not entitled to Golding review of that unpreserved
claim.
The judgment is affirmed.
In this opinion the other judges concurred.
1 General Statutes § 53a-54a provides in relevant part: ‘‘(a) A person is
guilty of murder when, with intent to cause the death of another person,
he causes the death of such person or a third person . . . .’’
2 General Statutes § 53a-54b provides in relevant part: ‘‘A person is guilty
of a capital felony who is convicted of any of the following . . . (7) murder
of two or more persons at the same time or in the course of a single
transaction . . . .’’
3 General Statutes § 53a-59 (a) provides in relevant part: ‘‘A person is
guilty of assault in the first degree when . . . (5) with intent to cause
physical injury to another person, he causes such injury to such person or
to a third person by means of the discharge of a firearm.’’
4 General Statutes § 29-35 (a) provides in relevant part: ‘‘No person shall
carry any pistol or revolver upon his or her person, except when such person
is within the dwelling house or place of business of such person, without
a permit to carry the same issued as provided in section 29-28. . . .’’
5 The defendant also invites this court to exercise its supervisory authority
over the administration of justice to require more detailed eyewitness identi-
fication jury instructions. We decline that invitation.
6 Brothers Maurice Overton and Andre Overton were at the club at all
relevant times. Maurice Overton testified at trial that he saw the defendant
holding a gun at the time of the shooting. Andre Overton similarly testified
that when the gunshots rang out, he turned and saw the defendant holding
a chrome gun in his hand. Andre Overton was approximately five feet behind
the defendant at that time. Qualnisha Lowe also identified the defendant as
the shooter at trial. She testified that, at the time of the shooting, she was
two feet from the defendant and ‘‘looked right in his face.’’ Nestor Diaz
testified that at the time of the shooting, he was approximately five feet
from the person holding the gun and was ‘‘confident’’ in his identification
of the defendant as the shooter.
Dana Middleton was socializing with Lockhart at the club and witnessed
the defendant antagonizing Lockhart prior to the shooting. He testified that
the defendant was approximately ten feet away and ‘‘kept dancing around
and pointing his fingers and grabbing his meat and making gestures like he
was making . . . threats, basically.’’ The defendant then approached Lock-
hart and Middleton and stated that ‘‘[t]hese motherfuckers don’t want it
with us. They don’t want no problems.’’ Middleton responded, ‘‘Yeah, you’re
right. Don’t nobody really want no problems.’’ As tensions rose, the defendant
was ‘‘[a]cting like a monkey; basically, like a monkey that wants to fight
somebody or have problems. . . . [A]cting like he’s ready to do something.
He’s ready to fight. He’s amped up. He can’t stand still.’’ The defendant
continued to gesture at Lockhart. Moments later as Middleton and Lockhart
were leaving the bar, Middleton heard gunshots and then saw Lockhart on
the ground with a hole in his head and brain matter on the floor. Middleton
then saw the defendant approximately five feet away holding a gun that
was pointed in his direction.
7 In his closing argument, defense counsel stated in relevant part that
‘‘[t]his is a misidentification case . . . . And you know it is the defense’s
assertion that [Walls] is the shooter in this case.’’
8 Specifically, the court sentenced the defendant to a term of life imprisonment
without the possibility of release on the capital murder charge, with
which it merged the two murder counts. The court then sentenced the
defendant to a twenty year term of incarceration on the assault count and
a five year term of incarceration on the carrying a pistol without a permit
count, both of which the court ordered to run consecutively to the other sen-
tences.
9 In his principal appellate brief, the defendant acknowledges that ‘‘[t]he
court’s instructions comport with the model instructions published on the
state of Connecticut’s Judicial [Branch] website . . . .’’
10 Cornrows is ‘‘a hairstyle in which the hair is divided into sections and
braided close to the scalp in rows.’’ State v. Elliston, 86 Conn. App. 479,
481 n.2, 861 A.2d 563 (2004), cert. denied, 273 Conn. 906, 868 A.2d 746 (2005).
11 Diaz testified at trial that ‘‘[r]ight before the shot went off . . . someone
was talking to [the defendant] trying to calm him down, or it seemed like
they were trying to, you know, talk some sense into him or something like
that.’’ Yakeima Blake testified that it was a taller ‘‘guy with the braids’’ who
attempted to calm the individual antagonizing Lockhart by the bar. Middleton
likewise testified that, as the defendant antagonized Lockhart, ‘‘there was
a person right next to him. . . . He looked like a taller version of [the
defendant].’’
12 The requested charge stated: ‘‘You have heard evidence in this case
from several eyewitnesses that someone other than [the defendant] commit-
ted these crimes. This type of evidence is known as third party guilt. As I
already made clear to you, the state has the burden of proving the defendant’s
guilt beyond a reasonable doubt. It must prove all the elements of the crime,
including that it was the defendant, and not some other person, who was
the perpetrator. This burden rests on the state at all times; the defendant
has no burden of proof whatsoever, on this or any other issue. The question
presented by third party culpability evidence is not whether the guilt of
another person has been proven, but whether, after a full consideration of
all the evidence in this case, there is a reasonable doubt that [the defendant]
was the perpetrator. Evidence that a third party may have committed this
crime may, if credited, tend to raise a reasonable doubt as to whether the
state has met its required burden to prove the identity of the defendant as
the perpetrator. If, after considering all of the evidence, you have a reason-
able doubt as to the defendant’s guilt, you must find the defendant not
guilty. See generally State v. Echols, [203 Conn. 385, 524 A.2d 1143] (1987).’’
13 In his principal appellate brief, the defendant asks us to revisit that
authority, arguing that ‘‘imposing the ‘direct connection’ requirement is
overly restrictive.’’ We decline to do so. As an intermediate appellate tribunal,
this court is not free to depart from or modify the precedent of our Supreme
Court. See Hartford Steam Boiler Inspection & Ins. Co. v. Underwriters
at Lloyd’s & Cos. Collective, 121 Conn. App. 31, 48–49, 994 A.2d 262, cert.
denied, 297 Conn. 918, 996 A.2d 277 (2010).
14 At trial, Detective Denise LaMontagne of the Cromwell Police Department
acknowledged that, as part of her investigation into the shooting, she
had a description from ‘‘at least one, maybe two witnesses, that the shooter
had cornrows.’’ Also, during his cross-examination, the following colloquy
transpired between Diaz and defense counsel:
‘‘[Defense Counsel]: Your statement also says that the shooter had corn-
rows coming down the back of his head. Correct?
‘‘[Diaz]: Yes, that it might—that it might be, yeah. Curly hair; might have
been cornrows.
‘‘[Defense Counsel]: Is your testimony that your statement says he might
have had cornrows or that he did?
‘‘[Diaz]: If I can read the statement, I’ll clarify it.
‘‘[Defense Counsel]: Certainly. You said either maybe or he did—was
wearing cornrows hanging out of his hat, though. Correct?
‘‘[Diaz]: From what I can remember, yeah.’’
Solar and battery systems are gaining in popularity for homes, like this one in Los Altos, California. But more will be needed to meet renewable, clean energy needs in the future. (Dai Sugano/Bay Area News Group/TNS)
By The Herald Editorial Board
Ten years ago, Washington state voters, recognizing the imperative need to address climate change and cleaner air, passed Initiative 937, which requires the state’s electrical utilities to use renewable resources for a portion of the electricity they provide to their customers.
Many utilities, public and private, balked. But since passage those utilities have been able to meet the initiative’s requirements. The portion of renewable energy that utilities provide has increased incrementally. Initially, utilities had to provide at least 3 percent of their loads from renewable sources by 2012, increasing to 9 percent by 2016. The requirement increases to its maximum level of 15 percent on Jan. 1, 2020.
Snohomish Public Utility District, for example, met its 9 percent renewable target for 2016, using a combination of wind, solar, biomass and even landfill gases or credits for those renewable sources. (Hydropower, while a renewable resource — and one heavily relied upon in the state — does not count toward a utility’s renewable requirement under the initiative.)
Despite predictions that the initiative would result in drastically higher electricity costs for residential, commercial and industrial customers, the state’s average retail price for electricity, according to data from the U.S. Energy Information Administration, rose by less than the rate of inflation for the eight years after the law was enacted. During that period, Washington state went from being the state with the seventh-lowest electrical cost to the second lowest.
As wind and solar technologies continue to advance and become more cost-effective, environmental groups and some lawmakers now want to build on that success by expanding the law’s reach to apply to all of the state’s electrical customers.
The Clean Energy First Act, House Bill 1334, would expand the renewable energy requirement to the state’s smaller utilities, those with fewer than 25,000 customers. Currently, the law applies only to utilities, such as Snohomish PUD, Puget Sound Energy and Seattle City Light, with more than 25,000 customers.
The act also would require renewable sources be used as utilities add capacity, preventing the use of fossil fuel-generated electricity, waste incineration and some hydroelectric as new sources. An exception would be made so that utilities could comply with mandatory standards for electrical power reliability.
The act also would require utility customers to pay a per-kilowatt charge to the utility to help fund conservation programs. The rate would be set by the state Utilities and Transportation Commission for private utilities and by the board of commissioners for individual public utilities.
During a Jan. 26 House Technology and Economic Development Committee hearing, some lawmakers and others shared their concerns regarding the legislation, including the ability of smaller utilities to meet the requirement, reliability of power supplies and the hidden carbon footprint of some renewable technologies because of the “rare earth” elements that they use.
Rep. Norma Smith, R-Clinton, who serves on the committee, has been a champion of finding alternatives to rare earth elements and was a leader in establishing a research program at Everett’s Washington State University North Puget Sound to develop those alternatives. We encourage that effort but believe that greater development of wind, solar and battery storage technologies can’t wait until rare earth alternatives are found.
Regarding energy reliability and the complexity of meeting the requirement for smaller utilities, amendments to the legislation should be able to address those concerns.
Washington state has made advances in the amount of wind and solar energy it uses, but there are opportunities for growth. The state gets about 7 percent of its electricity from wind generation, but no turbine projects are being built or are currently planned. Solar installations are growing as well, but solar still supplies only a fraction of what wind provides, and the state ranks 26th in the nation for installed solar capacity.
Along with cleaner air, the developments in both technologies are also providing more jobs in the state. Solar jobs increased in the state from 2,779 in 2015 to 4,118 in 2016, according to the Solar Foundation. Wind supports between 1,000 and 2,000 jobs in the state, according to the American Wind Energy Association.
While Washington state has long relied on hydroelectric dams as a clean and reliable source of power, those dams can’t be counted on to provide more than they currently do. The needs of agriculture, salmon and water supplies will be increasingly balanced against hydropower.
The last 10 years have justified the voters’ confidence in the ability of renewable energy sources to provide a greater share of the electricity we use. The advancements made in the affordability and reliability of renewable sources should provide the confidence to expand their use even further.
---
abstract: 'We discuss various properties of the variational class of continuous matrix product states, a class of ansatz states for one-dimensional quantum fields that was recently introduced as the direct continuum limit of the highly successful class of matrix product states. We discuss both attributes of the physical states, *e.g.* by showing in detail how to compute expectation values, as well as properties intrinsic to the representation itself, such as the gauge freedom. We consider general translation non-invariant systems made of several particle species and derive certain regularity properties that need to be satisfied by the variational parameters. We also devote a section to the translation invariant setting in the thermodynamic limit and show how continuous matrix product states possess an intrinsic ultraviolet cutoff. Finally, we introduce a new set of states which are tangent to the original set of continuous matrix product states. For the case of matrix product states, this construction has recently proven relevant in the development of new algorithms for studying time evolution and elementary excitations of quantum spin chains. We thus lay the foundation for similar developments for one-dimensional quantum fields.'
author:
- Jutho Haegeman
- 'J. Ignacio Cirac'
- 'Tobias J. Osborne'
- Frank Verstraete
bibliography:
- 'paperslibrary.bib'
- 'manuallibrary.bib'
- 'books.bib'
title: Calculus of continuous matrix product states
---
Introduction
============
Many revolutions and breakthroughs in quantum physics, and quantum many body physics in particular, were stimulated by guessing a suitable variational ansatz that captures the relevant correlations for the systems under consideration. Feynman’s ansatz for the roton in superfluid Helium[@Feynman:1954aa; @Feynman:1956aa], the Bardeen-Cooper-Schrieffer wave function for superconductivity[@1957PhRv..106..162B] and the Laughlin wave function for the fractional quantum Hall effect[@PhysRevLett.50.1395] are only a few prominent examples. For gapped one-dimensional quantum spin systems, the set of matrix product states[@1987PhRvL..59..799A; @1988CMaPh.115..477A; @1992CMaPh.144..443F; @2008AdPhy..57..143V; @2009JPhA...42X4004C] is a very general ansatz that can describe a range of different phenomena and different physical phases, including normal symmetric and symmetry broken phases as well as the more exotic symmetry-protected topologically ordered phases such as the Haldane phase[@Haldane:1983aa; @Haldane:1983ab; @2010PhRvB..81f4439P]. Indeed, with the benefit of hindsight, we now understand White’s powerful density matrix renormalization group algorithm[@1992PhRvL..69.2863W; @1993PhRvB..4810345W] as a variational optimization over the set of matrix product states[@1995PhRvL..75.3537O; @1997PhRvB..55.2164R].
Until recently, few equally general ansatzes that surpass mean field theory were available for extended quantum systems in the continuum, *i.e.* quantum fields. Numerical approaches require a finite number of degrees of freedom in order to fit the problem in the memory of a computer. For compact systems such as nuclei, atoms and molecules, an expansion in terms of a finite-dimensional basis is possible, but for extended systems this eventually results in a discretization to an effective lattice system. A new variational ansatz for field theories in $d=1$ spatial dimensions was developed by Verstraete and Cirac in 2010 [@2010PhRvL.104s0405V]. This ansatz is formulated in the continuum and does not require an underlying lattice approximation. It can be considered to be the continuum limit of a special subclass of matrix product states (MPS) and is therefore called the *continuous matrix product state* (cMPS) class.
The aim of the current paper is to discuss in greater detail the properties of cMPS. Section \[s:def\] reviews the different definitions and representations of these states in the current literature. We then derive a set of regularity conditions that become relevant in the case of systems with multiple particle species in Section \[s:regularity\]. Section \[s:expectval\] discusses how to (efficiently) evaluate expectation values with respect to these states. Section \[s:gauge\] is devoted to the gauge invariance and the existence of canonical forms in the continuous matrix product state representation for generic systems without translation invariance. We also discuss uniform continuous matrix product states in the thermodynamic limit and illustrate how continuous matrix product states possess a natural ultraviolet cutoff in Section \[s:ti\]. Finally, Section \[s:tangent\] provides an intuitive construction of tangent vectors to the variational set and discusses their representation properties as well, both for finite systems and in the thermodynamic limit. These tangent states are relevant when studying time evolution or elementary excitations along the lines of analogous MPS algorithms [@2011arXiv1103.0936H; @2011arXiv1103.2286H; @2012PhRvB..85c5130P; @2012arXiv1207.0691M]. We do not strive for absolute mathematical rigor, but merely attempt to explain in full detail the prerequisites for using cMPS in numerical algorithms. For example, due to the intrinsic difficulty of the various infinite-dimensional function spaces involved, we do not include a rigorous proof that the set of continuous matrix product states constitutes a smooth (complex) manifold and that the construction of a tangent space is justified.
Various definitions of the variational class {#s:def}
============================================
Setting {#ss:def:setting}
-------
Consider a quantum system defined on a one-dimensional continuum ${\ensuremath{\mathcal{R}}}=[-L/2,+L/2]$ with length $\lvert{\ensuremath{\mathcal{R}}}\rvert=L$ that accommodates $q$ bosonic and/or fermionic particle species, which are labeled by the greek index $\alpha=1,\ldots,q$. Throughout this paper, we restrict to non-relativistic systems. A state of the quantum system containing $N_{\alpha}$ particles of type $\alpha$ is then described by a square integrable function on $\prod_{\alpha=1}^{q}{\ensuremath{\mathcal{R}}}^{(N_{\alpha})}_{\eta_{\alpha}}$, where $\eta_{\alpha}=+1$ ($-1$) if particle species $\alpha$ is bosonic (fermionic) and ${\ensuremath{\mathcal{R}}}^{(N_{\alpha})}_{+}$ (${\ensuremath{\mathcal{R}}}^{(N_{\alpha})}_{-}$) corresponds to the symmetric (antisymmetric) subspace of ${\ensuremath{\mathcal{R}}}^N$, the Cartesian product of $N$ copies of ${\ensuremath{\mathcal{R}}}$. The space of the square integrable functions on this domain is a Hilbert space that is denoted as $${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{\{N_{\alpha}\}_{\alpha=1,\ldots,q}}=L^2\left(\prod_{\alpha=1}^{q}{\ensuremath{\mathcal{R}}}^{(N_{\alpha})}_{\eta_{\alpha}}\right).\label{eq:defNalphaspace}$$ Following the principles of second quantization, we now define the Fock space $${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}=\bigoplus_{N_1=0}^{+\infty}\cdots \bigoplus_{N_q=0}^{+\infty}{\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{\{N_{\alpha}\}_{\alpha=1,\ldots,q}}\label{eq:deffockspace}$$ which captures an arbitrary state of the quantum system. In addition, we denote the unique vacuum state as $\ket{\Omega}\in {\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{\{N_\alpha=0\}_{\alpha=1,\ldots,q}}$. 
Particles of type $\alpha$ are created and annihilated at position $x\in{\ensuremath{\mathcal{R}}}$ with the operators ${\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x)$ and ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$ with $\alpha=1,\ldots,q$. These satisfy the general commutation or anticommutation relations $$\begin{aligned}
{\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)-\eta_{\alpha,\beta} {\ensuremath{\hat{\psi}}}_{\beta}(y){\ensuremath{\hat{\psi}}}_{\alpha}(x)&=0,&{\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}^\dagger}}_{\beta}(y)-\eta_{\alpha,\beta} {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(y){\ensuremath{\hat{\psi}}}_{\alpha}(x)&=\delta_{\alpha,\beta}\delta(x-y),\label{eq:commrelations}\end{aligned}$$ where $\eta_{\alpha,\beta}=-1$ if both $\alpha$ and $\beta$ represent fermionic particles and $\eta_{\alpha,\beta}=1$ when at least one of the two particles species $\alpha$ or $\beta$ is bosonic. Clearly $\eta_{\alpha,\alpha}=\eta_{\alpha}$. We always write sums over the species index $\alpha$ explicitly and do not use Einstein’s summation convention with respect to this index.
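These relations can be checked concretely in a finite-dimensional setting. The sketch below is our own illustration (the helper names are ours, not part of the paper's formalism): a single bosonic mode on a truncated Fock space obeys $[b,b^{\dagger}]={\openone}$ on all levels below the truncation cutoff, while a single fermionic mode obeys $\{c,c^{\dagger}\}={\openone}$ exactly. Upon discretizing ${\ensuremath{\mathcal{R}}}$ with spacing $\varepsilon$, the field operators correspond to such mode operators via ${\ensuremath{\hat{\psi}}}_{\alpha}(x_i)\approx a_{\alpha,i}/\sqrt{\varepsilon}$, so that the $\delta(x-y)$ on the right-hand side becomes $\delta_{ij}/\varepsilon$.

```python
import numpy as np

def boson_annihilation(n_max):
    # Truncated bosonic annihilation operator b|n> = sqrt(n)|n-1> on levels 0..n_max.
    return np.diag(np.sqrt(np.arange(1.0, n_max + 1)), k=1)

n_max = 40
b = boson_annihilation(n_max)
bdag = b.conj().T
boson_comm = b @ bdag - bdag @ b  # [b, b†]

# The canonical commutator holds exactly on all levels below the truncation cutoff;
# the deviation is confined to the last (truncated) level.
assert np.allclose(boson_comm[:n_max, :n_max], np.eye(n_max))

# A single fermionic mode: c|1> = |0>, c|0> = 0, and {c, c†} = 1 holds exactly.
c = np.array([[0.0, 1.0],
              [0.0, 0.0]])
fermion_anticomm = c @ c.T + c.T @ c
assert np.allclose(fermion_anticomm, np.eye(2))
```

The truncation artifact at the top level (where the commutator picks up the value $-n_{\max}$ instead of $1$) is the usual price of representing an infinite-dimensional bosonic mode by a finite matrix.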
Original definition {#ss:def:original}
-------------------
A cMPS is defined to be the state [@2010PhRvL.104s0405V] $$\begin{gathered}
\ket{\Psi[Q,R_{1},\ldots,R_{q}]}{\ensuremath{\triangleq}}\operatorname{tr}\left(B \mathscr{P}\!\exp\left[\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, Q(x)\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\alpha=1}^{q}R_{\alpha}(x) \otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x) \right]\right)\ket{\Omega},\label{eq:defcmps}\end{gathered}$$ where $\mathscr{P}\!\exp$ is the path ordered exponential (that orders its argument from left to right for increasing values of $x$) and $\ket{\Omega}$ is the empty vacuum that is annihilated by ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$, $\forall \alpha=1,\ldots,N$. The trace operation acts on an auxiliary space $\mathbb{C}^D$, also called the ancilla space, where $D$ is the bond dimension. The variational parameters correspond to the functions $Q, R_{\alpha}: {\ensuremath{\mathcal{R}}}\to \mathbb{C}^{D\times D}$ that take value in $\mathbb{L}(\mathbb{C}^D){\ensuremath{\triangleq}}\mathbb{C}^{D\times D}$, the space of linear operators acting on the ancilla space. For now, we do not impose any continuity or regularity conditions on these functions, and we refer to Section \[s:regularity\] for a detailed discussion. Finally, the boundary operator $B\in \mathbb{L}(\mathbb{C}^D)$ encodes the boundary conditions. For a system with periodic boundary conditions the boundary operator has full rank and is typically chosen to be $B={\openone}_{D}$. In case of open boundary conditions, we can choose $B=\bm{v}_{\mathrm{R}}\bm{v}^{\dagger}_{\mathrm{L}}$ with $\bm{v}_{\mathrm{L}}$ and $\bm{v}_{\mathrm{R}}$ $D$-dimensional boundary vectors. Note that the matrix functions $Q$ and $R_{\alpha}$ themselves need to satisfy certain boundary conditions which are imposed by the physical setting. We discuss this in more detail in Section \[s:bc\].
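In numerical work the path-ordered exponential can be approximated by slicing the interval into small steps and multiplying the first-order factors ${\openone}+Q(x)\,{\ensuremath{\mathrm{d}}}x$ from left to right for increasing $x$. The following sketch is our own (function names are illustrative, not an algorithm from this paper); it checks the product against the ordinary matrix exponential for $x$-independent $Q$, and evaluates the vacuum amplitude $\braket{\Omega|\Psi}=\operatorname{tr}[B\,\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int Q}]$, which is all that survives of Eq. \[eq:defcmps\] after projecting onto $\bra{\Omega}$, since every ${\ensuremath{\hat{\psi}^\dagger}}_{\alpha}$ term annihilates the overlap.

```python
import numpy as np

def expm(A):
    # Matrix exponential via eigendecomposition (fine for generic diagonalizable A).
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

def path_ordered_exp(Q, a, b, n_steps=8000):
    """First-order approximation of P exp(int_a^b Q(x) dx): an ordered product
    of (1 + Q(x_k) dx), with later slices (larger x) multiplying on the right."""
    dx = (b - a) / n_steps
    D = Q(a).shape[0]
    M = np.eye(D, dtype=complex)
    for k in range(n_steps):
        x = a + (k + 0.5) * dx
        M = M @ (np.eye(D) + dx * Q(x))
    return M

# For x-independent Q the path ordering is trivial and P exp reduces to expm(Q L).
rng = np.random.default_rng(0)
D, L = 3, 1.0
Q0 = 0.5 * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
M = path_ordered_exp(lambda x: Q0, -L / 2, L / 2)
assert np.allclose(M, expm(L * Q0), atol=5e-3)

# Vacuum amplitude <Omega|Psi> = tr(B M), e.g. with B = identity (periodic b.c.).
B = np.eye(D)
vacuum_amplitude = np.trace(B @ M)
```

The first-order product converges linearly in the step size; higher-order splittings would do better, but the point here is only the ordering convention of $\mathscr{P}\!\exp$.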
More formally, we can identify the cMPS construction as a map between the function spaces ${\ensuremath{\mathcal{R}}}\to \mathbb{C}^{D\times D}$ and the Fock space ${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$: $$\begin{split}
\Psi:&({\ensuremath{\mathcal{R}}}\to \mathbb{C}^{D\times D})^{q+1} \to {\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}:\\
&\qquad(Q,R_1,\ldots,R_q)\mapsto \ket{\Psi[Q,R_1,\ldots,R_q]}.
\end{split}$$ The range of the map $\Psi$ defines a variational set ${\ensuremath{\mathcal{V}}}_{\mathrm{cMPS}(D)}\subset {\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$, where we often omit the explicit specification of the bond dimension. Henceforth, we compactly denote a cMPS $\ket{\Psi[Q,R_{1},\ldots,R_{q}]}$ as $\ket{\Psi[Q,\{R_{\alpha}\}]}$. It will always be clear from the context how many and which particle species are present. The variational set ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$ is not a vector space, since the representation of the sum of two elements $\ket{\Psi[Q,\{R_{\alpha}\}]}+\ket{\Psi[Q',\{R_{\alpha}'\}]}$ requires in the most general case a cMPS $\ket{\tilde{\Psi}[\tilde{Q},\{\tilde{R}_{\alpha}\}]}\in{\ensuremath{{\ensuremath{\mathcal{M}}}}}_{\text{cMPS}(\tilde{D})}$ with bond dimension $\tilde{D}=2D$, where we choose ($\forall x\in[-L/2,+L/2]$) $$\begin{aligned}
\tilde{Q}(x)&=Q(x)\oplus Q'(x),\\
\tilde{R}_{\alpha}(x)&=R_{\alpha}(x)\oplus R_{\alpha}'(x),&\forall \alpha=1,\ldots,q\\
\tilde{B}&=B\oplus B'.\end{aligned}$$ The variational set does however contain almost complete rays of states, since for any state $\ket{\Psi[Q,\{R_{\alpha}\}]}\in{\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$ and any $\lambda\in\mathbb{C}_{0}=\mathbb{C}\setminus\{0\}$ we can also represent $\lambda\ket{\Psi[Q,\{R_{\alpha}\}]}$ as a cMPS with bond dimension $D$ as $\ket{\Psi[Q',\{R'_{\alpha}\}]}$, where $Q'(x)=Q(x)+\mu(x) {\openone}_{D}$ and $R_{\alpha}'(x)=R_{\alpha}(x)$ with $$\exp\left(\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\mu(x)\right)=\lambda.$$ A special case is obtained for $\lambda=0$, since this requires us to redefine $Q(x)$ as $Q'(x)=Q(x)-\infty {\openone}_{D}$. Hence, the null state is not contained within ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$ but only in its closure. Correspondingly, the variational set ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D')}$ with $D'<D$ is not a subset of ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$. For example, if the boundary matrices are fixed to $B'={\openone}_{D'}$ and $B={\openone}_{D}$ (periodic boundary conditions), then a representation of the cMPS $\ket{\Psi'[Q',\{R_{\alpha}'\}]}$ with bond dimension $D'$ as a cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$ with bond dimension $D>D'$ requires $Q=Q'\oplus (-\infty \times {\openone}_{D-D'})$ and $R_{\alpha}=R_{\alpha}'\oplus (0\times {\openone}_{D-D'})$, hence ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D')}$ is only included in the closure of ${\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$. Note that this differs from the case of MPS on the lattice, where ${\ensuremath{\mathcal{V}}}_{\text{MPS}(D')}\subset {\ensuremath{\mathcal{V}}}_{\text{MPS}(D)}$ for $D\geq D'$.
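Both representation facts — addition of two cMPS via direct sums at bond dimension $2D$, and rescaling by $\lambda$ via the shift $Q\to Q+\mu{\openone}_{D}$ — can be verified numerically at the level of the vacuum amplitude $\operatorname{tr}(B\,\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int Q})$. The sketch below is ours and assumes $x$-independent matrices, so that the path-ordered exponential reduces to an ordinary matrix exponential.

```python
import numpy as np

def expm(A):
    # Matrix exponential via eigendecomposition (fine for generic diagonalizable A).
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

def direct_sum(A, B):
    # Block-diagonal embedding A ⊕ B on the (D + D')-dimensional ancilla space.
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]), dtype=complex)
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

rng = np.random.default_rng(1)
D, Dp, L = 2, 3, 1.0
Q = 0.3 * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
Qp = 0.3 * (rng.standard_normal((Dp, Dp)) + 1j * rng.standard_normal((Dp, Dp)))
B_mat, Bp_mat = np.eye(D), np.eye(Dp)

# Sum of two cMPS: tilde parameters are direct sums, and the amplitudes add,
# because the exponential of a block-diagonal Q-tilde stays block diagonal.
Qt, Bt = direct_sum(Q, Qp), direct_sum(B_mat, Bp_mat)
amp = np.trace(B_mat @ expm(L * Q))
ampp = np.trace(Bp_mat @ expm(L * Qp))
amp_sum = np.trace(Bt @ expm(L * Qt))
assert np.isclose(amp_sum, amp + ampp)

# Rescaling |Psi> -> lam |Psi>: shift Q by mu*identity, with exp(mu * L) = lam.
lam = 2.5
mu = np.log(lam) / L
amp_scaled = np.trace(B_mat @ expm(L * (Q + mu * np.eye(D))))
assert np.isclose(amp_scaled, lam * amp)
```

The same two identities hold for the full states (not just the vacuum amplitude), since $\mu{\openone}_{D}$ commutes with everything in the path-ordered exponential and the direct-sum structure is preserved term by term in the expansion.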
Fock space embedding {#ss:def:fockembedding}
--------------------
The embedding of $\ket{\Psi[Q,\{R_{\alpha}\}]}\in{\ensuremath{\mathcal{V}}}_{\text{cMPS}(D)}$ in the Fock space ${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$ for finite $\lvert{\ensuremath{\mathcal{R}}}\rvert$ can be made explicit by expanding the path ordered exponential as $$\begin{gathered}
\ket{\Psi[Q,\{R_{\alpha}\}]}=\sum_{N=0}^{+\infty} \int_{-L/2\leq x_{1}\leq \cdots \leq x_{N}\leq L/2}{\ensuremath{\mathrm{d}}}x_{1}\cdots {\ensuremath{\mathrm{d}}}x_{N}\\
\operatorname{tr}\Bigg[ B \bigg(Q(x_1)\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\alpha_1=1}^{q}R_{\alpha_1}(x_1) \otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha_1}(x_1) \bigg)\times \cdots\\
\times \bigg(Q(x_N)\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\alpha_N=1}^{q}R_{\alpha_N}(x_N) \otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha_N}(x_N) \bigg)\Bigg]\ket{\Omega}.\end{gathered}$$ We can then expand the round brackets and reorder the sum in terms of the actual number of created particles by grouping subsequent occurrences of the $Q$ term, so as to obtain $$\begin{gathered}
\ket{\Psi[Q,\{R_{\alpha}\}]}=\sum_{N=0}^{+\infty} \sum_{\alpha_1,\ldots,\alpha_N=1}^{q} \int_{-L/2\leq x_{1}\leq \cdots \leq x_{N}\leq L/2}{\ensuremath{\mathrm{d}}}x_{1}\cdots {\ensuremath{\mathrm{d}}}x_{N}\\
\operatorname{tr}\bigg[ B M_Q(-L/2,x_1) R_{\alpha_1}(x_1) M_Q(x_1,x_2) \cdots R_{\alpha_N}(x_N) M_Q(x_N,L/2) \bigg]\\
{\ensuremath{\hat{\psi}^\dagger}}_{\alpha_1}(x_1){\ensuremath{\hat{\psi}^\dagger}}_{\alpha_2}(x_2)\cdots {\ensuremath{\hat{\psi}^\dagger}}_{\alpha_N}(x_N)\ket{\Omega},\label{eq:cmpsfockembedding}\end{gathered}$$ with $$M_Q(x,y)=\sum_{k=0}^{+\infty} \int_{x\leq z_1\leq \cdots \leq z_k \leq y} {\ensuremath{\mathrm{d}}}z_1\cdots {\ensuremath{\mathrm{d}}}z_k Q(z_1) \cdots Q(z_k)= \mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{x}^{y} Q(z) {\ensuremath{\mathrm{d}}}z}.$$ Eq. shows how a cMPS can be interpreted as a superposition over the different particle number sectors in the Fock space. Note that this is not completely equivalent to the different sectors ${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{\{N_{\alpha}\}_{\alpha=1,\ldots,q}}$ in the direct product construction of ${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$ \[Eq. \], since now only the total number of particles $N=\sum_{\alpha=1}^{q} N_{\alpha}$ is fixed. If we define the $N$-particle wave functions as $$\phi_{\alpha_{1},\ldots,\alpha_{N}}(x_{1},\ldots,x_{N})=\braket{\Omega|{\ensuremath{\hat{\psi}}}_{\alpha_{N}}(x_{N})\cdots {\ensuremath{\hat{\psi}}}_{\alpha_{1}}(x_{1})|\Psi[Q,\{R_{\alpha}\}]},\label{eq:defphiN}$$ then we can infer from Eq. that $$\begin{gathered}
\phi_{\alpha_{1},\ldots,\alpha_{N}}(x_{1},\ldots,x_{N}) =\\
\operatorname{tr}\bigg[ B M_Q(-L/2,x_1) R_{\alpha_1}(x_1) M_Q(x_1,x_2) \cdots R_{\alpha_N}(x_N) M_Q(x_N,L/2) \bigg]\label{eq:cmpsNparticle}\end{gathered}$$ only when $x_1\leq x_2\leq \cdots \leq x_N$. It can be extended to any other order of the arguments by reordering the annihilation operators in Eq. according to the given commutation or anticommutation relations in Eq. . The non-relativistic kinetic energy requires that these functions are sufficiently regular, which together with the extension to arbitrary order of the arguments imposes certain non-trivial constraints on the matrix functions $Q$ and $R_{\alpha}$ that are to be discussed in Section \[s:regularity\].
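For a constant $Q$ and a single bosonic species, the expression above reduces to an alternating product of matrix exponentials and $R$ insertions, which is straightforward to evaluate. The sketch below (an illustration under these simplifying assumptions; the function name `phi_N` is ours) computes the $N$-particle amplitude for sorted arguments:

```python
import numpy as np
from scipy.linalg import expm

def phi_N(xs, Q, R, B, L):
    """Evaluate tr[B M_Q(-L/2,x1) R M_Q(x1,x2) R ... R M_Q(xN,L/2)]
    for constant Q and R (single bosonic species); xs must be sorted."""
    M = B.astype(complex)
    prev = -L / 2
    for x in xs:
        M = M @ expm((x - prev) * Q) @ R   # propagate, then insert a particle
        prev = x
    return np.trace(M @ expm((L / 2 - prev) * Q))

# Consistency check: for Q = q*1 the exponentials are scalars and
# phi_N collapses to exp(q*L) * tr[B R^N].
D, L, q_scalar = 3, 2.0, 0.3
rng = np.random.default_rng(0)
R = rng.standard_normal((D, D))
B = np.eye(D)
xs = [-0.5, 0.2, 0.9]
val = phi_N(xs, q_scalar * np.eye(D), R, B, L)
ref = np.exp(q_scalar * L) * np.trace(B @ R @ R @ R)
print(np.allclose(val, ref))   # True
```

For several species or fermionic statistics the same loop applies, but the extension to unsorted arguments must respect the (anti)commutation signs discussed below.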
The continuum limit of matrix product states {#ss:def:continuum}
--------------------------------------------
The cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$ was originally constructed in Ref. as the continuum limit of a certain subset of MPS, where the subset was selected in such a way as to obtain a valid continuum limit. We explore this construction in greater detail and elaborate on some of the non-trivial implications regarding ultraviolet cutoffs and correlation lengths (infrared cutoffs).
We approximate the continuum ${\ensuremath{\mathcal{R}}}=[-L/2,L/2]$ by a lattice ${\ensuremath{\mathcal{L}}}$ with lattice spacing $a$ and $N=L/a$ sites, where we send $a\to 0$. On every site of the lattice we can create and annihilate particles of type $\alpha$ by acting with the creation and annihilation operators ${\ensuremath{\hat{c}}}_{\alpha}^{\dagger}(n)$ and ${\ensuremath{\hat{c}}}_{\alpha}(n)$. We can relate them to the field operators by $$\begin{aligned}
{\ensuremath{\hat{c}}}_{\alpha}(n)=\int_{na}^{(n+1) a} {\ensuremath{\hat{\psi}}}_{\alpha}(x)\, {\ensuremath{\mathrm{d}}}x\end{aligned}$$ and its hermitian conjugate. The local basis on site $n$ thus consists of the states $\ket{0}_{n}$ (no particles), $\ket{\alpha}_{n}=c_{\alpha}^{\dagger}(n)\ket{0}_{n}$, $\ket{\alpha,\beta}_{n}=c_{\alpha}^{\dagger}(n)c_{\beta}^{\dagger}(n)\ket{0}_{n}$, … On this lattice, we can define an MPS $\ket{\Psi[A]}$ with matrices $A^{s}(n)$ where $s$ can take values $0$, $\alpha$, $(\alpha,\beta)$, … If the local basis is infinite-dimensional, this MPS definition is only formal, *i.e.* it cannot be used for practical computations. In the limit $a\to 0$, the number of sites $L/a$ in the lattice ${\ensuremath{\mathcal{L}}}$ goes to infinity.
On an infinite number of lattice sites, two arbitrary MPS are generally orthogonal due to the (infrared) orthogonality catastrophe[@Anderson:1967aa]. Since we now aim to create quantum field states within the Fock space ${\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$, we need to restrict to a special subset of MPS where the total number of particles is finite (on average, so that $\braket{ {\ensuremath{\hat{N}}}}$ is finite). Since a finite number of particles has to be distributed over a diverging number of sites $L/a$, most of the sites in the lattice ${\ensuremath{\mathcal{L}}}$ are empty on average. So $A^{0}$ has to be the dominant matrix, and it turns out that the cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}\in{\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\text{F})}$ can be obtained from the continuum limit ($a\to 0$) of the MPS $\ket{\Psi[A]}\in{\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{L}}}}$ by identifying ${\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(n a)={\ensuremath{\hat{c}}}^{\dagger}_{\alpha}(n)/\sqrt{a}$ and $$\begin{aligned}
A^{0}(n)&={\openone}_{D}+a Q(n a),\nonumber\\
A^{\alpha}(n) &= \sqrt{a} R_{\alpha}(n a),\nonumber\\
A^{(\alpha,\beta)}(n) &= \begin{cases} \frac{a}{2} [ R_{\alpha}(n a) R_{\beta}(n a)+\eta_{\alpha,\beta} R_{\beta}(n a) R_{\alpha}(n a)],& \alpha\neq \beta\\
\frac{a}{2} R_{\alpha}(n a)^{2},&\alpha=\beta
\end{cases}\label{eq:correspondencemps}\\
&\ldots\nonumber\end{aligned}$$ together with $\ket{\Omega}=\ket{\bm{0}}=\otimes_{n\in{\ensuremath{\mathcal{L}}}} \ket{0}_{n}$, $\forall n=-L/2a,-L/2a+1,\ldots,+L/2a-1$. This equivalence can be obtained from a Taylor expansion of the $\exp$-operator, although this is only completely rigorous when the entries of $Q$ and $R_{\alpha}$ are finite and the operators ${\ensuremath{\hat{\psi}^\dagger}}(x)$ are bounded (*i.e.* not for bosons). Most results for cMPS in the remainder of this chapter can be derived from this correspondence with MPS, but we attempt to derive these results directly in the continuum as much as possible.
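The identification above can be checked on the simplest piece of the correspondence: for a constant $Q$, the product of the empty-site tensors $A^{0}={\openone}_{D}+aQ$ over all $L/a$ sites should converge to $\mathrm{e}^{LQ}$ as $a\to 0$. A small numerical sketch (ours, for illustration):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
D, L = 3, 1.0
Q = 0.5 * rng.standard_normal((D, D))

exact = expm(L * Q)           # continuum path-ordered exponential (constant Q)
errors = []
for N in (10, 100, 1000):
    a = L / N                 # lattice spacing
    approx = np.linalg.matrix_power(np.eye(D) + a * Q, N)
    errors.append(np.linalg.norm(approx - exact))
print(errors)                 # shrinks roughly as 1/N
```

Terms with particles inserted converge analogously, with each occupied site contributing a factor $\sqrt{a}R_{\alpha}$ that combines with the field normalization ${\ensuremath{\hat{c}}}^{\dagger}_{\alpha}(n)/\sqrt{a}$.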
The correspondence with MPS is useful for concluding that the entanglement of one half of the chain with the other half (in the case of open boundary conditions) is limited by the upper bound $\log D$. By restricting to MPS within a single Fock space in the thermodynamic limit, we avoid the orthogonality catastrophe. The infrared orthogonality catastrophe of MPS in the thermodynamic limit would turn into an ultraviolet catastrophe if this infinitely sized lattice ${\ensuremath{\mathcal{L}}}$ were to correspond to the continuum limit of a finitely sized continuum ${\ensuremath{\mathcal{R}}}$. Physically, the ultraviolet catastrophe is avoided because the finite number of particles induces a physical cutoff $a_{\text{phys}}$ that is given not by the lattice spacing $a\to 0$, but by $a_{\text{phys}}=\rho^{-1}$ with $\rho=\braket{{\ensuremath{\hat{N}}}}/L$ the particle density[^1]. The presence of a physical length scale can be detected from the physical dimensions of $Q$ and $R_{\alpha}$, which are given by $[Q]=\ell^{-1}$ and $[R]=\ell^{-1/2}$ with $\ell$ a generic length dimension. The nature of the physical cutoff $a_{\text{phys}}$ and its relation to $Q$ and $R_{\alpha}$ is discussed in Section \[s:ti\] for the translation invariant case, where it can unambiguously be defined. Shifting the cutoff from the lattice spacing $a$ to a physical value $a_{\text{phys}}$ is a very important step in the definition of cMPS. MPS with finite bond dimension $D$ have a finite amount of entanglement, to which corresponds in general a finite range of fluctuations and hence a finite dimensionless correlation length $\tilde{\xi}=\xi/a$, where $\xi$ denotes the physical correlation length. As $a$ is scaled to zero while $\tilde{\xi}$ remains finite, the physical correlation length $\xi$ would also scale to zero.
It is because the physical cutoff is shifted to a finite value $a_{\text{phys}}$ (with thus $a_{\text{phys}}/a\to \infty$) that cMPS are able to combine a finite amount of entanglement with a finite physical correlation length $\xi$ (with thus $\xi/a\to \infty$ but with $\xi/a_{\text{phys}}$ finite). The physical correlation length $\xi$ is also computed in Section \[s:ti\] for the translation invariant case.
Alternative construction through continuous measurement {#ss:def:continuousmeasurement}
-------------------------------------------------------
Rather than trying to construct a cMPS as the continuum limit of an MPS, we could also try to directly define the continuum limit of the processes that define MPS. Unfortunately, the process of sequential Schmidt decompositions has no straightforward generalization to the continuum, and neither does the definition of valence bond solids. One can, however, define a continuum version of the sequential generation process that creates MPS[@2005PhRvL..95k0503S], based on the paradigm of continuous measurement [@Caves:1987aa]. The resulting process for creating cMPS is described in Ref. , and is here summarised for the sake of completeness.
As in the discrete case, let the ancilla start in a state $\bm{v}_{\text{R}}\in{\ensuremath{{\ensuremath{\mathbb{H}}}}}_{\text{ancilla}}=\mathbb{C}^{D}$. This ancilla can be interpreted as a resonating cavity with $D$ internal levels, in which there is a particle source that creates particles of type $\alpha$ ($\alpha=1,\ldots,q$). These particles gradually leave the cavity due to cavity losses. Since particles leaving the cavity at different times occupy different positions in space at a given time (since they travel at a certain speed which we set equal to one), the resulting configuration of particles can be interpreted as a static spatially distributed quantum state. For a compact cavity (*i.e.* a zero-dimensional system), the resulting quantum state is one-dimensional. As an abstraction of this physical process, a $(d-1)$-dimensional cavity can be used to encode a $d$-dimensional holographic quantum state. We refer to Ref. for the general case, and henceforth restrict to the $d=1$ case that produces cMPS.
Between two particle emissions, the cavity evolves according to a Hamiltonian $K\in\operatorname{\mathbb{L}}(\mathbb{C}^D)$ (a Hermitian $D\times D$ matrix), whereas the physical state outside the cavity does not evolve. By observing the particles that are emitted from the cavity, we are continuously measuring the state of the cavity (*i.e.* the ancilla). The state of the cavity at time $t$ is encoded in the particle distribution at position $x=-t$. It was shown that the resulting configuration of particles outside the cavity is given by $$\bm{v}_{\mathrm{L}}^{\dagger}{\ensuremath{\mathscr{P}\exp}}\left(-{\ensuremath{\mathrm{i}}}\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, K(x)\otimes{\ensuremath{\hat{{\openone}}}} + \sum_{\alpha=1}^{q}{\ensuremath{\mathrm{i}}}R_{\alpha}(x)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x)-{\ensuremath{\mathrm{i}}}R_{\alpha}(x)^{\dagger}\otimes {\ensuremath{\hat{\psi}}}_{\alpha}(x)\right) \bm{v}_{\mathrm{R}} \ket{\Omega},\label{eq:defcontmeasurement}$$ where the ancilla is projected onto the state $\bm{v}_{\mathrm{L}}$ at the end of the measurement, in order to decouple it from the physical state. The resulting expression does not yet correspond exactly to Eq. but it can easily be brought into the required form by using the Baker-Campbell-Hausdorff formula on every infinitesimal patch of the path ordered exponential. We then obtain that the state in Eq. is contained within ${\ensuremath{\mathcal{V}}}_{\mathrm{cMPS}}$, as it is equal to $\ket{\Psi[Q,\{R_{\alpha}\}]}$ for the specific choice $$\begin{aligned}
Q(x)=-{\ensuremath{\mathrm{i}}}K(x) -\frac{1}{2}\sum_{\alpha=1}^{q} R_{\alpha}(x)^{\dagger}R_{\alpha}(x).\label{eq:qunitary}\end{aligned}$$ We recall that $K(x)$ is a Hermitian matrix. Generic cMPS can be brought into this form by using the gauge invariance of the cMPS representation, as discussed in Section \[s:gauge\].
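The choice of $Q$ above implies the algebraic relation $Q+Q^{\dagger}+\sum_{\alpha}R_{\alpha}^{\dagger}R_{\alpha}=0$, which underlies the unitarity of the path ordered exponential. A minimal numerical sketch (ours, for illustration) of this construction:

```python
import numpy as np

rng = np.random.default_rng(2)
D, q = 4, 2
A = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
K = (A + A.conj().T) / 2          # Hermitian cavity Hamiltonian
Rs = [rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
      for _ in range(q)]

# Q built as in the continuous-measurement construction:
# Q = -i K - (1/2) sum_a R_a^dag R_a.
Q = -1j * K - 0.5 * sum(R.conj().T @ R for R in Rs)

# Verify Q + Q^dag + sum_a R_a^dag R_a = 0.
lhs = Q + Q.conj().T + sum(R.conj().T @ R for R in Rs)
print(np.allclose(lhs, 0.0))   # True
```

In the language of Section \[s:gauge\], this is the left gauge-fixed form of a generic cMPS.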
This construction allows us to introduce a unitary operator ${\ensuremath{\hat{U}}}(y,z)\in\operatorname{\mathbb{L}}(\mathbb{C}^{D}\otimes {\ensuremath{{\ensuremath{\mathbb{H}}}}})$ $${\ensuremath{\hat{U}}}(y,z)={\ensuremath{\mathscr{P}\exp}}\left(-{\ensuremath{\mathrm{i}}}\int_{z}^{y}{\ensuremath{\mathrm{d}}}x\, K(x)\otimes{\ensuremath{\hat{{\openone}}}} + \sum_{\alpha=1}^{q}{\ensuremath{\mathrm{i}}}R_{\alpha}(x)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x)-{\ensuremath{\mathrm{i}}}R_{\alpha}(x)^{\dagger}\otimes {\ensuremath{\hat{\psi}}}_{\alpha}(x)\right).\label{eq:defUalternative}$$ Being a unitary operator, it conserves the norm of $\bm{v}_{\mathrm{R}}\otimes\ket{\Omega}$. This does not imply that the cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$ with $Q$ given by Eq. is automatically normalized to unity, because the definition also involves a projection onto $\bm{v}_{\mathrm{L}}$. But the unitarity of ${\ensuremath{\hat{U}}}(y,z)$ in Eq. does guarantee that $\ket{\Psi[Q,\{R_{\alpha}\}]}$ can easily be normalized, as its norm neither diverges nor vanishes in the large volume limit.
From a physical perspective, this construction is important as it clearly sketches the holographic properties of the cMPS. The physical state of a one-dimensional system is described by a zero-dimensional boundary theory. The spatial coordinate of the physical system acts as a time coordinate in the boundary theory. The physical state is created because the boundary theory interacts with the physical system, where the position of the interaction shifts linearly in time. This interaction results in the boundary theory not being at equilibrium. Instead, the boundary theory is subject to dissipative dynamics, as will become clear in the following section. This holographic property is of course strongly related to the intrinsic area law for entanglement that is present in cMPS.
Path integral representation {#ss:def:pathintegral}
----------------------------
Recently, it has also been illustrated that we can break up the path ordered exponential in the definition of $\ket{\Psi[Q,\{R_\alpha\}]}$ and insert resolutions of the identity in order to obtain a path integral description of the same state[@Brockt:fk]. The easiest way to insert an identity is by first introducing a second quantized version of the ancilla by making the substitution $$\begin{aligned}
Q(x)& \mapsto \hat{Q}(x)=Q^{j,k}(x) \hat{b}_j^\dagger \hat{b}_k,&R_{\alpha}(x) &\mapsto \hat{R}_{\alpha}(x)=R_{\alpha}^{j,k}(x) \hat{b}_j^\dagger \hat{b}_k,\end{aligned}$$ with $\hat{b}_j$ and $\hat{b}^\dagger_j$ annihilation and creation operators for bosonic or fermionic particles in level $j=1,\ldots,D$ of the ancilla (summation over the repeated indices $j,k$ is implied). The resolution of the identity can now be expressed in terms of coherent states. However, the ancilla Hilbert space is now an infinite-dimensional Fock space, whereas the original ancilla space was only $\mathbb{C}^D$ and corresponds to the single-particle sector of this Fock space. Because the operators $\hat{Q}(x)$ and $\hat{R}_{\alpha}(x)$ are particle-number preserving with respect to the ancilla, we can restrict the whole path integral to the single-particle sector by choosing appropriate boundary conditions. If $\ket{\omega}$ denotes the ancilla zero-particle state, then a restriction to the single-particle sector is obtained by identifying $$\begin{aligned}
B&\mapsto \hat{B}=B^{j,k} b^\dagger_j \ket{\omega}\bra{\omega} b_k.\end{aligned}$$ If we introduce the coherent states $$\ket{\phi}=\exp\left(\sum_{j=1}^{D} \phi_j \hat{b}^{\dagger}_j - \phi^\ast_j \hat{b}_j\right)\ket{\omega}$$ then we can write the identity as $$\hat{{\openone}}=\frac{1}{\pi^D} \int \prod_{j=1}^D {\ensuremath{\mathrm{d}}}\phi_j{\ensuremath{\mathrm{d}}}\phi_j^\ast \, \ket{\phi}\bra{\phi}.$$ Following the standard recipe, we can then obtain the path integral description of $\ket{\Psi[Q,\{R_{\alpha}\}]}$ as $$\begin{gathered}
\ket{\Psi[Q,\{R_{\alpha}\}]}=\\
\int \mathscr{D} \phi \mathscr{D}\phi^{\ast} \left(\phi(+L/2)^\dagger B \phi(-L/2)\right) {\ensuremath{\mathrm{e}}}^{-\frac{\lvert \phi(-L/2)\rvert^2}{2}-\frac{\lvert \phi(L/2)\rvert^2}{2}}\qquad\qquad\qquad\qquad\qquad\qquad\\
\times \exp\bigg[\int_{-L/2}^{+L/2} \Big\{\frac{1}{2}\phi^\dagger(x)\frac{{\ensuremath{\mathrm{d}}}\phi}{{\ensuremath{\mathrm{d}}}x}(x) -\frac{1}{2} \frac{{\ensuremath{\mathrm{d}}}\phi^{\dagger}}{{\ensuremath{\mathrm{d}}}x}(x) \phi(x) + \phi^\dagger(x)Q(x)\phi(x)\\
+ \sum_{\alpha=1}^{q} \left(\phi^\dagger(x) R_{\alpha}(x)\phi(x)\right) {\ensuremath{\hat{\psi}^\dagger}}_\alpha(x)\Big\}\, {\ensuremath{\mathrm{d}}}x \bigg]\ket{\Omega},\label{eq:pathintegralrepresentation}\end{gathered}$$ where $\phi(x)$ is a $D$-dimensional vector function with components $\phi_j(x)$, $j=1,\ldots,D$. This path integral representation can serve as a useful starting point for generalizations of the cMPS, *e.g.* by replacing the second quantized auxiliary system by a true field theory, so that this becomes the cMPS analogue of the construction in Ref. . If this field theory is a conformal field theory, it is then very close in spirit to some model states for quantum Hall systems[@Moore1991362; @Dubail:fk].
Regularity conditions {#s:regularity}
=====================
In Eq. we have defined the $N$-particle wave functions $\phi_{\alpha_{1},\ldots,\alpha_{N}}(x_{1},\ldots,x_{N})$. For $x_{1}\leq \cdots \leq x_{N}$ these are completely specified by Eq. . However, for general choices of the matrix functions $Q$ and $R_{\alpha}$, the extension of Eq. to all orderings of its arguments does not automatically possess the properties required of a physical $N$-particle wave function. For example, the $N$-particle wave functions should be differentiable in each of their arguments if the state is to produce a finite non-relativistic kinetic energy.
However, there is no need to work with the Fock space expansion of Eq. . We can check the regularity of the $N$-particle wave functions by immediately evaluating the kinetic energy in second quantization. For further reference, we first define $$\begin{aligned}
{\ensuremath{\hat{U}}}(x,y)=\mathscr{P} \exp\left[\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, \left\{Q(z)\otimes {\ensuremath{\hat{{\openone}}}} + \sum_{\alpha=1}^{q}R_{\alpha}(z)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(z)\right\}\right]\label{eq:defU},\end{aligned}$$ where ${\ensuremath{\hat{U}}}(x,y)\in\operatorname{\mathbb{L}}({\ensuremath{{\ensuremath{\mathbb{H}}}}}\otimes \mathbb{C}^{D})$ with $\mathbb{C}^{D}$ the ancilla space, *i.e.* it is a $D\times D$ matrix of operators. Unlike the operator ${\ensuremath{\hat{U}}}(y,z)$ defined in Subsection \[ss:def:continuousmeasurement\], the operator in Eq. is not unitary. It only equals the unitary version when acting on $\ket{\Omega}$ and if $Q(z)$ is given by Eq. . In addition, we define a closely related set of operators ${\ensuremath{\hat{U}}}_\alpha(x,y)$ ($\alpha=1,\ldots,q$) as $${\ensuremath{\hat{U}}}_{\alpha}(x,y)=\mathscr{P} \exp\left[\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\,\left\{ Q(z)\otimes {\ensuremath{\hat{{\openone}}}} + \sum_{\beta=1}^{q}\eta_{\alpha,\beta}R_{\beta}(z)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(z)\right\}\right]\label{eq:defUalpha}.$$ In order to compute any expectation value, which is the topic of the next section, we need to be able to act with the field annihilation operators ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$ on the state $\ket{\Psi[Q,\{R_{\alpha}\}]}$. If we are able to drag ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$ through the path-ordered exponential, it then acts on $\ket{\Omega}$, which is annihilated by any field operator. We can now use Eq.
as derived in Appendix \[a:formula\], where ${\ensuremath{\hat{B}}}={\ensuremath{\hat{\psi}}}_{\alpha}(x)$, ${\ensuremath{\hat{A}}}_1(z)$ contains both $Q(z)\otimes {\ensuremath{\hat{{\openone}}}}$ and any term $R_{\beta}(z)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(z)$ for which $\eta_{\alpha,\beta}=1$, and ${\ensuremath{\hat{A}}}_2(z)$ contains the terms $R_{\beta}(z)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(z)$ for which $\eta_{\alpha,\beta}=-1$. We then obtain $${\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{U}}}(-L/2,+L/2)-{\ensuremath{\hat{U}}}_{\alpha}(-L/2,+L/2){\ensuremath{\hat{\psi}}}_{\alpha}(x)={\ensuremath{\hat{U}}}_{\alpha}(-L/2,x) R_{\alpha} {\ensuremath{\hat{U}}}(x,+L/2)$$ which immediately results in $${\ensuremath{\hat{\psi}}}_{\alpha}(x) \ket{\Psi[Q,\{R_{\beta}\}]}=\operatorname{tr}\left[B {\ensuremath{\hat{U}}}_{\alpha}(-L/2,x) R_{\alpha}(x) {\ensuremath{\hat{U}}}(x,+L/2)\right]\ket{\Omega}.\label{eq:psiPsi}$$ Hence, acting with an annihilation operator of type $\alpha$ at position $x$ not only lowers a matrix $R_{\alpha}(x)$, but also transforms the path ordered exponential ${\ensuremath{\hat{U}}}(-L/2,x)$ into ${\ensuremath{\hat{U}}}_{\alpha}(-L/2,x)$, because we had to take the particle statistics into account for bringing ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$ to the position where it could lower $R_{\alpha}(x)$.
The non-relativistic kinetic energy operator ${\ensuremath{\hat{T}}}$ is given by $${\ensuremath{\hat{T}}}=\int_{-L/2}^{+L/2}{\ensuremath{\hat{t}}}(x)\,{\ensuremath{\mathrm{d}}}x,$$ where the kinetic energy density ${\ensuremath{\hat{t}}}(x)$ at position $x$ is given by $${\ensuremath{\hat{t}}}(x)=\sum_{\alpha=1}^{q} \frac{1}{2 m_{\alpha}} \left(\frac{{\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x)\right) \left(\frac{{\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x)\right).$$ Hence, a finite kinetic energy expectation value $\braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|{\ensuremath{\hat{T}}}|\Psi[Q,\{R_{\alpha}\}]}$ requires that the state $\frac{{\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x)\ket{\Psi[Q,\{R_{\alpha}\}]}$ has a finite norm. Differentiating Eq. and using Eq. , we obtain $$\begin{aligned}
\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}}}_{\alpha}(x)& \ket{\Psi[Q,\{R_{\beta}\}]}\nonumber\\
=&\operatorname{tr}\Bigg[B {\ensuremath{\hat{U}}}_{\alpha}(-L/2,x) \bigg([Q(x),R_{\alpha}(x)]+\frac{{\ensuremath{\mathrm{d}}}R_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x)\bigg){\ensuremath{\hat{U}}}(x,+L/2)\Bigg]\ket{\Omega}\nonumber\\
&+\operatorname{tr}\Bigg[B {\ensuremath{\hat{U}}}_{\alpha}(-L/2,x) \bigg(\sum_{\beta=1}^{q}\big[\eta_{\alpha,\beta} R_{\beta}(x)R_{\alpha}(x)\nonumber\\
&\qquad\qquad\qquad\qquad\qquad- R_{\alpha}(x)R_{\beta}(x)\big]\otimes{\ensuremath{\hat{\psi}^\dagger}}_{\beta}(x)\bigg){\ensuremath{\hat{U}}}(x,+L/2)\Bigg]\ket{\Omega}.\label{eq:diffpsiPsi}\end{aligned}$$ The term on the first line can be shown to have a finite norm (see next section), provided of course that $R_\alpha(x)$ is a differentiable function with a well-behaved derivative ${\ensuremath{\mathrm{d}}}R_\alpha(x)/d x$ at any $x\in{\ensuremath{\mathcal{R}}}$. Since the term on the second line of Eq. has particles of any species $\beta=1,\ldots,q$ being created at the fixed position $x$, this term is not normalizable. Put differently, $\lVert ({\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}(x)/{\ensuremath{\mathrm{d}}}x)\ket{\Psi[Q,\{R_{\alpha}\}]}\rVert^{2}$ contains a divergent contribution $\delta(0)$ (in position space), unless we impose the *regularity condition* $$\begin{aligned}
\eta_{\alpha,\beta} R_{\beta}(x)R_{\alpha}(x) -R_{\alpha}(x) R_{\beta}(x)=0, \quad \forall x\in {\ensuremath{\mathcal{R}}}.\label{eq:regcondition}\end{aligned}$$ Hence, the matrices $R_{\alpha}$ should have the same statistics as the particle creation operators to which they couple. For systems with a single species of bosons, the condition in Eq. is automatically fulfilled. For systems with multiple species of bosons, it requires that any two matrices $R_{\alpha}(x)$ and $R_{\beta}(x)$ at the same spatial point $x$ commute. If $\alpha$ is a fermionic particle species, the corresponding matrix $R_{\alpha}(x)$ has to satisfy $R_{\alpha}(x)^{2}=0$, $\forall x\in{\ensuremath{\mathcal{R}}}$. When two particles of fermionic type $\alpha$ approach each other, there is a corresponding factor $R_{\alpha}(y) {\ensuremath{\mathscr{P}\exp}}(\int_{y}^{z}{\ensuremath{\mathrm{d}}}x\, Q(x)) R_{\alpha}(z)$ in the $N$-particle wave function $\phi_{\alpha_{1},\ldots,\alpha,\alpha,\ldots \alpha_{N}}(x_{1},\ldots,y,z,\ldots,x_{N})$. For $y\to z$, the exponential factor continuously evolves towards ${\openone}_{D}$, so that the $N$-particle wave function continuously goes to zero. Hence, the finiteness of the kinetic energy requires that two fermionic particles of the same type cannot come arbitrarily close together, and thus imposes Pauli’s principle.
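The regularity condition can be checked directly on candidate matrices. The sketch below (illustrative; the helper name `regularity_violation` is ours) verifies it for two commuting bosonic matrices and for a nilpotent fermionic matrix:

```python
import numpy as np

def regularity_violation(R_a, R_b, eta):
    """Norm of eta*R_b R_a - R_a R_b, which the regularity
    condition requires to vanish."""
    return np.linalg.norm(eta * R_b @ R_a - R_a @ R_b)

# Bosonic pair (eta = +1): any two polynomials in the same matrix commute.
X = np.diag([1.0, 2.0, 3.0])
v_bos = regularity_violation(X, X @ X + 0.5 * np.eye(3), +1)

# Fermionic species (eta = -1 with itself): R must be nilpotent, R^2 = 0.
Rf = np.array([[0.0, 1.0], [0.0, 0.0]])
v_fer = regularity_violation(Rf, Rf, -1)

print(v_bos, v_fer)   # 0.0 0.0
```

For the fermionic case the violation equals $2\lVert R_{\alpha}^{2}\rVert$, so it vanishes exactly when $R_{\alpha}$ is nilpotent.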
Differentiability of the wave function is sufficient for a finite kinetic energy, which is by far the most important physical requirement of the wave function. We can also impose higher regularity constraints on the $N$-particle wave functions. Since these do not, in general, arise from physical considerations, we postpone this discussion to Appendix \[a:higherorderregularity\]. While the resulting conditions are interesting from an algebraic point of view, they are in general hard to satisfy with finite-dimensional matrices. For practical applications, satisfying the original condition in Eq. , as imposed by the finiteness of the kinetic energy, should be sufficient.
We conclude this subsection by investigating what else can be learned from the physical considerations concerning particle statistics. The regularity conditions \[Eq. \] already require that the matrices $R_{\alpha}$ behave as the corresponding operators ${\ensuremath{\hat{\psi}}}_{\alpha}$ in terms of commutation and anticommutation relations. In a physical system, we should not have fermionic condensates, *i.e.* $\braket{\Psi|{\ensuremath{\hat{\psi}}}_{\alpha}(x)|\Psi}=0$ if particle species $\alpha$ is fermionic. This is a consequence of the invariance of a physical Hamiltonian ${\ensuremath{{\ensuremath{\hat{H}}}}}$ under the action of the parity operator ${\ensuremath{\hat{P}}}$, which flips the sign of any fermionic operator (${\ensuremath{\hat{P}}}{\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{P}}}=\eta_{\alpha,\alpha}{\ensuremath{\hat{\psi}}}_{\alpha}(x)$) and is thus involutory (${\ensuremath{\hat{P}}}={\ensuremath{\hat{P}}}^{-1}={\ensuremath{\hat{P}}}^{\dagger}$). We can construct ${\ensuremath{\hat{P}}}$ as $${\ensuremath{\hat{P}}}=\exp\left[{\ensuremath{\mathrm{i}}}\pi \sum_{\alpha\ \text{fermionic}} {\ensuremath{\hat{N}}}_{\alpha}\right]=\exp\left[{\ensuremath{\mathrm{i}}}\pi \sum_{\alpha\ \text{fermionic}} \int_{{\ensuremath{\mathcal{R}}}} {\ensuremath{\mathrm{d}}}x\, {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\alpha}(x)\right].$$ Physical states satisfy ${\ensuremath{\hat{P}}}\ket{\Psi}={\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}\phi} \ket{\Psi}$, where ${\ensuremath{\hat{P}}}^{2}={\ensuremath{\hat{{\openone}}}}$ requires $\phi=0$ or $\phi=\pi$. Physical states are thus superpositions of states that all have either an even or an odd number of fermions. Imposing this same property for cMPS requires one to explicitly incorporate the $\mathbb{Z}_{2}$ symmetry (with group elements $\{{\ensuremath{\hat{{\openone}}}},{\ensuremath{\hat{P}}}\}$) in the matrix structure of $R_{\alpha}$ and $Q$.
Since ${\ensuremath{\hat{P}}}\ket{\Psi[Q,\{R_{\alpha}\}]}=\ket{\Psi[Q,\{\eta_{\alpha,\alpha} R_{\alpha}\}]}$, we should also be able to define a virtual operator $P\in\operatorname{\mathbb{L}}(\mathbb{C}^{D})$ such that $P Q P^{-1}=Q$ and $P R_{\alpha} P^{-1} =\eta_{\alpha,\alpha} R_{\alpha}$. This operator can in principle be $x$-dependent, but we should then be able to apply a local gauge transformation (see Section \[s:gauge\]) in order to make $P$ space-independent. In addition, it is clear from the definition that $P$ is involutory ($P=P^{-1}$). If we can assume that $P$ is diagonalizable, then $P$ divides the ancilla space $\mathbb{C}^{D}$ into a sector with positive parity (eigenspace of eigenvalue $+1$) and a sector with negative parity (eigenspace of $-1$). A global gauge transformation brings $P$ into the diagonal form $$P=\begin{bmatrix} {\openone}_{D^{(+)}} & 0_{D^{(+)}\times D^{(-)}} \\ 0_{D^{(-)}\times D^{(+)}} & -{\openone}_{D^{(-)}}\end{bmatrix}$$ with $D^{(+)}+D^{(-)}=D$. The required transformation behavior of $Q$ and $R_{\alpha}$ then imposes the following decomposition $$\begin{aligned}
Q&=\begin{bmatrix} Q^{(+)} & 0_{D^{(+)}\times D^{(-)}} \\ 0_{D^{(-)}\times D^{(+)}} & Q^{(-)}\end{bmatrix},\\
R_{\alpha}&=\begin{bmatrix} R_{\alpha}^{(+)} & 0_{D^{(+)}\times D^{(-)}} \\ 0_{D^{(-)}\times D^{(+)}} & R_{\alpha}^{(-)} \end{bmatrix}\qquad \text{(particle species $\alpha$ is bosonic)},\\
R_{\alpha}&=\begin{bmatrix} 0_{D^{(+)}\times D^{(+)}} & R_{\alpha}^{(+-)} \\ R_{\alpha}^{(-+)} & 0_{D^{(-)}\times D^{(-)}}\end{bmatrix}\qquad \text{(particle species $\alpha$ is fermionic)}.\end{aligned}$$ In the cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$, all contributions with either an even or an odd number of fermions in Eq. drop out, depending on the boundary matrices $B$. If only states with an even number of fermions are allowed, $B$ should have a decomposition as $$\begin{aligned}
B&=\begin{bmatrix} B^{(+)} & 0_{D^{(+)}\times D^{(-)}} \\ 0_{D^{(-)}\times D^{(+)}} & B^{(-)}\end{bmatrix},\end{aligned}$$ whereas a decomposition of the form $$\begin{aligned}
B&=\begin{bmatrix} 0_{D^{(+)}\times D^{(+)}} & B^{(+-)} \\ B^{(-+)} & 0_{D^{(-)}\times D^{(-)}}\end{bmatrix}\end{aligned}$$ is required to select only states with an odd number of fermions.
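This block structure is easy to realize explicitly. The following sketch (ours, with arbitrarily chosen sector dimensions) builds a diagonal parity operator $P$, a block-diagonal $Q$, and a block-off-diagonal fermionic $R_{\alpha}$, and verifies the required transformation behavior:

```python
import numpy as np

rng = np.random.default_rng(3)
Dp, Dm = 2, 3                          # dimensions of the +/- parity sectors
P = np.diag([1.0] * Dp + [-1.0] * Dm)  # diagonal parity operator, P = P^{-1}

# Block-diagonal Q (parity preserving) and block-off-diagonal fermionic R.
Q = np.block([[rng.standard_normal((Dp, Dp)), np.zeros((Dp, Dm))],
              [np.zeros((Dm, Dp)), rng.standard_normal((Dm, Dm))]])
R = np.block([[np.zeros((Dp, Dp)), rng.standard_normal((Dp, Dm))],
              [rng.standard_normal((Dm, Dp)), np.zeros((Dm, Dm))]])

ok_Q = np.allclose(P @ Q @ P, Q)    # P Q P^{-1} = Q
ok_R = np.allclose(P @ R @ P, -R)   # P R P^{-1} = -R
print(ok_Q, ok_R)                   # True True
```

Note that such a block-off-diagonal $R$ is in general not nilpotent by itself; the condition $R_{\alpha}^{2}=0$ from the regularity discussion is a separate constraint.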
Boundary conditions {#s:bc}
===================
We have already mentioned in Section \[s:def\] that the type of boundary conditions —open or periodic— is encoded in the rank of the boundary matrix $B$. For a system with periodic boundary conditions, $B$ has full rank and is typically chosen to be the identity ($B={\openone}_{D}$). Since periodic boundary conditions identify the points $x=-L/2$ and $x=+L/2$, it is natural to assume that the matrix functions $Q$ and $R_{\alpha}$ are also single-valued, *i.e.* $Q(-L/2)=Q(+L/2)$ and $R_{\alpha}(-L/2)=R_{\alpha}(+L/2)$ for all $\alpha=1,\ldots,q$.
For a system with open boundary conditions, it is suitable to work with a boundary matrix of the form $B=\bm{v}_{\mathrm{R}}\bm{v}_{\mathrm{L}}^{\dagger}$, *i.e.* the rank of $B$ is one. However, in the case of open boundary conditions, physical requirements impose additional conditions on the $N$-particle wave functions of Eq. . Typically, a finite system is interpreted as being embedded in an infinite system and having an infinitely strong potential energy outside of the interval ${\ensuremath{\mathcal{R}}}$, *i.e.* $v(x)=+\infty$ for $x<-L/2$ and $x>+L/2$. The single particle wave functions that build up the Fock space are zero outside ${\ensuremath{\mathcal{R}}}$. A finite kinetic energy imposes continuity, and thus requires that the single particle wave functions are zero at $x=\pm L/2$. Consequently, the resulting $N$-particle wave functions have to produce zero as soon as one of the arguments $x_i$ is equal to $\pm L/2$. Since this has to be true for any configuration of the remaining $N-1$ particles, we conclude that we have to impose $$\begin{aligned}
\bm{v}_{\mathrm{L}}^\dagger R_{\alpha}(-L/2)&=0 &\text{and}&&R_{\alpha}(+L/2)\bm{v}_{\mathrm{R}}=0,\qquad \forall \alpha=1,\ldots,q.\label{eq:qropenbc}\end{aligned}$$ A more detailed discussion of these conditions is presented in Ref. [@qgp], where a partial differential equation for the evolution of $Q$ and $R_{\alpha}$ under real or imaginary time dynamics is derived. In order to solve this partial differential equation, it needs to be complemented by the proper boundary conditions as given above. Throughout the remainder of this manuscript, we assume that we are working with cMPS where the matrix functions $Q$ and $R_{\alpha}$ satisfy the required conditions.
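The boundary conditions above constrain one row and one column of the endpoint matrices. A minimal sketch (ours; the particular choice of basis vectors is arbitrary) of matrices satisfying them:

```python
import numpy as np

rng = np.random.default_rng(4)
D = 4
vL = np.zeros(D); vL[0] = 1.0    # boundary vectors for B = vR vL^dagger
vR = np.zeros(D); vR[-1] = 1.0

R_left = rng.standard_normal((D, D))
R_left[0, :] = 0.0               # enforce vL^dagger R(-L/2) = 0
R_right = rng.standard_normal((D, D))
R_right[:, -1] = 0.0             # enforce R(+L/2) vR = 0

print(np.allclose(vL @ R_left, 0.0), np.allclose(R_right @ vR, 0.0))  # True True
```

In a basis where $\bm{v}_{\mathrm{L}}$ and $\bm{v}_{\mathrm{R}}$ are unit vectors, the conditions simply zero out the corresponding row of $R_{\alpha}(-L/2)$ and column of $R_{\alpha}(+L/2)$.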
We now also have to discuss whether we can completely fix the boundary matrix $B$, or whether its entries should be included within the set of variational parameters. While $B={\openone}_{D}$ represents a fixed choice that is well-suited for the case of periodic boundary conditions, we will see in Section \[s:gauge\] that it is beneficial to include one of the two boundary vectors $\bm{v}_{\mathrm{L}}$ or $\bm{v}_{\mathrm{R}}$ in the set of variational parameters in the case of open boundary conditions. In order to have a uniform notation, we do not explicitly denote this dependence in the notation for the state $\ket{\Psi[Q,\{R_\alpha\}]}$. Note that it is impossible to absorb the boundary vectors into the matrices $Q(-L/2)$, $R_{\alpha}(-L/2)$ and $Q(L/2)$, $R_{\alpha}(L/2)$ in the case of open boundary conditions. More generally, unlike in the case of generic MPS on finite lattices, it is impossible for cMPS to use a space-dependent bond dimension $D(x)$, since the required continuity of $D$ in combination with its discrete character enforces a constant value.
Computation of expectation values {#s:expectval}
=================================
This section is concerned with the computation of expectation values of normally ordered operators. We have already illustrated how to act with annihilation operators and derivatives thereof in Section \[s:regularity\]. With a MPS, the computation of expectation values boils down to a contraction of the physical indices in the network. In the continuum, however, the intuitive notion of physical indices is lost. We therefore start by computing the overlap of two cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$, $\ket{\Psi[Q',\{R_{\alpha}'\}]}$, which are given as an expansion in Fock space \[Eq. \]. It is clear that the basis states ${\ensuremath{\hat{\psi}^\dagger}}_{\alpha_1}(x_1)\cdots {\ensuremath{\hat{\psi}^\dagger}}_{\alpha_N}(x_N)\ket{\Omega}$ are automatically orthogonal for different $N$, and further that $$\begin{gathered}
\braket{\Omega|{\ensuremath{\hat{\psi}}}_{\beta_N}(y_N)\cdots {\ensuremath{\hat{\psi}}}_{\beta_1}(y_1){\ensuremath{\hat{\psi}^\dagger}}_{\alpha_1}(x_1)\cdots {\ensuremath{\hat{\psi}^\dagger}}_{\alpha_N}(x_N)|\Omega}=\\
\delta_{\alpha_1,\beta_1}\cdots \delta_{\alpha_N,\beta_N} \delta(x_1-y_1)\cdots \delta(x_N-y_N),\end{gathered}$$ due to the ordering of the arguments $x_1\leq \cdots \leq x_N$ and $y_1\leq \cdots \leq y_N$. We thus obtain $$\begin{gathered}
\braket{\Psi[Q',\{R'_{\alpha}\}]|\Psi[Q,\{R_{\alpha}\}]}=\sum_{N=0}^{+\infty}\sum_{\{\alpha_1,\ldots,\alpha_N\}=1}^{q} \int_{-L/2\leq x_1\leq \cdots \leq x_N\leq +L/2} {\ensuremath{\mathrm{d}}}x_1\cdots {\ensuremath{\mathrm{d}}}x_N\\
\operatorname{tr}\left[B {\ensuremath{\mathscr{P}\exp}}\left(\int_{-L/2}^{x_1} Q(z)\,{\ensuremath{\mathrm{d}}}z\right) R_{\alpha_1}(x_1)\cdots R_{\alpha_N}(x_N) {\ensuremath{\mathscr{P}\exp}}\left(\int_{x_N}^{+L/2} Q(z)\,{\ensuremath{\mathrm{d}}}z\right)\right]\\
\times \operatorname{tr}\left[\overline{B} {\ensuremath{\mathscr{P}\exp}}\left(\int_{-L/2}^{x_1} \overline{Q'(z)}\,{\ensuremath{\mathrm{d}}}z\right) \overline{R'_{\alpha_1}(x_1)}\cdots \overline{R'_{\alpha_N}(x_N)} {\ensuremath{\mathscr{P}\exp}}\left(\int_{x_N}^{+L/2} \overline{Q'(z)}\,{\ensuremath{\mathrm{d}}}z\right)\right].\end{gathered}$$ Using trivial direct product identities such as $\operatorname{tr}[A]\operatorname{tr}[B]=\operatorname{tr}[A\otimes B]$, $(AB)\otimes (CD)=(A\otimes C)(B\otimes D)$ and $\exp(A)\otimes \exp(B)=\exp(A\otimes {\openone}_D+ {\openone}_D \otimes B)$ for $D\times D$ matrices $A$, $B$, $C$ and $D$, the previous expression can be rewritten as $$\begin{gathered}
\braket{\Psi[Q',\{R'_{\alpha}\}]|\Psi[Q,\{R_{\alpha}\}]}=\sum_{N=0}^{+\infty}\sum_{\{\alpha_1,\ldots,\alpha_N\}=1}^{q} \int_{-L/2\leq x_1\leq \cdots \leq x_N\leq +L/2} {\ensuremath{\mathrm{d}}}x_1\cdots {\ensuremath{\mathrm{d}}}x_N\\
\operatorname{tr}\Bigg[(B\otimes \overline{B}) {\ensuremath{\mathscr{P}\exp}}\left(\int_{-L/2}^{x_1} [Q(z)\otimes {\openone}_D+{\openone}_D \otimes \overline{Q'(z)}]\,{\ensuremath{\mathrm{d}}}z\right) (R_{\alpha_1}(x_1)\otimes \overline{R'_{\alpha_1}(x_1)})\cdots \\
(R_{\alpha_N}(x_N)\otimes \overline{R'_{\alpha_N}(x_N)}){\ensuremath{\mathscr{P}\exp}}\left(\int_{x_N}^{+L/2} [Q(z)\otimes {\openone}_D+{\openone}_D\otimes \overline{Q'(z)}]\,{\ensuremath{\mathrm{d}}}z\right)\Bigg].\end{gathered}$$ Reverting the expansion of the path ordered exponential that led to Eq. , results in $$\begin{gathered}
\braket{\Psi[Q',\{R'_{\alpha}\}]|\Psi[Q,\{R_{\alpha}\}]}=\\
\operatorname{tr}\Bigg[(B\otimes \overline{B}) {\ensuremath{\mathscr{P}\exp}}\left(\int_{-L/2}^{+L/2} [Q(x)\otimes {\openone}_D+{\openone}_D \otimes \overline{Q'(x)}+\sum_{\alpha=1}^{q} R_{\alpha}(x)\otimes \overline{R'_{\alpha}(x)}]\,{\ensuremath{\mathrm{d}}}x\right) \Bigg].\end{gathered}$$
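The direct-product identities invoked in this derivation are easy to confirm numerically. The following sketch uses random sample matrices (purely illustrative), with $E$ as the fourth matrix name to avoid a clash with the bond dimension $D$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
D = 3
A, B, C, E = (rng.standard_normal((D, D)) for _ in range(4))
I = np.eye(D)

# tr[A] tr[B] = tr[A (x) B]
assert np.isclose(np.trace(A) * np.trace(B), np.trace(np.kron(A, B)))

# Mixed-product identity: (A B) (x) (C E) = (A (x) C)(B (x) E)
assert np.allclose(np.kron(A @ B, C @ E), np.kron(A, C) @ np.kron(B, E))

# exp(A) (x) exp(B) = exp(A (x) 1 + 1 (x) B)
assert np.allclose(np.kron(expm(A), expm(B)), expm(np.kron(A, I) + np.kron(I, B)))
```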
From the expression above, we can deduce that in the computation of expectation values ($Q'=Q$, $R_\alpha'=R_\alpha$) a central role is played by the local transfer matrix ${\ensuremath{\mathbb{T}}}(x)$ defined as $${\ensuremath{\mathbb{T}}}(x)=Q(x)\otimes {\openone}_{D}+{\openone}_{D}\otimes \overline{Q(x)} + \sum_{\alpha=1}^{q} R_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}.\label{eq:transferoperator}$$ To this transfer matrix, we can also associate linear maps $\mathscr{T}^{(x)}:\operatorname{\mathbb{L}}(\mathbb{C}^{D})\mapsto \operatorname{\mathbb{L}}(\mathbb{C}^{D})$ and $\widetilde{\mathscr{T}}^{(x)}:\operatorname{\mathbb{L}}(\mathbb{C}^{D})\mapsto \operatorname{\mathbb{L}}(\mathbb{C}^{D})$ that map virtual operators $f$ ($D\times D$ matrices) to $$\begin{aligned}
\mathscr{T}^{(x)}(f) &= Q(x) f + f Q(x)^{\dagger}+ \sum_{\alpha=1}^{q} R_{\alpha}(x) f R_{\alpha}(x)^{\dagger},\\
\widetilde{\mathscr{T}}^{(x)}(f) &= f Q(x) + Q(x)^{\dagger}f+ \sum_{\alpha=1}^{q} R_{\alpha}(x)^{\dagger} f R_{\alpha}(x).\end{aligned}$$
The transfer matrix ${\ensuremath{\mathbb{T}}}(x)$ is of course strongly related to the transfer matrix ${\ensuremath{\mathbb{E}}}(n)=\sum_{s} A^s(n)\otimes \overline{A}^s(n)$ that features in expectation values with respect to MPS on the lattice. Indeed, if $\ket{\Psi[A]}$ is the MPS with matrices $A$ as in Eq. , then the transfer operator ${\ensuremath{\mathbb{T}}}(x)$ is related to the transfer operator ${\ensuremath{\mathbb{E}}}(n)$ of the MPS $\ket{\Psi[A]}$ by ${\ensuremath{\mathbb{E}}}(n)={\ensuremath{\mathbb{{\openone}}}}+a {\ensuremath{\mathbb{T}}}(na)+\operatorname{\mathscr{O}}(a^{2})$.
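The relation between the $D^{2}\times D^{2}$ transfer matrix and the map $\mathscr{T}^{(x)}$ can be made concrete in a small numerical sketch. Assuming row-major vectorization of the virtual operator $f$ and random sample matrices $Q$ and $R_{\alpha}$ (purely illustrative), acting with ${\ensuremath{\mathbb{T}}}$ on $\operatorname{vec}(f)$ reproduces $\mathscr{T}(f)$:

```python
import numpy as np

rng = np.random.default_rng(2)
D, q = 3, 2
Q = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
Rs = [rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)) for _ in range(q)]
I = np.eye(D)

# D^2 x D^2 transfer matrix  T = Q (x) 1 + 1 (x) conj(Q) + sum_a R_a (x) conj(R_a)
T = np.kron(Q, I) + np.kron(I, Q.conj()) + sum(np.kron(R, R.conj()) for R in Rs)

def Tmap(f):
    """The map f -> Q f + f Q^dag + sum_a R_a f R_a^dag."""
    return Q @ f + f @ Q.conj().T + sum(R @ f @ R.conj().T for R in Rs)

# With row-major vectorization, (A (x) conj(B)) vec(f) = vec(A f B^dag),
# so the matrix T acting on vec(f) agrees with the map Tmap.
f = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
assert np.allclose((T @ f.reshape(-1)).reshape(D, D), Tmap(f))
```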
The expectation value of any normally ordered operator ${\ensuremath{\hat{O}}}=:O[\{{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}\},\{{\ensuremath{\hat{\psi}}}_{\beta}\}]:$ can now be computed by first acting with all annihilation operators ${\ensuremath{\hat{\psi}}}_{\alpha}(x)$ on the ket $\ket{\Psi[Q,\{R_{\beta}\}]}$ as we did in Section \[s:regularity\], and similarly acting with the creation operators on the bra. The result of this is the insertion of some operators acting on the virtual system at the corresponding positions, with operators ${\ensuremath{\hat{U}}}(x,y)$, ${\ensuremath{\hat{U}}}_{\alpha}(x,y)$ or ${\ensuremath{\hat{U}}}_{\alpha,\beta}(x,y)$ connecting them. The expectation value is obtained by “contracting the physical indices”, which results in the inserted virtual operators in the ket combining with those in the bra at the same position[^2], whereas the contraction of the part in between the local insertions results in a path ordered exponential of the transfer matrix. However, to incorporate the particle statistics, we also need to define generalized transfer operators as $$\begin{aligned}
{\ensuremath{\mathbb{T}}}_{\alpha}(x)&=Q(x)\otimes {\openone}_{D}+{\openone}_{D}\otimes \overline{Q(x)} + \sum_{\beta=1}^{q} \eta_{\alpha,\beta} R_{\beta}(x)\otimes \overline{R_{\beta}(x)},\\
{\ensuremath{\mathbb{T}}}_{\alpha,\beta}(x)&=Q(x)\otimes {\openone}_{D}+{\openone}_{D}\otimes \overline{Q(x)} + \sum_{\gamma=1}^{q} \eta_{\alpha,\gamma}\eta_{\beta,\gamma} R_{\gamma}(x)\otimes \overline{R_{\gamma}(x)}.\end{aligned}$$ Note that ${\ensuremath{\mathbb{T}}}_{\alpha,\alpha}(x)={\ensuremath{\mathbb{T}}}(x)$ since $\eta_{\alpha,\beta}^{2}=1$. Given this recipe we can, for example, evaluate the correlation function $$\begin{gathered}
G^{\alpha,\beta}(x,y)=\braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)|\Psi[Q,\{R_{\alpha}\}]}\\
=\theta(x-y)\operatorname{tr}\bigg[\big(B\otimes \overline{B}\big)\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{-L/2}^{y} {\ensuremath{\mathbb{T}}}_{\alpha,\beta}(z)\,{\ensuremath{\mathrm{d}}}z} \big(R_{\beta}(y)\otimes{\openone}_{D}\big) \mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{y}^{x}{\ensuremath{\mathbb{T}}}_{\alpha}(z)\,{\ensuremath{\mathrm{d}}}z}\\
\shoveright{\times\big({\openone}_{D}\otimes \overline{R_{\alpha}(x)}\big)\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{x}^{+L/2}{\ensuremath{\mathbb{T}}}(z)\,{\ensuremath{\mathrm{d}}}z}\bigg]\ }\\
+\theta(y-x)\operatorname{tr}\bigg[\big(B\otimes \overline{B}\big)\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{-L/2}^{+x} {\ensuremath{\mathbb{T}}}_{\alpha,\beta}(z)\,{\ensuremath{\mathrm{d}}}z} \big({\openone}_{D}\otimes \overline{R_{\alpha}(x)}\big) \mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{x}^{y}{\ensuremath{\mathbb{T}}}_{\beta}(z)\,{\ensuremath{\mathrm{d}}}z}\\
\times\big(R_{\beta}(y)\otimes{\openone}_{D}\big)\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{y}^{+L/2}{\ensuremath{\mathbb{T}}}(z)\,{\ensuremath{\mathrm{d}}}z}\bigg].\label{eq:corrfungeneric}\end{gathered}$$ All quantities in this expression, if we could store and manipulate variables with a fully continuous $x$-dependence, are $D^{2}\times D^{2}$ matrices. Since such matrices need to be multiplied, this is an operation with computational complexity of $\operatorname{\mathscr{O}}(D^{6})$, or $\operatorname{\mathscr{O}}(D^{5})$ if we exploit the tensor-product structure.
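The $\operatorname{\mathscr{O}}(D^{5})$ scaling mentioned above comes from never forming $A\otimes B$ explicitly: each column $\operatorname{vec}(X)$ of a $D^{2}\times D^{2}$ matrix satisfies $(A\otimes B)\operatorname{vec}(X)=\operatorname{vec}(AXB^{T})$, at a cost of $\operatorname{\mathscr{O}}(D^{3})$ per column. A sketch with random sample matrices (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
D = 4
A = rng.standard_normal((D, D))
B = rng.standard_normal((D, D))
M = rng.standard_normal((D * D, D * D))

# Naive: build the D^2 x D^2 matrix A (x) B explicitly and multiply: O(D^6).
naive = np.kron(A, B) @ M

# Tensor structured: every column vec(X) of M satisfies
# (A (x) B) vec(X) = vec(A X B^T), i.e. O(D^3) per column, O(D^5) in total.
M4 = M.reshape(D, D, D * D)                    # split the row index of M
tmp = np.einsum('ik,klc->ilc', A, M4)          # O(D^5)
fast = np.einsum('jl,ilc->ijc', B, tmp).reshape(D * D, D * D)  # O(D^5)

assert np.allclose(naive, fast)
```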
For physical systems, we can further simplify Eq. . When only bosonic particle species are present, all $\eta_{\alpha,\beta}=1$ and ${\ensuremath{\mathbb{T}}}={\ensuremath{\mathbb{T}}}_{\alpha}={\ensuremath{\mathbb{T}}}_{\alpha,\beta}$. If fermionic particle species are present, we should incorporate the $\mathbb{Z}_{2}$ parity symmetry discussed in Section \[s:regularity\]. We can then define an idempotent parity superoperator ${\ensuremath{\mathbb{P}}}=P\otimes \overline{P}$ and we obtain ${\ensuremath{\mathbb{P}}}{\ensuremath{\mathbb{T}}}{\ensuremath{\mathbb{P}}}={\ensuremath{\mathbb{T}}}$, as well as ${\ensuremath{\mathbb{P}}}{\ensuremath{\mathbb{T}}}_{\alpha}{\ensuremath{\mathbb{P}}}={\ensuremath{\mathbb{T}}}_{\alpha}$ and ${\ensuremath{\mathbb{P}}}{\ensuremath{\mathbb{T}}}_{\alpha,\beta}{\ensuremath{\mathbb{P}}}={\ensuremath{\mathbb{T}}}_{\alpha,\beta}$. This allows us to conclude that $\braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)|\Psi[Q,\{R_{\alpha}\}]}=0$ whenever the particle species $\alpha$ and $\beta$ have different statistics. When $\alpha$ and $\beta$ are both bosonic or both fermionic, it is clear that ${\ensuremath{\mathbb{T}}}_{\alpha,\beta}={\ensuremath{\mathbb{T}}}$ and ${\ensuremath{\mathbb{T}}}_{\alpha}={\ensuremath{\mathbb{T}}}_{\beta}$.
In the case of open boundary conditions, we can introduce virtual density matrices $l(x),r(x)\in\operatorname{\mathbb{L}}(\mathbb{C}^{D})$, defined through the initial conditions $l(-L/2)=\bm{v}_{\mathrm{L}}\bm{v}_{\mathrm{L}}^{\dagger}$ and $r(+L/2)=\bm{v}_{\mathrm{R}}\bm{v}_{\mathrm{R}}^{\dagger}$ and the first order differential equations $$\begin{aligned}
\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}l(x) &=\widetilde{\mathscr{T}}^{(x)}\big(l(x)\big),&\text{and}&&\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}r(x) &=-\mathscr{T}^{(x)}\big(r(x)\big).\label{eq:virtualdensitymatrix}\end{aligned}$$ To these density matrices $l(x)$ and $r(x)$ we associate vectors ${\ensuremath{|l(x))}},{\ensuremath{|r(x))}}\in\mathbb{C}^{D}\otimes\overline{\mathbb{C}^{D}}$ in the ancilla product space. Formally, the solution is given by $$\begin{aligned}
{\ensuremath{(l(x)|}}&={\ensuremath{(l(-L/2)|}}\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{-L/2}^{x}{\ensuremath{\mathbb{T}}}(y)\,{\ensuremath{\mathrm{d}}}y},\\
{\ensuremath{|r(x))}}&=\mathscr{P}{\ensuremath{\mathrm{e}}}^{\int_{x}^{+L/2}{\ensuremath{\mathbb{T}}}(y)\,{\ensuremath{\mathrm{d}}}y}{\ensuremath{|r(+L/2))}}.\end{aligned}$$ We can then write $$\begin{aligned}
\braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|\Psi[Q,\{R_{\alpha}\}]}&={\ensuremath{\left(l(-L/2)\middle\vert {\ensuremath{\mathscr{P}\exp}}\left[\int_{-L/2}^{+L/2} {\ensuremath{\mathbb{T}}}(x)\,{\ensuremath{\mathrm{d}}}x\right]\middle\vert r(+L/2)\right)}}\nonumber\\
&={\ensuremath{\left(l(x)|r(x)\right)}}=\operatorname{tr}\left[l(x) r(x)\right], \quad \forall x \in {\ensuremath{\mathcal{R}}}.\end{aligned}$$ From the correspondence with completely positive maps, it can be shown that the solutions $l(x)$ and $r(x)$ of Eq. , starting from positive definite initial conditions $l(-L/2)$ and $r(+L/2)$, are positive for any $x\in\mathcal{R}$ (see Theorem 3 in Ref. ). The norm is thus guaranteed to be positive. Note that, for the special parameterization of $Q(x)$ in the continuous measurement interpretation \[Eq. \], we can write the determining differential equation for $r(x)$ as $$\begin{gathered}
\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}r(x)=-\mathscr{T}^{(x)}\big(r(x)\big)=\\
-{\ensuremath{\mathrm{i}}}[K(x), r(x)] -\frac{1}{2}\sum_{\alpha=1}^{q} \{R_{\alpha}(x)^{\dagger}R_{\alpha}(x),r(x)\} +\sum_{\alpha=1}^{q}R_{\alpha}(x) r(x) R_{\alpha}(x)^{\dagger}.\end{gathered}$$ This is a master equation in Lindblad form [@1976CMaPh..48..119L] describing the non-equilibrium Markov dynamics of the ancilla (*i.e.* the cavity). Starting from a pure state $r(L/2)=\bm{v}_{\mathrm{R}}\bm{v}_{\mathrm{R}}^{\dagger}$ at $t=-x=-L/2$, it evolves through interaction with the physical system (via the interaction operators $R_{\alpha}$). At a general time $t=-x$, the density matrix $r(x)$ is no longer pure: non-equilibrium evolution is a dissipative process. Note that the evolution is trace preserving, since tracing the equation above results in ${\ensuremath{\mathrm{d}}}\operatorname{tr}[r(x)] /{\ensuremath{\mathrm{d}}}x=0$. In addition, the corresponding map $\widetilde{\mathscr{T}}^{(x)}$ satisfies $\widetilde{\mathscr{T}}^{(x)}({\openone}_{D})=0$.
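Both the trace preservation of the Lindblad evolution and the $x$-independence of the overlap $\operatorname{tr}[l(x)r(x)]$ can be checked with a simple explicit Euler integration. The sketch below assumes $x$-independent matrices in the continuous-measurement form and hypothetical boundary vectors; it is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
D, L, n = 3, 1.0, 4000
dx = L / n

# Hypothetical x-independent matrices in the continuous-measurement form,
# Q = -iK - (1/2) R^dag R with K Hermitian.
K = rng.standard_normal((D, D)); K = 0.5 * (K + K.T)
R = 0.5 * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
Q = -1j * K - 0.5 * R.conj().T @ R

def Tmap(f):        # dr/dx = -Tmap(r)
    return Q @ f + f @ Q.conj().T + R @ f @ R.conj().T

def Ttilmap(f):     # dl/dx = +Ttilmap(l)
    return f @ Q + Q.conj().T @ f + R.conj().T @ f @ R

e0 = np.zeros(D); e0[0] = 1.0
l = np.outer(e0, e0).astype(complex)     # l(-L/2) = vL vL^dag
r = np.outer(e0, e0).astype(complex)     # r(+L/2) = vR vR^dag

# Integrate r backwards from +L/2 with explicit Euler steps; the Lindblad
# structure makes this evolution trace preserving.
rs = [r]
for _ in range(n):
    r = r + dx * Tmap(r)                 # step towards smaller x
    rs.append(r)                         # rs[j] ~ r(L/2 - j*dx)
trace_drift = abs(np.trace(rs[-1]) - np.trace(rs[0]))

# Integrate l forwards and check that tr[l(x) r(x)] is independent of x.
overlaps = []
for k in range(n + 1):
    overlaps.append(np.trace(l @ rs[n - k]).real)
    l = l + dx * Ttilmap(l)
spread = (max(overlaps) - min(overlaps)) / max(abs(o) for o in overlaps)
print(trace_drift, spread)   # both small (Euler discretization error only)
```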
In systems which only contain bosons, all $\eta_{\alpha,\beta}=1$ and there is no need to introduce ${\ensuremath{\mathbb{T}}}_{\alpha}(x)$, ${\ensuremath{\mathbb{T}}}_{\alpha,\beta}(x)$, etc. As an alternative to the general recipe described above, we can then also deduce all expectation values of normally ordered operators ${\ensuremath{\hat{O}}}=:O[\{{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}\},\{{\ensuremath{\hat{\psi}}}_{\alpha}\}]:$ from a generating functional $Z[\{\overline{J}_{\alpha}\},\{J_{\alpha}\}]$ as (see Ref. ) $$\begin{gathered}
\braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|:O[\{{\ensuremath{\hat{\psi}^\dagger}}_{\beta}\},\{{\ensuremath{\hat{\psi}}}_{\beta}\}]: |\Psi[Q,\{R_{\alpha}\}]}=\\
O\left[\bigg\{\frac{\delta\ }{\delta \overline{J}_{\beta}}\bigg\},\bigg\{\frac{\delta\ }{\delta J_{\beta}}\bigg\}\right]Z[\{\overline{J}_{\alpha}\},\{J_{\alpha}\}]\bigg|_{\overline{J}_{\alpha},J_{\alpha}=0}\label{eq:expecrule}\end{gathered}$$ with $\delta\ /\delta J_{\alpha}$ the functional derivative with respect to $J_{\alpha}$, and $$\begin{gathered}
Z[\{\overline{J}_{\alpha}\},\{J_{\alpha}\}]=\operatorname{tr}\Bigg[\big(B\otimes\overline{B}\big) {\ensuremath{\mathscr{P}\exp}}\bigg\{\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,{\ensuremath{\mathbb{T}}}(x)\\
+ \sum_{\alpha=1}^{q}J_{\alpha}(x)[R_{\alpha}(x)\otimes 1_{D}] +\overline{J}_{\alpha}(x) [1_{D}\otimes \overline{R_{\alpha}(x)}] \bigg\}\Bigg],\label{eq:genfunc}\end{gathered}$$ which for a system with open boundary conditions results in $$\begin{gathered}
Z[\{\overline{J}_{\alpha}\},\{J_{\alpha}\}]=\Bigg(l(-L/2)\Bigg\vert{\ensuremath{\mathscr{P}\exp}}\bigg\{\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,{\ensuremath{\mathbb{T}}}(x) \\
+ \sum_{\alpha=1}^{q}J_{\alpha}(x)[R_{\alpha}(x)\otimes 1_{D}] +\overline{J}_{\alpha}(x) [1_{D}\otimes \overline{R_{\alpha}(x)}] \bigg\}\Bigg\vert r(+L/2)\Bigg).\label{eq:genfuncopen}\end{gathered}$$
Let us now illustrate this approach by defining a generic Hamiltonian for a system with a single bosonic particle species and open boundary conditions[^3] $$\begin{gathered}
{\ensuremath{\hat{H}}}={\ensuremath{\hat{T}}}+{\ensuremath{\hat{V}}}+{\ensuremath{\hat{W}}}=\\
\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\frac{1}{2m} \left(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} {\ensuremath{\hat{\psi}^\dagger}}(x)\right)\left(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}}}(x)\right)+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,v(x){\ensuremath{\hat{\psi}^\dagger}}(x){\ensuremath{\hat{\psi}}}(x)\\
+\frac{1}{2}\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}y\,w(x,y) {\ensuremath{\hat{\psi}^\dagger}}(x){\ensuremath{\hat{\psi}^\dagger}}(y){\ensuremath{\hat{\psi}}}(y){\ensuremath{\hat{\psi}}}(x)
\label{eq:generichamiltonian}\end{gathered}$$ describing particles with mass $m$ that interact with an external potential $v(x)$ and with each other through two-particle interaction $w(x,y)$.
Using Eq. we find (henceforth omitting the arguments $Q$ and $R$ in the state $\ket{\Psi}$) $$\braket{\Psi|{\ensuremath{\hat{\psi}^\dagger}}(x){\ensuremath{\hat{\psi}}}(x)| \Psi}={\ensuremath{(l(x)|R(x)\otimes \overline{R}(x)|r(x))}},$$ and $$\begin{gathered}
\braket{\Psi|{\ensuremath{\hat{\psi}^\dagger}}(x){\ensuremath{\hat{\psi}^\dagger}}(y){\ensuremath{\hat{\psi}}}(y) {\ensuremath{\hat{\psi}}}(x)| \Psi}=\\
\theta(y-x){\ensuremath{(l(x)|R(x)\otimes \overline{R(x)} \mathscr{P}\mathrm{e}^{\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)} R(y)\otimes\overline{R(y)}|r(y))}}\\
+\theta(x-y){\ensuremath{(l(y)|R(y)\otimes \overline{R(y)} \mathscr{P}\mathrm{e}^{\int_{y}^{x}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}R(x)\otimes\overline{R(x)}|r(x))}}.\end{gathered}$$ Defining $R^{(l)}_{x}(x)=R(x)^{\dagger} l(x) R(x)$ for every $x\in[-L/2,+L/2]$ and solving $$\begin{aligned}
\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y} {\ensuremath{(R^{(l)}_{x}(y)|}}={\ensuremath{(R^{(l)}_{x}(y)|}}{\ensuremath{\mathbb{T}}}(y)\label{eq:defrl}\end{aligned}$$ for every $y\in [x,L/2]$, we can write the expectation value of the potential and interaction energy as $$\begin{aligned}
\braket{\Psi|{\ensuremath{\hat{V}}}|\Psi}&= \int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, v(x) {\ensuremath{(l(x)|R(x)\otimes\overline{R(x)}|r(x))}},\\\braket{\Psi|{\ensuremath{\hat{W}}}|\Psi}&= \int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\int_{x}^{+L/2}{\ensuremath{\mathrm{d}}}y\, w(x,y) {\ensuremath{(R^{(l)}_{x}(y)|R(y)\otimes\overline{R(y)}|r(y))}}.\end{aligned}$$ To evaluate the expectation value of the kinetic energy, we compute $$\begin{gathered}
\braket{\Psi|\left(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}^\dagger}}(x)\right)\left(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}}}(x)\right)|\Psi}=\lim_{x\to y} \frac{{\ensuremath{\mathrm{d}}}^{2}\ }{{\ensuremath{\mathrm{d}}}x{\ensuremath{\mathrm{d}}}y}\braket{\Psi|{\ensuremath{\hat{\psi}^\dagger}}(x){\ensuremath{\hat{\psi}}}(y)|\Psi}\\
\shoveleft{\quad=\lim_{x\to y} \frac{{\ensuremath{\mathrm{d}}}^{2}\ }{{\ensuremath{\mathrm{d}}}x{\ensuremath{\mathrm{d}}}y}\bigg[\theta(y-x){\ensuremath{(l(x)|(1_{D}\otimes \overline{R(x)})\mathscr{P}\mathrm{e}^{\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}(R(y)\otimes 1_{D})|r(y))}}}\\
\shoveright{+ \theta(x-y){\ensuremath{(l(y)|(R(y)\otimes 1_{D})\mathscr{P}\mathrm{e}^{\int_{y}^{x}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}(1_{D}\otimes \overline{R(x)})|r(x))}}\bigg]}\\
\shoveleft{\quad=\lim_{x\to y} \frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}\Bigg[\theta(y-x)\big(l(x)\big|\big(1_{D}\otimes \overline{R(x)}\big)\mathscr{P}\mathrm{e}^{\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}}\\
\shoveright{\times\bigg\{ \big[{\ensuremath{\mathbb{T}}}(y) ,R(y)\otimes 1_{D}\big] + \big({\ensuremath{\mathrm{d}}}R(y)/{\ensuremath{\mathrm{d}}}y \otimes 1_{D}\big) \bigg\}\big\vert r(y)\big)\quad}\\
\qquad+ \theta(x-y)\big(l(y)\big\vert \bigg\{ \big[{\ensuremath{\mathbb{T}}}(y),R(y)\otimes 1_{D}\big]+\big({\ensuremath{\mathrm{d}}}R(y)/{\ensuremath{\mathrm{d}}}y\otimes 1_{D}\big) \bigg\}\\
\times\mathscr{P}\mathrm{e}^{\int_{y}^{x}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}\big(1_{D}\otimes \overline{R(x)}\big)\big|r(x)\big)\Bigg].\end{gathered}$$ We have used the defining equations \[Eq. \] in the computation of ${\ensuremath{\mathrm{d}}}{\ensuremath{(l(y)|}}/{\ensuremath{\mathrm{d}}}y={\ensuremath{(l(y)|}}{\ensuremath{\mathbb{T}}}(y)$ and ${\ensuremath{\mathrm{d}}}{\ensuremath{|r(y))}}/{\ensuremath{\mathrm{d}}}y=-{\ensuremath{\mathbb{T}}}(y){\ensuremath{|r(y))}}$. Since ${\ensuremath{\mathbb{T}}}(y)=Q(y)\otimes 1_{D}+1_{D}\otimes \overline{Q(y)}+R(y)\otimes \overline{R(y)}$, we obtain $[{\ensuremath{\mathbb{T}}}(y),R(y)\otimes 1_{D}]=[Q(y),R(y)]\otimes 1_{D}$ and thus $$\begin{gathered}
\braket{\Psi|\bigg(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}^\dagger}}(x)\bigg)\bigg(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}{\ensuremath{\hat{\psi}}}(x)\bigg)|\Psi}=\\
\shoveleft{\quad\lim_{x\to y} \bigg[\theta(y-x)\big(l(x)\big\vert1_{D}\otimes \big([\overline{Q(x)},\overline{R(x)}]+{\ensuremath{\mathrm{d}}}\overline{R(x)}/{\ensuremath{\mathrm{d}}}x\big) \mathscr{P}\mathrm{e}^{\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}}\\
\shoveright{\times\big( [Q(y) ,R(y)] + {\ensuremath{\mathrm{d}}}R(y)/{\ensuremath{\mathrm{d}}}y\big) \otimes 1_{D}\big\vert r(y)\big)\quad}\\
+ \theta(x-y)\big(l(y)\big\vert \big( [Q(y),R(y)]+{\ensuremath{\mathrm{d}}}R(y)/{\ensuremath{\mathrm{d}}}y\big)\otimes 1_{D}\,\mathscr{P}\mathrm{e}^{\int_{y}^{x}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}\\
\times 1_{D}\otimes \big([\overline{Q(x)},\overline{R(x)}]+{\ensuremath{\mathrm{d}}}\overline{R(x)}/{\ensuremath{\mathrm{d}}}x\big)\big\vert r(x)\big)\bigg],\end{gathered}$$ where we used the same trick. Note that derivatives of the Heaviside functions (which would produce a diverging contribution $\delta(x-y)$) nicely cancel, both for the derivative with respect to $y$ and for that with respect to $x$. As noted in Section \[s:regularity\], the regularity condition Eq. is automatically fulfilled for the case of a single boson. We thus obtain $$\begin{gathered}
\braket{\Psi|{\ensuremath{\hat{T}}}|\Psi}= \frac{1}{2m}\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, \big(l(x)\big\vert\big([Q(x),R(x)]+{\ensuremath{\mathrm{d}}}R(x)/{\ensuremath{\mathrm{d}}}x\big)\\
\otimes\big([\overline{Q(x)},\overline{R(x)}]+{\ensuremath{\mathrm{d}}}\overline{R(x)}/{\ensuremath{\mathrm{d}}}x\big)\big\vert r(x)\big).\end{gathered}$$ Note that this result could also be obtained by the general strategy outlined at the beginning of this section, *i.e.* by acting directly on the cMPS with the operators ${\ensuremath{\hat{\psi}}}(x)$ and ${\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}(x) / {\ensuremath{\mathrm{d}}}x$ and only afterwards computing the expectation values. However, the generating functional approach is very general and relates nicely to the standard approach that is used to compute expectation values in quantum field theory. As for the definition of the state itself, we can also write the generating functional using a path integral, which can be useful for analytic computations or Monte Carlo based evaluation strategies.
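The commutator identity $[{\ensuremath{\mathbb{T}}}(y),R(y)\otimes 1_{D}]=[Q(y),R(y)]\otimes 1_{D}$ used in the kinetic-energy derivation is easy to confirm numerically: the $1_{D}\otimes\overline{Q}$ and $R\otimes\overline{R}$ parts of ${\ensuremath{\mathbb{T}}}$ commute with $R\otimes 1_{D}$. A sketch with random sample matrices (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)
D = 3
Q = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
I = np.eye(D)

# Transfer matrix T = Q (x) 1 + 1 (x) conj(Q) + R (x) conj(R)
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())
comm = lambda X, Y: X @ Y - Y @ X

# Only the Q (x) 1 part fails to commute with R (x) 1:
assert np.allclose(comm(T, np.kron(R, I)), np.kron(comm(Q, R), I))
```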
Gauge invariance {#s:gauge}
================
As with a MPS, the map $\Psi$ associating a physical state $\ket{\Psi[Q,\{R_{\alpha}\}]}\in {\ensuremath{{\ensuremath{\mathbb{H}}}}}_{{\ensuremath{\mathcal{R}}}}^{(\mathrm{F})}$ to the matrix functions $Q:{\ensuremath{\mathcal{R}}}\to \mathbb{C}^{D\times D}$ and $R_{\alpha}:{\ensuremath{\mathcal{R}}}\to\mathbb{C}^{D\times D}$ is not injective, *i.e.* the representation is not unique. For MPS, this so-called *gauge invariance* was rigorously discussed in terms of principal fibre bundles in Ref. . Such a rigorous treatment for the case of cMPS is severely complicated by the fact that both the domain and the codomain of the map $\Psi$ are now infinite dimensional. Therefore, it is beyond the scope of the current manuscript, as noted in the introduction. We thus proceed in an intuitive way.
We do expect the existence of a local gauge transformation $g:{\ensuremath{\mathcal{R}}}\to\mathsf{GL}(D,\mathbb{C})$, *i.e.* a position-dependent invertible matrix $g(x)$, that acts on the matrices $Q(x)$ and $R_{\alpha}(x)$ while leaving the physical state $\ket{\Psi[Q,\{R_{\alpha}\}]}$ invariant. While it is hard to extract the correct transformation formulas for $Q$ and $R_{\alpha}$ from the original cMPS definition in Eq. , readers with a background in Yang-Mills gauge theories might recognise $Q$ as the connection that generates parallel transport. This can be seen by comparing the $N$-particle wave functions of the Fock space embedding \[Eq. \] to Wilson lines with insertions of charges transforming according to the adjoint representation, or by recognizing the action of the path integral formulation \[Eq. \] as a Yang-Mills action with a covariant derivative $\frac{\mathrm{d}\ }{\mathrm{d} x} + Q(x)$. The gauge transformation for a cMPS is thus given by $$\begin{aligned}
\tilde{Q}(x)&=g(x)^{-1} Q(x) g(x)+ g(x)^{-1} \frac{{\ensuremath{\mathrm{d}}}g}{{\ensuremath{\mathrm{d}}}x}(x) ,&\tilde{R}_{\alpha}(x)&=g(x)^{-1} R_{\alpha}(x) g(x).\label{eq:gaugetransform}\end{aligned}$$ While we prefer the continuum derivation, these transformation formulas can also be obtained by using the correspondence with MPS \[Eq. \] and the well-known gauge transformations for MPS [@Haegeman:fk] $$\begin{aligned}
\tilde{A}^{0}(n)&=g((n-1)a)^{-1} A^{0}(n) g(na)\\
&=g((n-1)a)^{-1}g(n a)+a g((n-1)a)^{-1}Q(na)g(na)\\
&={\openone}_{D}+a\left[-\frac{{\ensuremath{\mathrm{d}}}g^{-1}}{{\ensuremath{\mathrm{d}}}x}(na) g(n a) + g(na)^{-1} Q(na) g(na)\right]+\operatorname{\mathscr{O}}(a^{2}),\\
\tilde{A}^{\alpha}(n) &= g((n-1)a)^{-1} A^{\alpha}(n) g(na)\\
&=\sqrt{a} g(na)^{-1} R_{\alpha}(n a)g(na)+\operatorname{\mathscr{O}}(a^{3/2}),\\
\tilde{A}^{(\alpha,\beta)}(n) &= g((n-1)a)^{-1} A^{(\alpha,\beta)}(n)g(na)\\
&=\begin{cases} \frac{a}{2} [ \tilde{R}_{\alpha}(n a) \tilde{R}_{\beta}(n a)+\eta_{\alpha,\beta} \tilde{R}_{\beta}(n a) \tilde{R}_{\alpha}(n a)]+\operatorname{\mathscr{O}}(a^{2}),& \alpha\neq \beta\\
\frac{a}{2} \tilde{R}_{\alpha}(n a)^{2}+\operatorname{\mathscr{O}}(a^{2}),&\alpha=\beta
\end{cases}\\
&\ldots\nonumber\end{aligned}$$ Indeed, using ${\ensuremath{\mathrm{d}}}g^{-1}(x) /{\ensuremath{\mathrm{d}}}x\, g(x) = - g^{-1}(x) {\ensuremath{\mathrm{d}}}g(x)/ {\ensuremath{\mathrm{d}}}x$, we reproduce the transformation formulas of Eq. . To have an invariant physical state $\ket{\Psi[Q,\{R_{\alpha}\}]}=\ket{\Psi[\tilde{Q},\{\tilde{R}_{\alpha}\}]}$, we also need to transform the boundary matrix as $\tilde{B}=g(L/2)^{-1} B g(-L/2)$. When $B$ is fixed, we need to restrict to gauge transformations that satisfy the boundary condition $g(L/2)^{-1} B g(-L/2)=B$ (*e.g.* $g(L/2)=g(-L/2)$ for $B={\openone}_D$). In addition, we also require the function $g:{\ensuremath{\mathcal{R}}}\to\mathsf{GL}(D,\mathbb{C})$ to be twice differentiable in order to obtain new matrix functions $\tilde{Q}(x)$ and $\tilde{R}_{\alpha}(x)$ with a well-defined first order derivative. The regularity condition of Eq. is not modified by the gauge transformation and puts no further constraints on the set of allowed gauge transformations. Since this condition follows from physical considerations which are left invariant by gauge transformations, it would be strange if we obtained a different result.
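As a minimal consistency check of the transformation formulas, consider the special case of an $x$-independent $g$, for which the derivative term drops out. The transformed matrices then generate a transfer matrix that is an exact similarity transform of the original one, so that all spectral data are preserved. A sketch with random sample matrices (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
D = 3
Q = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
g = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))  # generic invertible
ginv = np.linalg.inv(g)

# x-independent gauge transformation (dg/dx = 0):
Qt, Rt = ginv @ Q @ g, ginv @ R @ g

I = np.eye(D)
def transfer(Q, R):
    return np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())

# The transformed transfer matrix equals (g (x) conj(g))^{-1} T (g (x) conj(g)),
# a similarity transform, so its spectrum is unchanged.
G = np.kron(g, g.conj())
assert np.allclose(transfer(Qt, Rt), np.linalg.inv(G) @ transfer(Q, R) @ G)
```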
As for MPS, we can use the gauge fixing conditions to impose a certain canonical form on the matrices $Q(x)$ and $R_{\alpha}(x)$. Suppose we want to impose a gauge fixing condition such that $\tilde{Q}(x)$ is of the form in Eq. , corresponding to the cMPS construction from continuous measurement. It is equivalent to the *left orthonormalization condition* of MPS and boils down to imposing $$\tilde{Q}(x)+\tilde{Q}(x)^\dagger +\sum_{\alpha=1}^{q} \tilde{R}_{\alpha}(x)^\dagger \tilde{R}_{\alpha}(x)=0$$ for every $x\in\mathcal{R}$. Inserting the explicit form of $\tilde{Q}(x)$ and $\tilde{R}_{\alpha}(x)$ in terms of the original $Q(x)$, $R_{\alpha}(x)$ and $g(x)$ \[Eq. \], we obtain that $g(x)$ should be a solution of the differential equation $$\begin{gathered}
\begin{split}
\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} \left[ \left(g^{-1}(x)\right)^\dagger g^{-1}(x)\right]&= \left(g^{-1}(x)\right)^\dagger g^{-1}(x) Q(x) + Q(x)^\dagger \left(g^{-1}(x)\right)^\dagger g^{-1}(x)\\
&\qquad\qquad+\sum_{\alpha=1}^{q} R_{\alpha}(x)^\dagger \left(g^{-1}(x)\right)^\dagger g^{-1}(x) R_{\alpha}(x)\\
&=\widetilde{\mathscr{T}}^{(x)}\left[\left(g^{-1}(x)\right)^\dagger g^{-1}(x)\right].
\end{split}\end{gathered}$$ Clearly, this differential equation only determines $g(x)$ up to a local unitary factor. Put differently, for any solution $g(x)$ of this equation, $g'(x)=g(x)u(x)$ with $u(x)$ a unitary matrix is an equally valid solution, since $\left(g'^{-1}(x)\right)^{\dagger}g'^{-1}(x)=\left(g^{-1}(x)\right)^{\dagger}g^{-1}(x)$. We can use the remaining gauge freedom $u(x)\in\mathsf{U}(D)$ to diagonalize $r(x)$ at every point $x$, hence obtaining the *left-canonical form*.
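A numerical sketch of this gauge-fixing procedure, under the simplifying assumptions of $x$-independent $Q$ and $R$ and initial condition $h(0)={\openone}_{D}$ for $h(x)=\left(g^{-1}(x)\right)^{\dagger}g^{-1}(x)$: the differential equation is then solved exactly by a matrix exponential, a particular $g(x)$ is extracted from the Cholesky factor of $h(x)$, and the left orthonormalization condition is verified up to finite-difference error (all matrices are random illustrative samples):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(8)
D = 3
Q = 0.5 * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
R = 0.5 * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
I = np.eye(D)

# Matrix of the adjoint map T~(f) = f Q + Q^dag f + R^dag f R acting on
# row-major vec(f), using vec(A f B) = (A kron B^T) vec(f).
Ttil = np.kron(I, Q.T) + np.kron(Q.conj().T, I) + np.kron(R.conj().T, R.T)

def g_of_x(x):
    """Gauge transform from h(x) = (g^{-1})^dag g^{-1}, solved exactly for
    x-independent Q, R with h(0) = 1, via the Cholesky factor h = L L^dag."""
    h = (expm(x * Ttil) @ I.reshape(-1)).reshape(D, D)
    h = (h + h.conj().T) / 2          # enforce exact hermiticity
    L = np.linalg.cholesky(h)         # lower triangular, h = L L^dag
    return np.linalg.inv(L.conj().T)  # g = (L^dag)^{-1}, so g^{-1} = L^dag

x, eps = 0.5, 1e-5
g = g_of_x(x)
dg = (g_of_x(x + eps) - g_of_x(x - eps)) / (2 * eps)   # central difference
ginv = np.linalg.inv(g)

Qt = ginv @ Q @ g + ginv @ dg         # gauge-transformed Q
Rt = ginv @ R @ g                     # gauge-transformed R
err = np.linalg.norm(Qt + Qt.conj().T + Rt.conj().T @ Rt)
print(err)   # small: limited only by the finite-difference step
```

The Cholesky factor is just one convenient choice of $g(x)$; any other factorization of $h(x)$ differs from it by the residual unitary freedom $u(x)$.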
However, at this point it becomes important to discuss the boundary conditions that should be satisfied by solutions $g(x)$. If the boundary matrix $B$ is fixed, we need to impose $g^{-1}(+L/2) B g(-L/2)=B$. This is a highly non-trivial condition and it is not certain that such solutions exist. For periodic boundary conditions with $B={\openone}_{D}$, it reduces to $g(+L/2)=g(-L/2)$. Translation-invariant states with periodic boundary conditions can be subjected to the same treatment as the translation-invariant states in the thermodynamic limit, which are discussed in the next section. Henceforth, we restrict to the case of open boundary conditions with $B=\bm{v}_{\mathrm{R}}\bm{v}_{\mathrm{L}}^{\dagger}$. From this, we can derive the conditions $$\begin{aligned}
\bm{v}_{\mathrm{L}}^{\dagger} g(-L/2) &= \alpha \bm{v}_{\mathrm{L}}^{\dagger} &g^{-1}(+L/2)\bm{v}_{\mathrm{R}}&=\frac{1}{\alpha} \bm{v}_{\mathrm{R}}\end{aligned}$$ for some non-zero $\alpha\in\mathbb{C}$. However, we can easily fix $\alpha=1$ by substituting $g(x)\leftarrow g'(x)=g(x)/\alpha$, since the constant gauge transformation $\alpha {\openone}_{D}$ acts trivially on $Q$ and $R$, *i.e.* it is within the kernel of the gauge group action. Nevertheless, the resulting boundary conditions are still highly non-trivial and it is not assured by the standard theory of differential equations that there exist solutions satisfying both conditions simultaneously. Hence, it is better to restrict to a single boundary condition such as $g(-L/2)={\openone}_{D}$ and to impose no condition on $g(+L/2)$. The value of $g(+L/2)$ is then completely determined by the differential equation (up to the unitary factor). Consequently, we then also have to transform the right boundary vector as $\tilde{\bm{v}}_{\mathrm{R}}=g^{-1}(+L/2) \bm{v}_{\mathrm{R}}$. This implies that $\bm{v}_{\mathrm{R}}$ is part of the variational degrees of freedom, and should also be included in *e.g.* the variational optimization for finding ground states. Note that the boundary conditions for $g(x)$ are inherently imposed by the representation of the state, and are not related to or influenced by the physical conditions that need to be satisfied by $Q$ and $R$, as discussed in Section \[s:bc\].
Alternatively, we can also impose the *right orthonormalization condition*, which boils down to $$\tilde{Q}(x)+\tilde{Q}(x)^{\dagger}+\sum_{\alpha=1}^{N}\tilde{R}_{\alpha}(x)\tilde{R}_{\alpha}(x)^{\dagger}=0$$ and implies that $$\tilde{Q}(x)=-{\ensuremath{\mathrm{i}}}K(x) -\frac{1}{2}\sum_{\alpha=1}^{N}\tilde{R}_{\alpha}(x)\tilde{R}_{\alpha}(x)^{\dagger}$$ with $K(x)$ a Hermitian matrix. Starting from an arbitrary cMPS with matrices $Q(x)$ and $R_{\alpha}(x)$, we obtain new matrices $\tilde{Q}(x)$ and $\tilde{R}_{\alpha}(x)$ according to Eq. , which satisfy the above relations if $g(x)$ is a solution of $$\begin{gathered}
\begin{split}
\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} \left[ g(x) g(x)^{\dagger} \right]&= -Q(x) g(x) g(x)^{\dagger} - g(x) g(x)^{\dagger} Q(x)^\dagger-\sum_{\alpha=1}^{N} R_{\alpha}(x)g(x)g(x)^{\dagger}R_{\alpha}(x)^{\dagger} \\
&=-\mathscr{T}^{(x)}\left[g(x) g(x)^\dagger\right].
\end{split}\end{gathered}$$ Clearly, for any solution $g(x)$, we obtain a family of solutions $g'(x)=g(x) u(x)$ with $u(x)\in\mathsf{U}(D)$. This unitary freedom can be fixed by diagonalizing $l(x)$, resulting in the *right-canonical form*. As for the left-canonical form, one has to pay careful attention to the boundary conditions that need to be satisfied by $g$. For a system with open boundary conditions, the easiest solution is again to include one of the boundary vectors in the set of variational parameters and to transform it as well under the action of the gauge transformation.
Note that we can also define a gauge transformation $g(x)$ for the cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}\in{\ensuremath{{\ensuremath{\mathcal{M}}}}}_{\text{cMPS}}$ so that $$\tilde{Q}(x)=g(x)^{-1} Q(x) g(x)+g(x)^{-1} \frac{{\ensuremath{\mathrm{d}}}g}{{\ensuremath{\mathrm{d}}}x}(x)=0.$$ It is sufficient to choose $$g(x)=\mathscr{P}\!\exp\left[\int^{+L/2}_{x} Q(y)\,{\ensuremath{\mathrm{d}}}y\right] g_0$$ with $g_{0}$ some arbitrary integration factor that is fixed by the boundary conditions. For example, if we require $g(-L/2)={\openone}_{D}$ then $g_0=\left(\mathscr{P}\!\exp\left[\int^{+L/2}_{-L/2} Q(y)\,{\ensuremath{\mathrm{d}}}y\right]\right)^{-1}$ and we also need to transform $\bm{v}_{\mathrm{R}}\leftarrow \bm{\tilde{v}}_{\mathrm{R}}= g(+L/2)^{-1}\bm{v}_{\mathrm{R}}=g_0^{-1}\bm{v}_{\mathrm{R}}$. Hence, the cMPS can now be written as $$\ket{\Psi[\{\tilde{R}_{\alpha}\}]}=\bm{v}_{\mathrm{L}}^{\dagger} \mathscr{P}\!\exp\left[\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, \sum_{\alpha=1}^{N}\tilde{R}_{\alpha}(x) \otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x) \right]\bm{\tilde{v}}_{\mathrm{R}}\ket{\Omega}.\label{eq:formulationlinkwithmeanfield}$$ This formulation is close in spirit to the bosonic mean field ansatz $$\ket{\varphi}=\exp\left(\int_{-L/2}^{+L/2}\varphi(x) {\ensuremath{\hat{\psi}^\dagger}}(x)\,{\ensuremath{\mathrm{d}}}x\right)\ket{\Omega}$$ with $\varphi$ a scalar (complex-valued) function, since it identifies the mean field ansatz with a cMPS with bond dimension $D=1$. This mean field ansatz lies at the basis of the Gross-Pitaevskii equation [@Gross:1961aa; @Pitaevskii:1961aa], that is still used today with great success. All variational degrees of freedom are now contained in the matrices $\tilde{R}_{\alpha}(x)$ (and $\bm{\tilde{v}}_{\mathrm{R}}$), and all gauge degrees of freedom have been eliminated. However, we do not employ this particular choice of gauge in the remainder of this manuscript as it also has some downsides. 
For example, translation-invariant states $\ket{\Psi[Q,R_{\alpha}]}$ can be obtained by choosing the matrices $Q$ and $R_{\alpha}$ $x$-independent (see next subsection). However, this particular gauge transformation maps the $x$-independent matrices $R_{\alpha}$ to $x$-dependent matrices $\tilde{R}_{\alpha}(x)={\ensuremath{\mathrm{e}}}^{+Q x} R_{\alpha}{\ensuremath{\mathrm{e}}}^{-Q x}$, so that translation invariance is less easily recognized.
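As a concrete numerical illustration (our own sketch, not part of the formalism above), the gauge that eliminates $Q$ can be verified directly for a constant $Q$, where the path-ordered exponential reduces to an ordinary matrix exponential and a finite-difference derivative suffices to check $\tilde{Q}=g^{-1}Qg+g^{-1}g'=0$:

```python
import numpy as np
from scipy.linalg import expm

# Sanity check of the Q-eliminating gauge for constant Q: the path-ordered
# exponential becomes g(x) = exp[Q (L/2 - x)] g0 with g0 = exp(-Q L),
# which fixes g(-L/2) = identity.
rng = np.random.default_rng(0)
D, L = 3, 2.0
Q = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
R = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))

g0 = expm(-Q * L)
g = lambda x: expm(Q * (L / 2 - x)) @ g0

# check Qtilde = g^{-1} Q g + g^{-1} dg/dx = 0 at an interior point,
# with dg/dx approximated by a central finite difference
x, eps = 0.3, 1e-6
dg = (g(x + eps) - g(x - eps)) / (2 * eps)
Qtilde = np.linalg.inv(g(x)) @ (Q @ g(x) + dg)
Rtilde = np.linalg.inv(g(x)) @ R @ g(x)   # all freedom now sits in Rtilde(x)
```

For $D=1$ the matrices reduce to scalars and $\tilde{R}(x)$ plays the role of the mean-field wavefunction $\varphi(x)$, in line with the identification made above.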
Translation invariance and the thermodynamic limit {#s:ti}
==================================================
When using cMPS to approximate ground states of translation invariant Hamiltonians, we can restrict to the subclass of uniform cMPS $\ket{\Psi(Q,\{R_{\alpha}\})}$, which are obtained from taking $Q(x)=Q$ and $R_{\alpha}(x)=R_{\alpha}$ constant, $x$-independent $D\times D$ matrices in $\ket{\Psi[Q,\{R_{\alpha}\}]}$. This approach is valid either for a finite system with periodic boundary conditions ($B={\openone}_{D}$) or for a system in the thermodynamic limit ($\lvert{\ensuremath{\mathcal{R}}}\rvert=L\to \infty$ or thus ${\ensuremath{\mathcal{R}}}\to \mathbb{R}$), where the precise value of the boundary matrix $B$ should be irrelevant and should not appear in any normalized expectation value. We henceforth restrict to the latter case. The transfer operator ${\ensuremath{\mathbb{T}}}=Q\otimes 1_{D}+1_{D}\otimes\overline{Q}+\sum_{\alpha=1}^{q} R_{\alpha}\otimes\overline{R}_{\alpha}$ also becomes translation invariant and ${\ensuremath{\mathscr{P}\exp}}[\int_{y}^{z}{\ensuremath{\mathrm{d}}}x\, {\ensuremath{\mathbb{T}}}]=\exp[{\ensuremath{\mathbb{T}}}(z-y)]$. The normalization of the state $\ket{\Psi(Q,R)}$ is given by $\lim_{L\to\infty}\operatorname{tr}\big[(B\otimes\overline{B})\exp({\ensuremath{\mathbb{T}}} L)\big]$. If $\mu=\max_{\lambda\in\sigma({\ensuremath{\mathbb{T}}})}\{\Re(\lambda)\}$, where $\sigma({\ensuremath{\mathbb{T}}})$ denotes the spectrum of ${\ensuremath{\mathbb{T}}}$ and $\Re$ the real part, then $\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|\Psi(Q,\{R_{\alpha}\})}\sim \lim_{L\to\infty} \exp(\mu L)$. Normalizing this state by multiplying it with $\exp(-\mu L/2)$ results in $Q\leftarrow Q-\mu/2 {\openone}_{D}$ and ${\ensuremath{\mathbb{T}}}\leftarrow {\ensuremath{\mathbb{T}}}-\mu {\ensuremath{\mathbb{{\openone}}}}$, so that the new transfer operator ${\ensuremath{\mathbb{T}}}$ has at least one eigenvalue for which the real part is zero and no eigenvalue has a positive real part.
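The construction of ${\ensuremath{\mathbb{T}}}$ and the normalization shift can be sketched numerically as follows (a NumPy sketch under our own conventions, which are not fixed by the text: row-major vectorization, so that ${\ensuremath{\mathbb{T}}}$ acts on a virtual density matrix as $x\mapsto Qx+xQ^{\dagger}+RxR^{\dagger}$; the random matrices are mere placeholders):

```python
import numpy as np

# Build the transfer operator of a uniform cMPS (single species) as a
# D^2 x D^2 matrix and normalize the state by shifting Q <- Q - (mu/2) 1.
rng = np.random.default_rng(1)
D = 4
Q = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
R = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
I = np.eye(D)

def transfer(Q, R):
    # with row-major vectorization: transfer(Q, R) @ vec(x) = vec(Q x + x Q^dag + R x R^dag)
    return np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())

mu = np.linalg.eigvals(transfer(Q, R)).real.max()
Q = Q - 0.5 * mu * I                        # multiplies the state by exp(-mu L / 2)
T = transfer(Q, R)
mu_new = np.linalg.eigvals(T).real.max()    # now zero: the state is normalized
```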
Let us assume that the eigenvalue $\lambda$ with $\Re \lambda=0$ is unique. If ${\ensuremath{|r)}}$ is the corresponding right eigenvector, then we can write the eigenvalue equation as $\mathscr{T}(r)=\lambda r$ with $r$ the associated virtual density matrix. Taking the Hermitian conjugate shows that $\mathscr{T}(r^{\dagger})=\overline{\lambda} r^{\dagger}$, so that the uniqueness of the eigenvalue with $\Re \lambda=0$ implies that $\lambda=\overline{\lambda}=0$ and $r^{\dagger}={\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}\phi} r$, where we can choose the phase of the eigenvector so that $r$ is Hermitian. Similarly, the virtual density matrix $l$ associated to the left eigenvector ${\ensuremath{(l|}}$ can also be chosen Hermitian.
Having a unique eigenvalue zero and $\Re(\lambda)<0$ for all other eigenvalues $\lambda$ corresponds to the generic case, as can be better appreciated by referring to the well-known results for MPS[@1992CMaPh.144..443F; @2006quant.ph..8197P; @Haegeman:fk]. Indeed, a full categorisation of the eigenvalue structure of ${\ensuremath{\mathbb{T}}}$ can be obtained by identifying[^4] $${\ensuremath{\mathbb{T}}}=\lim_{a\to 0} \frac{1}{a} \ln {\ensuremath{\mathbb{E}}}$$ with ${\ensuremath{\mathbb{E}}}$ the corresponding transfer operator of the uniform MPS $\ket{\Psi(A)}$ with $A$ related to $Q$ and $R_{\alpha}$ as in Eq. . The set of MPS with a well-defined thermodynamic limit correspond to the injective or pure MPS, for which the transfer operator ${\ensuremath{\mathbb{E}}}$ has a single eigenvalue $1$ that maps to the eigenvalue zero of ${\ensuremath{\mathbb{T}}}$. The corresponding left and right eigenvectors ${\ensuremath{(l|}}$ and ${\ensuremath{|r)}}$ correspond to strictly positive Hermitian operators $l$ and $r$ (*i.e.* they have full rank). All other eigenvalues of ${\ensuremath{\mathbb{E}}}$ lie strictly within the unit circle and map to eigenvalues of ${\ensuremath{\mathbb{T}}}$ with strictly negative real part. If the left and right eigenvectors corresponding to eigenvalue $0$ are normalized such that ${\ensuremath{(l|r)}}=1$, then $\lim_{L\to\infty} \exp({\ensuremath{\mathbb{T}}} L)={\ensuremath{|r)}}{\ensuremath{(l|}}$ and we obtain $$\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|\Psi(Q,\{R_{\alpha}\})}={\ensuremath{(l|B\otimes \overline{B}|r)}}.$$ In expectation values of local operators, this overall factor always appears, but the rest of the expression will not depend on $B$. Hence, the $B$-dependence is cancelled by considering normalized expectation values, or by henceforth choosing $B$ such that $\braket{\Psi(Q,\{R_{\alpha}\})|\Psi(Q,\{R_{\alpha}\})}={\ensuremath{(l|B\otimes \overline{B}|r)}}=1$.
For uniform cMPS, the gauge invariance is restricted to global transformations $Q\leftarrow\tilde{Q}=g Q g^{-1}$ and $R_{\alpha}\leftarrow \tilde{R}_{\alpha}=g R_{\alpha} g^{-1}$ with $g\in{\ensuremath{\mathsf{GL}}}(\mathbb{C},D)$. This gauge transformation can be used to impose the left or right orthonormalization conditions. Left orthonormalization boils down to fixing the left eigenvector $l$ of eigenvalue $0$ to $l={\openone}_{D}$, which results in $Q=-{\ensuremath{\mathrm{i}}}K-1/2 \sum_{\alpha=1}^{q} R_{\alpha}^{\dagger}R_{\alpha}$ with $K$ a Hermitian matrix. The remaining unitary gauge freedom can be used to diagonalize $r$, bringing $Q$ and $R_{\alpha}$ in the left-canonical form. The right-canonical form is obtained analogously. In principle, an exact computation of the left and right eigenvectors $l$ and $r$ corresponding to the eigenvalue with largest real part $\lambda$ of the transfer operator ${\ensuremath{\mathbb{T}}}$ is a computationally costly operation \[$\operatorname{\mathscr{O}}(D^{6})$\]. By using an explicit parameterization of the left-canonical form in terms of $R_{\alpha}$ and the Hermitian matrix $K$, we know exactly that $\lambda=0$ and $l={\openone}_{D}$. It is then possible to obtain $r$ with an iterative solver with computational efficiency $\operatorname{\mathscr{O}}(D^{3})$.
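A minimal sketch of this parameterization and of the right fixed point (our own illustration; a dense eigendecomposition is used for transparency, whereas an actual $\operatorname{\mathscr{O}}(D^{3})$ implementation would instead hand the map $x\mapsto Qx+xQ^{\dagger}+RxR^{\dagger}$ to an iterative eigensolver):

```python
import numpy as np

# Left-canonical parameterization Q = -iK - (1/2) R^dag R, for which
# l = identity is an exact left zero-eigenvector of the transfer operator.
rng = np.random.default_rng(2)
D = 4
R = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
K = 0.5 * (H + H.conj().T)                         # Hermitian K
Q = -1j * K - 0.5 * (R.conj().T @ R)

left_residual = Q + Q.conj().T + R.conj().T @ R    # vanishes by construction

# right fixed point r (row-major vectorization conventions, our assumption)
I = np.eye(D)
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())
w, v = np.linalg.eig(T)
r = v[:, w.real.argmax()].reshape(D, D)
r = r / np.trace(r)                   # fix the phase and impose (l|r) = tr(r) = 1
r = 0.5 * (r + r.conj().T)            # remove roundoff: r is Hermitian
right_residual = Q @ r + r @ Q.conj().T + R @ r @ R.conj().T
```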
By imposing the physical requirements discussed at the end of Section \[s:regularity\], we can define the parity superoperator ${\ensuremath{\mathbb{P}}}$ as in Section \[s:expectval\]. Since ${\ensuremath{\mathbb{P}}}{\ensuremath{\mathbb{T}}}{\ensuremath{\mathbb{P}}}={\ensuremath{\mathbb{T}}}$, we can expect that the left and right eigenvectors ${\ensuremath{|l)}}$ and ${\ensuremath{|r)}}$ corresponding to the zero eigenvalue satisfy ${\ensuremath{(l|}}{\ensuremath{\mathbb{P}}}={\ensuremath{(l|}}$ and ${\ensuremath{\mathbb{P}}}{\ensuremath{|r)}}={\ensuremath{|r)}}$, or thus $P^{\dagger} l P = l$ and $P r P^{\dagger}=r$. Note that we can always choose the gauge such that $P$ is Hermitian. In addition, it is easy to prove that ${\ensuremath{\mathbb{T}}}_{\alpha}$ also has an eigenvalue zero even if $\alpha$ refers to a fermionic particle species so that ${\ensuremath{\mathbb{T}}}_{\alpha}\neq {\ensuremath{\mathbb{T}}}$. The corresponding left and right eigenvectors are in that case given by $l_{\alpha}=l P=P^{\dagger} l$ and $r_{\alpha} =P r=r P^{\dagger}$, whereas they equal $l$ and $r$ if $\alpha$ is a bosonic particle.
We can now evaluate correlation functions as $$\begin{gathered}
C_{\alpha,\beta}(x,y)=\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|{\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)|\Psi(Q,\{R_{\alpha}\})}\\
=\theta(x-y){\ensuremath{(l|[R_{\beta}\otimes{\openone}_{D}]{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathbb{T}}}_{\alpha}(x-y)}[{\openone}_{D}\otimes \overline{R_{\alpha}}]|r)}}\\
+\theta(y-x){\ensuremath{(l|[{\openone}_{D}\otimes \overline{R_{\alpha}}]{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathbb{T}}}_{\alpha}(y-x)}[R_{\beta}\otimes{\openone}_{D}]|r)}},\label{eq:corrfunti}\end{gathered}$$ where we have used the physical requirement ${\ensuremath{\mathbb{T}}}_{\alpha,\beta}={\ensuremath{\mathbb{T}}}$ and ${\ensuremath{\mathbb{T}}}_{\alpha}={\ensuremath{\mathbb{T}}}_{\beta}$ for non-vanishing correlation functions (see Section \[s:expectval\]). The correlation function $C_{\alpha,\beta}(x,y)$ is translation invariant and we define $C_{\alpha,\beta}(x,y)=C_{\alpha,\beta}(y-x)$. When $\alpha$ is bosonic and $\beta$ fermionic, we automatically have $C_{\alpha,\beta}(x)=0$ if the parity considerations from Section \[s:regularity\] are correctly built in. In the long-range limit, we obtain $\lim_{\lvert x\rvert \to \infty}C_{\alpha,\beta}(x)={\ensuremath{(l|R_{\beta}\otimes{\openone}_{D}|r_{\alpha})}}{\ensuremath{(l_{\alpha}|{\openone}_{D}\otimes \overline{R_{\alpha}}|r)}}$. When both $\alpha$ and $\beta$ refer to fermionic particle species, this limiting value is automatically zero (also under the assumption that parity is correctly built into the matrices). When both indices refer to bosonic particles, a non-zero value is possible in the case of Bose-Einstein condensation. We should then define a connected correlation function $\tilde{C}_{\alpha,\beta}(x)$, which decays exponentially as $\tilde{C}_{\alpha,\beta}(x)=\operatorname{\mathscr{O}}(\exp[-\lvert x\rvert/\xi_{\text{c}}])$ for $\lvert x \rvert\to \infty$ with $\xi_{\text{c}}=\lvert\Re \lambda_{1}\rvert^{-1}$, where $\lambda_{1}$ is the eigenvalue of ${\ensuremath{\mathbb{T}}}_{\alpha}$ with second largest real part (*i.e.* skipping eigenvalue $\lambda_{0}=0$). Clearly, $C_{\alpha,\beta}(x)$ is continuous at $x=0$. We can then compute the first derivative, which is only continuous at $x=0$ if we impose the regularity conditions in Eq. . This is another way to derive these conditions. If Eq. 
is satisfied, then the second derivative of $C_{\alpha,\beta}(x)$ at $x=0$ (which gives the expectation value of the kinetic energy density ${\ensuremath{\hat{t}}}$ up to a factor $-1/2m$) is finite and automatically continuous. The third derivative is then finite but will not be continuous in general without imposing further conditions, as discussed in Appendix \[a:higherorderregularity\].
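The first branch of Eq.  and its long-range limit can be evaluated directly (our own NumPy sketch for a single bosonic species, so that ${\ensuremath{\mathbb{T}}}_{\alpha}={\ensuremath{\mathbb{T}}}$; row-major vectorization conventions and random matrices are our assumptions):

```python
import numpy as np
from scipy.linalg import expm

# Two-point function C(x) = (l| [R (x) 1] e^{T x} [1 (x) conj(R)] |r), x > 0,
# for a left-canonical uniform cMPS (l = identity), and its long-range limit
# obtained from e^{T x} -> |r)(l|.
rng = np.random.default_rng(3)
D = 4
R = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
Q = -0.5j * (H + H.conj().T) - 0.5 * (R.conj().T @ R)

I = np.eye(D)
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())
w, v = np.linalg.eig(T)
r_vec = v[:, w.real.argmax()]
r_vec = r_vec / np.trace(r_vec.reshape(D, D))   # (l|r) = 1 with l = identity
l_vec = I.reshape(-1)

A = np.kron(R, I)            # acts as x -> R x
B = np.kron(I, R.conj())     # acts as x -> x R^dag

def C(x):
    return l_vec @ (A @ expm(T * x) @ B @ r_vec)

C_inf = (l_vec @ A @ r_vec) * (l_vec @ B @ r_vec)   # condensate contribution
gap = -np.sort(w.real)[-2]                          # inverse correlation length
```

Note that `C(0.0)` reduces to the particle density $\operatorname{tr}(RrR^{\dagger})$, and that `gap` is the numerical value of $\xi_{\text{c}}^{-1}$.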
We define the Fourier transformed correlation function $$n_{\alpha,\beta}(p,p')=\int_{-\infty}^{+\infty} \frac{{\ensuremath{\mathrm{d}}}x}{2\pi} \int_{-\infty}^{+\infty}\frac{{\ensuremath{\mathrm{d}}}y}{2\pi}\, C_{\alpha,\beta}(x,y){\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}p x - {\ensuremath{\mathrm{i}}}p' y}= \delta (p'-p) n_{\alpha,\beta}(p)$$ with $$n_{\alpha,\beta}(p)=\int_{-\infty}^{+\infty} \frac{{\ensuremath{\mathrm{d}}}x}{2\pi} C_{\alpha,\beta}(x) {\ensuremath{\mathrm{e}}}^{-{\ensuremath{\mathrm{i}}}p x}.$$ In order to evaluate $n_{\alpha,\beta}(p)$, it is important to separate $\exp({\ensuremath{\mathbb{T}}}_{\alpha}x)$ into two parts. The first part is given by ${\ensuremath{\mathbb{S}}}_{\alpha}={\ensuremath{|r_{\alpha})}}{\ensuremath{(l_{\alpha}|}}$, the projector onto the eigenspace corresponding to eigenvalue $0$ of ${\ensuremath{\mathbb{T}}}_{\alpha}$, and yields a singular contribution to the integral. If we define the complementary projector ${\ensuremath{\mathbb{Q}}}_{\alpha}={\openone}-{\ensuremath{\mathbb{S}}}_{\alpha}$, then the remaining part $$\exp({\ensuremath{\mathbb{T}}}_{\alpha}x)-{\ensuremath{\mathbb{S}}}_{\alpha}={\ensuremath{\mathbb{Q}}}_{\alpha}\exp({\ensuremath{\mathbb{T}}}_{\alpha}x) {\ensuremath{\mathbb{Q}}}_{\alpha}={\ensuremath{\mathbb{Q}}}_{\alpha}\exp({\ensuremath{\mathbb{Q}}}_{\alpha}{\ensuremath{\mathbb{T}}}_{\alpha}{\ensuremath{\mathbb{Q}}}_{\alpha}x) {\ensuremath{\mathbb{Q}}}_{\alpha}\label{eq:singulardecompositionT}$$ is well behaved in the Fourier transform, since all of its eigenvalues decay exponentially in $x$. 
If we then introduce the notation ${\ensuremath{\mathbb{Q}}}_{\alpha}(-{\ensuremath{\mathbb{T}}}_{\alpha}\pm{\ensuremath{\mathrm{i}}}p)^{-1}{\ensuremath{\mathbb{Q}}}_{\alpha}=(-{\ensuremath{\mathbb{T}}}_{\alpha}\pm{\ensuremath{\mathrm{i}}}p)^{\mathsf{P}}$, which is well defined even at $p=0$ because the zero eigensector of ${\ensuremath{\mathbb{T}}}_{\alpha}$ is projected out, we can rewrite $n_{\alpha,\beta}(p)$ as $$\begin{gathered}
n_{\alpha,\beta}(p)=2\pi \delta(p) {\ensuremath{(l|{\openone}_{D}\otimes \overline{R_{\alpha}}|r_{\alpha})}}{\ensuremath{(l_{\alpha}|R_{\beta}\otimes{\openone}_{D}|r)}}\\
+{\ensuremath{(l|[{\openone}_{D}\otimes \overline{R_{\alpha}}] (-{\ensuremath{\mathbb{T}}}_{\alpha}+{\ensuremath{\mathrm{i}}}p)^{\mathsf{P}} [R_{\beta}\otimes{\openone}_{D}]|r)}}\\
+{\ensuremath{(l|[R_{\beta}\otimes{\openone}_{D}] (-{\ensuremath{\mathbb{T}}}_{\alpha}-{\ensuremath{\mathrm{i}}}p)^{\mathsf{P}} [{\openone}_{D}\otimes \overline{R_{\alpha}}]|r)}}. \label{eq:cmpsmomentumoccupation}\end{gathered}$$ The first term is only present for bosonic particles that have condensed. It would also disappear in the Fourier transformation of the connected correlation function $\tilde{C}(x,y)$. If we define Fourier transformed field operators ${\ensuremath{\hat{\varPsi}}}(p)$ —no confusion between the state $\ket{\Psi}$ and the momentum-space operator ${\ensuremath{\hat{\varPsi}}}$ should arise— as $${\ensuremath{\hat{\varPsi}}}(p)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}{\ensuremath{\mathrm{d}}}x\,{\ensuremath{\hat{\psi}}}(x){\ensuremath{\mathrm{e}}}^{-{\ensuremath{\mathrm{i}}}p x},$$ then it is easy to see why we have used the suggestive notation $n_{\alpha,\beta}$ for the Fourier transform of $C_{\alpha,\beta}$. We obtain $$\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|{\ensuremath{\hat{\varPsi}^\dagger}}_{\alpha}(p){\ensuremath{\hat{\varPsi}}}_{\beta}(p')|\Psi(Q,\{R_{\alpha}\})}=\delta(p-p')n_{\alpha,\beta}(p).\label{eq:defmomoccnum}$$ Hence, $n_{\alpha,\beta}(p)$ describes the occupation number of momentum levels. The large-$p$ behavior of $n_{\alpha,\beta}(p)$ follows from the regularity of $C_{\alpha,\beta}(x)$. At first sight, Eq. might seem to decay as $\operatorname{\mathscr{O}}(p^{-1})$. However, if the regularity conditions in Eq. are satisfied, then the momentum occupation number $n_{\alpha,\beta}(p)$ has to decay as $\operatorname{\mathscr{O}}(p^{-4})$ for large values of $p$. We can show this explicitly. 
For $\lvert p\rvert$ larger than the largest absolute value among the eigenvalues of ${\ensuremath{\mathbb{T}}}_{\alpha}$, we can expand $(-{\ensuremath{\mathbb{T}}}_{\alpha}\pm {\ensuremath{\mathrm{i}}}p)^{\mathsf{P}}$ as $$(-{\ensuremath{\mathbb{T}}}_{\alpha}\pm {\ensuremath{\mathrm{i}}}p)^{\mathsf{P}}=\mp {\ensuremath{\mathrm{i}}}\frac{{\ensuremath{\mathbb{Q}}}_{\alpha}}{p}\sum_{n=0}^{+\infty} \left(\mp {\ensuremath{\mathrm{i}}}\frac{{\ensuremath{\mathbb{T}}}_{\alpha}}{p}\right)^n=\mp {\ensuremath{\mathrm{i}}}\frac{{\ensuremath{\mathbb{Q}}}_{\alpha}}{p} -\frac{{\ensuremath{\mathbb{T}}}_{\alpha}}{p^2}\pm {\ensuremath{\mathrm{i}}}\frac{{\ensuremath{\mathbb{T}}}_{\alpha}^2}{p^3}+\frac{{\ensuremath{\mathbb{T}}}_{\alpha}^3}{p^4}+\operatorname{\mathscr{O}}(p^{-5}).$$ We now have to show that, when this expansion is plugged into Eq. , the first three terms vanish. The vanishing of the first term is trivial if particle type $\alpha$ is bosonic, so that ${\ensuremath{\mathbb{Q}}}_{\alpha}={\ensuremath{\mathbb{{\openone}}}}-{\ensuremath{|r)}}{\ensuremath{(l|}}$. For the fermionic case, one has to employ parity conservation. Using the regularity conditions of Eq. and $\eta_{\alpha,\gamma}=\eta_{\beta,\gamma}$ for non-vanishing correlation functions ($\alpha$ and $\beta$ are either both bosonic or both fermionic), we can show that $$\begin{aligned}
{\ensuremath{\mathbb{T}}}_{\alpha} [R_{\beta}\otimes {\openone}_{D}]{\ensuremath{|r)}}=[R_{\beta}\otimes{\openone}_{D}]{\ensuremath{\mathbb{T}}}{\ensuremath{|r)}}+[Q,R_{\beta}]\otimes{\openone}_{D}{\ensuremath{|r)}}=[Q,R_{\beta}]\otimes{\openone}_{D}{\ensuremath{|r)}}\end{aligned}$$ and similarly $$\begin{aligned}
{\ensuremath{\mathbb{T}}}_{\alpha} [{\openone}_{D}\otimes \overline{R_{\alpha}}]{\ensuremath{|r)}}&={\openone}_{D}\otimes[\overline{Q},\overline{R_{\alpha}}]{\ensuremath{|r)}},\\
{\ensuremath{(l|}}[R_{\beta}\otimes{\openone}_{D}]{\ensuremath{\mathbb{T}}}_{\alpha} &={\ensuremath{(l|}}[R_{\beta},Q]\otimes{\openone}_{D},\\
{\ensuremath{(l|}}[{\openone}_{D}\otimes \overline{R_{\alpha}}]{\ensuremath{\mathbb{T}}}_{\alpha} &={\ensuremath{(l|}}{\openone}_{D}\otimes [\overline{R_{\alpha}},\overline{Q}].\end{aligned}$$ These results can be used to show that both the second and third term in the expansion vanish when they are plugged into Eq. . The first non-vanishing term is thus of order $p^{-4}$. Because $n_{\alpha,\beta}(p)$ is a dimensionless quantity, this asymptotic behavior allows us to introduce a momentum cutoff $\Lambda$ as $$\Lambda^4=\lim_{p\to\infty} \lvert p^4 n_{\alpha,\beta}(p)\rvert=\lvert {\ensuremath{(l|[{\openone}_{D}\otimes \overline{R_{\alpha}}] {\ensuremath{\mathbb{T}}}_{\alpha}^3 [R_{\beta}\otimes{\openone}_{D}]|r)}}+{\ensuremath{(l|[R_{\beta}\otimes{\openone}_{D}] {\ensuremath{\mathbb{T}}}_{\alpha}^3 [{\openone}_{D}\otimes \overline{R_{\alpha}}]|r)}}\rvert,$$ where the absolute value is not required if we use $\beta=\alpha$. The eigenvalue spectrum of ${\ensuremath{\mathbb{T}}}_{\alpha}$ thus provides a definition of an ultraviolet cutoff scale $a=\Lambda^{-1}$ that is based on the large-momentum behavior of the momentum occupation number $n_{\alpha,\beta}(p)$, rather than on the total particle density $$\rho_{\alpha,\beta}=\int_{-\infty}^{+\infty}\frac{{\ensuremath{\mathrm{d}}}p}{2\pi}\, n_{\alpha,\beta}(p).$$
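The regularized inverse $(-{\ensuremath{\mathbb{T}}}_{\alpha}\pm{\ensuremath{\mathrm{i}}}p)^{\mathsf{P}}$ and the $p^{-4}$ tail can also be checked numerically. The sketch below (our own illustration, for a single bosonic species with $\beta=\alpha$ under row-major vectorization conventions) evaluates the regular part of $n(p)$ and compares $p^{4}n(p)$ with the coefficient that defines $\Lambda^{4}$:

```python
import numpy as np

# Momentum occupation from the regularized inverse (-T +- ip)^P.
rng = np.random.default_rng(4)
D = 4
R = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
Q = -0.5j * (H + H.conj().T) - 0.5 * (R.conj().T @ R)   # left-canonical, l = 1

I, I2 = np.eye(D), np.eye(D * D)
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())
w, v = np.linalg.eig(T)
r_vec = v[:, w.real.argmax()]
r_vec = r_vec / np.trace(r_vec.reshape(D, D))
l_vec = I.reshape(-1)

S = np.outer(r_vec, l_vec)           # |r)(l|, spectral projector on the zero mode
Qproj = I2 - S
A = np.kron(R, I)                    # R (x) 1
B = np.kron(I, R.conj())             # 1 (x) conj(R)

def pinvT(p, sign):
    # adding S makes the matrix regular on the zero mode, which is then projected out
    return Qproj @ np.linalg.inv(-T + sign * 1j * p * I2 + S) @ Qproj

def n(p):
    # regular part of the momentum occupation number
    return (l_vec @ B @ pinvT(p, +1) @ A @ r_vec
            + l_vec @ A @ pinvT(p, -1) @ B @ r_vec)

# p^{-4} tail coefficient, i.e. Lambda^4 for beta = alpha
T3 = T @ T @ T
Lam4 = l_vec @ B @ T3 @ A @ r_vec + l_vec @ A @ T3 @ B @ r_vec
```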
For two pure uniform cMPS $\ket{\Psi(Q,\{R_{\alpha}\})}$ and $\ket{\Psi(Q',\{R_{\alpha}'\})}$ we can define a superoperator ${\ensuremath{\mathbb{T}}}_{\text{mixed}}=Q'\otimes{\openone}_{D}+{\openone}_{D}\otimes \overline{Q}+\sum_{\alpha=1}^{N}R_{\alpha}'\otimes \overline{R_{\alpha}}$ so that the overlap $\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|\Psi(Q',\{R_{\alpha}'\})}$ behaves as $\exp(\lambda L)$ for $L\to+\infty$, with $\lambda$ the eigenvalue with largest real part of ${\ensuremath{\mathbb{T}}}_{\text{mixed}}$. If the two uniform cMPS are inequivalent, $\Re(\lambda) < 0$ and there is an infrared orthogonality catastrophe. If $\Re(\lambda)=0$, then we can define a phase $\phi=\Im(\lambda)$ and a gauge transformation $g\in\mathsf{GL}(D;\mathbb{C})$ such that $Q'=g Q g^{-1} +{\ensuremath{\mathrm{i}}}\phi{\openone}_{D}$ and $R'_{\alpha}=g R_{\alpha} g^{-1}$. With $f$ being the right eigenvector corresponding to eigenvalue $\lambda={\ensuremath{\mathrm{i}}}\phi$ of ${\ensuremath{\mathbb{T}}}_{\text{mixed}}$, $g$ can be obtained as $g=f r^{-1}$.
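This recovery of the gauge transformation can be demonstrated explicitly (our own sketch, single species, same vectorization conventions as above): for two gauge-equivalent states, the rightmost eigenvalue of the mixed transfer operator sits at ${\ensuremath{\mathrm{i}}}\phi$ on the imaginary axis and $g=fr^{-1}$ holds up to an overall scalar:

```python
import numpy as np

rng = np.random.default_rng(5)
D = 3
R = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
Q = -0.5j * (H + H.conj().T) - 0.5 * (R.conj().T @ R)   # normalized: max Re = 0

I = np.eye(D)
phi = 0.7
g = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))   # generic invertible g
Qp = g @ Q @ np.linalg.inv(g) + 1j * phi * I
Rp = g @ R @ np.linalg.inv(g)

# mixed transfer operator of the two gauge-equivalent states
Tmix = np.kron(Qp, I) + np.kron(I, Q.conj()) + np.kron(Rp, R.conj())
wm, vm = np.linalg.eig(Tmix)
k = wm.real.argmax()
lam = wm[k]                          # equals i*phi for equivalent states
f = vm[:, k].reshape(D, D)

# right fixed point r of the unmixed transfer operator
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())
w0, v0 = np.linalg.eig(T)
r = v0[:, w0.real.argmax()].reshape(D, D)

g_rec = f @ np.linalg.inv(r)         # equals g up to an overall scalar
scale = g_rec[0, 0] / g[0, 0]
```

The scalar ambiguity arises because eigenvectors are only defined up to normalization; it corresponds to the trivial gauge transformations $\alpha{\openone}_{D}$ discussed earlier.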
Let us also illustrate how to compute the expectation value of a translation invariant Hamiltonian. The generic Hamiltonian in Eq. becomes translation invariant for $v(x)=v$ and $w(x,y)=w(y-x)$ with $w(x)=w(-x)$. Since the uniform cMPS is extensive, expectation values are proportional to the volume and it makes more sense to compute the expectation values of the kinetic, potential and interaction energy densities ${\ensuremath{\hat{t}}}$, ${\ensuremath{\hat{v}}}$ and ${\ensuremath{\hat{w}}}$. We obtain $$\begin{aligned}
\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|{\ensuremath{\hat{t}}}|\Psi(Q,\{R_{\alpha}\})}&=\frac{1}{2m}{\ensuremath{(l|[Q,R]\otimes [\overline{Q},\overline{R}]|r)}},\\
\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|{\ensuremath{\hat{v}}}|\Psi(Q,\{R_{\alpha}\})}&=v{\ensuremath{(l|R\otimes \overline{R}|r)}},\end{aligned}$$ $$\begin{aligned}
\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|{\ensuremath{\hat{w}}}|\Psi(Q,\{R_{\alpha}\})}&=\int_{0}^{+\infty}{\ensuremath{\mathrm{d}}}z\,w(z){\ensuremath{(l|R\otimes \overline{R} \mathrm{e}^{{\ensuremath{\mathbb{T}}} z} R\otimes\overline{R}|r)}}.\end{aligned}$$ If $w(z)$ has a Laplace transform $\mathscr{L}[w](\sigma)=\int_{0}^{+\infty}{\ensuremath{\mathrm{d}}}z w(z) \exp(-\sigma z)$ that is defined for $\Re \sigma \geq 0$, we obtain $$\begin{aligned}
\braket{\Psi|{\ensuremath{\hat{w}}}|\Psi}&={\ensuremath{(l|R\otimes \overline{R}\ \mathscr{L}[w](-{\ensuremath{\mathbb{T}}}) R\otimes\overline{R}|r)}}.\end{aligned}$$ Note that translation invariance has allowed a field with a continuum of degrees of freedom to be parameterized by a finite number of parameters. Having $l$ and $r$, the computational cost is $\operatorname{\mathscr{O}}(D^{6})$ when long-range interactions are present, since we then have to compute an arbitrary function $\mathscr{L}[w]$ of the transfer operator ${\ensuremath{\mathbb{T}}}$, unless $w$ is such that there is an exact or approximate (iterative) strategy for evaluating the action of $\mathscr{L}[w](-{\ensuremath{\mathbb{T}}})$ on a vector efficiently. One particular example is the case of strictly local interactions $w(x-y)\sim \delta(x-y)$. The interaction energy (density) can then be computed with computational complexity $\operatorname{\mathscr{O}}(D^{3})$, just like the potential and the kinetic energy density.
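For a single species in the left-canonical gauge ($l={\openone}_{D}$), the expectation values above reduce to $\operatorname{\mathscr{O}}(D^{3})$ traces via $(l|A\otimes\overline{B}|r)=\operatorname{tr}(lArB^{\dagger})$ (our own sketch; the contact-interaction expression $c\,(l|R^{2}\otimes\overline{R^{2}}|r)$ for $w(x-y)=c\,\delta(x-y)$ is an assumption of this illustration, not derived in the text above):

```python
import numpy as np

rng = np.random.default_rng(6)
D, m, vpot, c = 4, 0.5, -1.0, 1.0     # mass, potential strength, contact coupling
R = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
Q = -0.5j * (H + H.conj().T) - 0.5 * (R.conj().T @ R)   # left-canonical, l = 1

I = np.eye(D)
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())
w, v = np.linalg.eig(T)
r = v[:, w.real.argmax()].reshape(D, D)
r = r / np.trace(r)

QR = Q @ R - R @ Q                                   # commutator [Q, R]
e_kin = np.trace(QR @ r @ QR.conj().T).real / (2 * m)
rho = np.trace(R @ r @ R.conj().T).real              # particle density
e_pot = vpot * rho
RR = R @ R
e_int = c * np.trace(RR @ r @ RR.conj().T).real      # contact interaction

# cross-check one O(D^3) trace against the D^2 x D^2 superoperator expression
check = I.reshape(-1) @ np.kron(R, R.conj()) @ r.reshape(-1)
```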
Tangent vectors of continuous matrix product states {#s:tangent}
===================================================
Generic case
------------
For MPS, a new algorithm for time evolution and variational optimization (via imaginary time evolution) was recently constructed using the time-dependent variational principle[@2011arXiv1103.0936H]. An essential ingredient of this algorithm is the study of (infinitesimally) small variations of MPS, *i.e.* the set of MPS tangent vectors. Indeed, it was rigorously proven that the set of MPS can be given the structure of a variational manifold with a well-defined tangent space[@Haegeman:fk] by eliminating some singular points or regions. While we do expect the same theorems to hold for cMPS, the infinite dimensionality of the parameter space and Hilbert space might require a different proof strategy, especially in the absence of translation invariance. As noted several times before, this would be beyond the scope of this paper.
Given the practical use of tangent vectors, we nevertheless proceed, albeit in a more intuitive manner. Let us assume that we do have an open subset of cMPS with fixed bond dimension $D$ that constitute a (complex) manifold ${\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}\subset {\ensuremath{{\ensuremath{\mathbb{H}}}}}$. At any base point $\ket{\Psi[Q,\{R_{\alpha}\}]}\in {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}$, we can construct a (holomorphic) tangent space $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}} \subset {\ensuremath{{\ensuremath{\mathbb{H}}}}}$. If the collective index $i=1,\ldots,D^2$ is used to combine the two virtual (matrix) indices (not to be confused with the particle species labels) and we use the summation convention with respect to this index, a general tangent vector $\ket{\Phi[V,\{W_{\alpha}\};Q,\{R_{\alpha}\}]}$ in $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}$ can be defined as $$\begin{split}
&\ket{\Phi[V,\{W_{\alpha}\};Q,\{R_{\alpha}\}]}=\ket{\Phi^{[Q,\{R_{\alpha}\}]}[V,\{W_{\alpha}\}]}\\
&\quad=\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\left(V^{i}(x) \frac{\delta\ }{\delta Q^{i}(x)}+\sum_{\beta=1}^{q}W_{\beta}^{i}(x) \frac{\delta\ }{\delta R_{\beta}^{i}(x)}\right) \ket{\Psi[Q,\{R_{\alpha}\}]}\\
&\quad=\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\operatorname{tr}\left[B{\ensuremath{\hat{U}}}(-L/2,x) \left(V(x)\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\beta=1}^{q}W_{\beta}(x)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(x)\right){\ensuremath{\hat{U}}}(x,L/2)\right]\ket{\Omega}.
\end{split}\label{eq:deftangentgeneric}$$
Because of the gauge invariance discussed in Section \[s:gauge\], not all variations in $Q$ and $R_{\alpha}$ result in changes of the physical state. Consequently, not all linearly independent choices of the matrix functions $V$ and $W_{\alpha}$ result in linearly independent tangent vectors $\ket{\Phi[V,\{W_{\alpha}\};Q,\{R_{\alpha}\}]}$. Let $Q(\eta)$ and $R_{\alpha}(\eta)$ ($\forall \alpha=1,\ldots,q$) be a one-parameter family of matrix functions, so that $Q(\eta):{\ensuremath{\mathcal{R}}}\mapsto\mathbb{C}^{D\times D}:x\mapsto Q(x;\eta)$ and similarly for $R_{\alpha}(\eta)$. If we define $Q(0)=Q:x\mapsto Q(x)$, $R_{\alpha}(0)=R_{\alpha}:x\mapsto R_{\alpha}(x)$ together with ${\ensuremath{\mathrm{d}}}Q/{\ensuremath{\mathrm{d}}}\eta(0)=V:x\mapsto V(x)$ and ${\ensuremath{\mathrm{d}}}R_{\alpha}/{\ensuremath{\mathrm{d}}}\eta(0)=W_{\alpha}:x\mapsto W_{\alpha}(x)$, then we can write $$\left.\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}\eta} \ket{\Psi[Q(\eta),{R_{\alpha}(\eta)}]}\right|_{\eta=0}=\ket{\Phi[V,\{W_{\alpha}\};Q,\{R_{\alpha}\}]}.$$ If we now choose a one-parameter family of gauge equivalent states, so that $Q(x;\eta)=g(x; \eta)^{-1}Q(x)g(x;\eta) +g(x,\eta)^{-1} \frac{\partial g(x; \eta)}{\partial x}$ and $R(x;\eta)=g(x;\eta)^{-1} R(x) g(x;\eta)$, where the one-parameter family of gauge transforms is given by $g(x;\eta)=\exp(\eta h(x))$ and $h(x)\in {\ensuremath{\mathfrak{gl}}}(\mathbb{C},D)\equiv\mathbb{C}^{D\times D}$, $\forall x\in{\ensuremath{\mathcal{R}}}$, then we can use the gauge invariance of the cMPS representation to obtain $\ket{\Psi[Q(x;\eta),R(x;\eta)]}=\ket{\Psi[Q(x),R(x)]}$ and thus $$\begin{aligned}
\ket{\Phi[\mathscr{M}_{\Phi}^{[Q]}[h],\{\mathscr{N}_{\alpha,\Phi}^{[R_{\alpha}]}[h]\};Q,\{R_{\alpha}\}]}=0,\end{aligned}$$ where the maps $\mathscr{M}_{\Phi}^{[Q]}$ and $\mathscr{N}_{\alpha,\Phi}^{[R_{\alpha}]}$ ($\forall \alpha=1,\ldots,N$) are given by $$\begin{aligned}
\mathscr{M}_{\Phi}^{[Q]}[h](x)&=[Q(x),h(x)]+\frac{{\ensuremath{\mathrm{d}}}h}{{\ensuremath{\mathrm{d}}}x}(x),&\mathscr{N}^{[R_{\alpha}]}_{\alpha,\Phi}[h](x)&=[R_{\alpha}(x),h(x)].\end{aligned}$$ The maps $\mathscr{M}_{\Phi}^{[Q]}$ and $\mathscr{N}_{\alpha,\Phi}^{[R_{\alpha}]}$ thus establish a linear homomorphism from functions $h:\mathcal{R}\to {\ensuremath{\mathfrak{gl}}}(\mathbb{C},D)\equiv\mathbb{C}^{D\times D}$ to the kernel of the representation $(V,\{W_{\alpha}\})\mapsto\ket{\Phi[V,\{W_{\alpha}\};Q,\{R_{\alpha}\}]}$ of the tangent space $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}$ at the base point $\ket{\Psi[Q,\{R_{\alpha}\}]}\in{\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}$. Put differently, the representation of cMPS tangent vectors has a gauge invariance under the additive transformation law $V\leftarrow V+\mathscr{M}_{\Phi}^{[Q]}[h]$ and $W_{\alpha}\leftarrow W_{\alpha}+\mathscr{N}_{\alpha,\Phi}^{[R_{\alpha}]}[h]$. In all of the above, we have considered $B$ fixed. The gauge transformation $g(x)$ then has to satisfy the boundary condition $g(+L/2) B g(-L/2)^{-1}=B$, which also imposes a boundary condition on the set of allowed functions $h(x)$, namely $$h(+L/2) B - B h(-L/2) = 0.$$ In particular, for periodic boundary conditions with $B={\openone}_{D}$, we obtain that the generator $h:{\ensuremath{\mathcal{R}}}\to \mathfrak{gl}(D,\mathbb{C})$ should satisfy periodic boundary conditions $h(+L/2)=h(-L/2)$.
We now restrict to the case of open boundary conditions and discard the explicit reference to the base point $\ket{\Psi[Q,\{R_{\alpha}\}]}$ in the notation of tangent vectors. To take full advantage of the gauge freedom, we noted in Section \[s:gauge\] that it is better to include one of the boundary vectors in the set of variational parameters. We thus generalize our definition of tangent vectors by also including variations with respect to *e.g.* the right boundary vector $\bm{v}_{\text{R}}$. We write $$\begin{split}
&\ket{\Phi[V,\{W_{\alpha}\},\bm{w}_{\mathrm{R}}]}\\
&\qquad=\bm{w}_{\mathrm{R}}\cdot \bm{\nabla}_{\bm{v}_{\mathrm{R}}}\ket{\Psi[Q,\{R_{\alpha}\}]}\\
&\qquad\qquad+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\left(V^{i}(x) \frac{\delta\ }{\delta Q^{i}(x)}+\sum_{\beta=1}^{N}W_{\beta}^{i}(x) \frac{\delta\ }{\delta R_{\beta}^{i}(x)}\right) \ket{\Psi[Q,\{R_{\alpha}\}]}\\
&\qquad=\bm{v}_{\mathrm{L}}^\dagger {\ensuremath{\hat{U}}}(-L/2,+L/2) \bm{w}_{\mathrm{R}}\ket{\Omega}\\
&\qquad\qquad+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,\bm{v}_{\mathrm{L}}^\dagger {\ensuremath{\hat{U}}}(-L/2,x) \left(V(x)\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\beta=1}^{N}W_{\beta}(x)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\beta}(x)\right){\ensuremath{\hat{U}}}(x,L/2)\bm{v}_{\mathrm{R}}\ket{\Omega}.
\end{split}\label{eq:deftangentgeneric2}$$ Let us revisit the gauge freedom for the new tangent vectors of Eq. . The state $\ket{\Phi[V,\{W_{\alpha}\},\bm{w}_{\mathrm{R}}]}$ is invariant under the additive gauge transformation $V\leftarrow V+\mathscr{M}_{\Phi}[h]$, $W_{\alpha}\leftarrow W_{\alpha}+\mathscr{N}_{\alpha,\Phi}[h]$ and $\bm{w}_{\mathrm{R}}\leftarrow \bm{w}_{\mathrm{R}} + \bm{m}_{\Phi}[h]$ with $$\bm{m}_{\Phi}[h]=-h(+L/2)\bm{v}_{\mathrm{R}}.$$ Since $\bm{v}_{\mathrm{L}}$ is still fixed, the gauge transformation has to satisfy the boundary condition $g(-L/2)={\openone}_{D}$, so that its generator $h(x)$ satisfies $h(-L/2)=0$.
The overlap between two tangent vectors is given by $$\begin{split}
&\braket{\Phi[\overline{V},\{\overline{W}_{\alpha}\},\overline{\bm{w}_{\mathrm{R}}}]|\Phi[V',\{W'_{\alpha}\},\bm{w'}_{\mathrm{R}}]}=\bm{w}_{\mathrm{R}}^{\dagger} l(L/2) \bm{w'}_{\mathrm{R}}\\
&\qquad+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\, {\ensuremath{(l(x)|\sum_{\alpha=1}^{q} W'_{\alpha}(x) \otimes \overline{W_{\alpha}(x)} | r(x))}}\\
&\qquad +\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\int_{x}^{+L/2}{\ensuremath{\mathrm{d}}}y\, \big(l(x)\big\vert\big[V'(x)\otimes 1_{D}+\sum_{\alpha=1}^{q} W'_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}\big] \mathscr{P}\mathrm{e}^{\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\times \big[1_{D}\otimes \overline{V(y)}+\sum_{\alpha=1}^{q}R_{\alpha}(y)\otimes \overline{W_{\alpha}(y)}\big]|r(y)\big)\\
&\qquad+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\int_{-L/2}^{x}{\ensuremath{\mathrm{d}}}y\, \big(l(y)\big\vert\big[1_{D}\otimes \overline{V(y)}+\sum_{\alpha=1}^{q}R_{\alpha}(y)\otimes \overline{W_{\alpha}(y)}\big] \mathscr{P}\mathrm{e}^{\int_{y}^{x}{\ensuremath{\mathrm{d}}}z\, {\ensuremath{\mathbb{T}}}(z)}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\times \big[V'(x)\otimes 1_{D}+\sum_{\alpha=1}^{q}W'_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}\big]\big\vert r(x)\big).
\end{split}\label{eq:phiphioverlap}$$ It defines a metric for the manifold ${\ensuremath{{\ensuremath{\mathcal{M}}}}}_{\mathrm{cMPS}}$ and features in any coordinate-invariant expression involving cMPS tangent vectors. We can use the gauge freedom in the representation of tangent vectors to simplify the expression above significantly. The counting argument for the gauge degrees of freedom is now less rigorous than in the discrete case. In general, we have $D^{2}$ parameters in $h(x)$ to eliminate $D^{2}$ degrees of freedom from $\{V(x),W_{1}(x),\ldots,W_{q}(x)\}$ at every point $x$. However, this is only correct if all linearly independent algebra-valued functions $h:{\ensuremath{\mathcal{R}}}\to{\ensuremath{\mathfrak{gl}}}(\mathbb{C},D)$ map to linearly independent matrix functions $[\mathscr{M}_{\Phi}^{[Q]},\{\mathscr{N}_{\alpha,\Phi}^{[R_{\alpha}]}\}]$. Let us show that by substituting $V(x)\leftarrow \tilde{V}(x)=V(x)+\mathscr{M}_{\Phi}[h](x)$ and $W_{\alpha}(x)\leftarrow \tilde{W}_{\alpha}(x)=W_{\alpha}(x)+\mathscr{N}_{\alpha,\Phi}[h](x)$ ($\forall \alpha=1,\ldots,q$), we can indeed impose $D^2$ conditions, such as the *left gauge fixing condition*: $${\ensuremath{(l(x)|}}\left[\tilde{V}(x)\otimes {\openone}_{D} + \sum_{\alpha=1}^{N} \tilde{W}_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}\right]=0.\label{eq:leftgaugefix}$$ This requires that $h$ is a solution of $$\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x}\big[l(x)h(x)\big]=\tilde{\mathscr{T}}^{(x)}\big[l(x)h(x)\big]-\left[l(x)V(x)+\sum_{\alpha=1}^{q} R_{\alpha}(x)^{\dagger} l(x) W_{\alpha}(x)\right]$$ which together with the boundary condition $h(-L/2)=0$ results in the solution $${\ensuremath{(l(x)h(x)|}}=-\int_{-L/2}^{x}{\ensuremath{\mathrm{d}}}y\, {\ensuremath{(l(y)|}}\left[V(y)\otimes{\openone}_{D}+\sum_{\alpha=1}^{q} W_{\alpha}(y)\otimes \overline{R}_{\alpha}(y)\right]\mathscr{P}\exp\left[\int_{y}^{x}{\ensuremath{\mathbb{T}}}(z)\,{\ensuremath{\mathrm{d}}}z\right].$$ This equation gives a
solution for $l(x)h(x)$. We can extract $h(x)$ by multiplying with $l(x)^{-1}$ to the left. The left density matrix $l(x)$ should be positive definite and hence invertible for every $x>-L/2$. However, at $x=-L/2$ it equals $l(-L/2)=\bm{v}_{\mathrm{L}}\bm{v}_{\mathrm{L}}^{\dagger}$ and thus becomes singular. Nevertheless, the limit $\lim_{x\to-L/2} h(x)$ should be well defined, since the right hand side of the equation above, which is multiplied with $l(x)^{-1}$, vanishes with a compensating scaling as $x\to -L/2$.
Alternatively, we can also impose a *right gauge fixing condition* $$\left[V(x)\otimes {\openone}_{D} + \sum_{\alpha=1}^{N} W_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}\right]{\ensuremath{|r(x))}}=0.\label{eq:rightgaugefix}$$
Finally, we remark that the tangent space $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}$ spanned by the states of Eq. contains the original cMPS $\ket{\Psi[Q,\{R_{\alpha}\}]}$, *e.g.* by choosing $V={\openone}_{D}/L$, $W_{\alpha}=0$ and $\bm{w}_{\mathrm{R}}=0$ or by choosing $V=W_{\alpha}=0$ and $\bm{w}_{\mathrm{R}}=\bm{v}_{\mathrm{R}}$. Both choices are related by a gauge transform with $h(x)=(x/L+1/2){\openone}_{D}$. For a general tangent vector $\ket{\Phi[V,\{W_{\alpha}\},\bm{w}_{\mathrm{R}}]}$, we obtain $$\begin{split}
&\braket{\Psi[\overline{Q},\{\overline{R}_{\alpha}\}]|\Phi[V,\{W_{\alpha}\},\bm{w}_{\mathrm{R}}]}=\bm{v}_{\mathrm{R}}^{\dagger}l(L/2) \bm{w}_{\mathrm{R}}\\
&\qquad\qquad\qquad+\int_{-L/2}^{+L/2}{\ensuremath{\mathrm{d}}}x\,{\ensuremath{(l(x)|V(x)\otimes {\openone}_{D}+\sum_{\alpha=1}^{N} W_{\alpha}(x)\otimes \overline{R_{\alpha}(x)}|r(x))}}.
\end{split}\label{eq:overlappsiphi}$$ If we fix the gauge according to either the left or right gauge fixing prescription, the second term cancels. We can restrict to the orthogonal complement of $\ket{\Psi[Q,\{R_{\alpha}\}]}$ in $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}$, which is denoted as $T_{\ket{\Psi[Q,\{R_{\alpha}\}]}} {\ensuremath{\mathcal{M}}}_{\mathrm{cMPS}}^\perp$, by further imposing $$\bm{v}_{\mathrm{R}}^{\dagger}l(L/2) \bm{w}_{\mathrm{R}}=0.$$
Uniform case
------------
We specialize again to the case of translation invariant systems in the thermodynamic limit. While the parameter space is now finite dimensional, it is fruitful to still consider the full tangent space to the manifold of all (translation non-invariant) cMPS at the special uniform point $\ket{\Psi(Q,\{R_{\alpha}\})}$. This boils down to allowing space-dependent matrix functions $V(x)$ and $W_{\alpha}(x)$ in the definition of the tangent vectors. We can then decompose the full tangent space into sectors ${\ensuremath{{\ensuremath{\mathbb{T}}}}}_{\Phi_{p}}$ of momentum $p\in\mathbb{R}$ by introducing Fourier modes $V(x)=V {\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}p x}$ and $W_{\alpha}(x)=W_{\alpha}{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}p x}$, resulting in $$\begin{gathered}
\ket{\Phi_{p}(V,\{W_{\alpha}\};Q,\{R_{\alpha}\})}=\ket{\Phi_{p}^{(Q,\{R_{\alpha}\})}(V,\{W_{\alpha}\})}=\\
\int_{-\infty}^{+\infty}{\ensuremath{\mathrm{d}}}x\,{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}p x} \bm{v}_{\mathrm{L}}^{\dagger}{\ensuremath{\hat{U}}}(-\infty,x) \left(V\otimes {\ensuremath{\hat{{\openone}}}}+\sum_{\alpha=1}^{N}W_{\alpha}\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\alpha}(x)\right){\ensuremath{\hat{U}}}(x,+\infty)\bm{v}_{\mathrm{R}}\ket{\Omega}.\end{gathered}$$ Note that the boundary vectors $\bm{v}_{\mathrm{L},\mathrm{R}}$ are irrelevant for the bulk properties of these states, and they are therefore not included in the set of variational parameters in the thermodynamic limit. Consequently, we also do not need to differentiate with respect to one of them in order to define the tangent space.
We can also compute the overlap between two of these tangent vectors and obtain $$\begin{split}
&\braket{\Phi_p(\overline{V},\{\overline{W}_{\alpha}\})|\Phi_{p'}(V',\{W'_{\alpha}\})}=\int_{-\infty}^{+\infty}{\ensuremath{\mathrm{d}}}x\, {\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}(p'-p) x}{\ensuremath{(l|\sum_{\alpha=1}^{q} W'_{\alpha} \otimes \overline{W_{\alpha}} | r)}}\\
&\qquad +\int_{-\infty}^{+\infty}{\ensuremath{\mathrm{d}}}x\int_{x}^{+\infty}{\ensuremath{\mathrm{d}}}y\, {\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}(p'x - py)}\big(l\big\vert\big[V'\otimes 1_{D}+\sum_{\alpha=1}^{q} W'_{\alpha}\otimes \overline{R_{\alpha}}\big] \mathrm{e}^{(y-x){\ensuremath{\mathbb{T}}}}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\times \big[1_{D}\otimes \overline{V}+\sum_{\alpha=1}^{q}R_{\alpha}\otimes \overline{W_{\alpha}}\big]|r\big)\\
&\qquad+\int_{-\infty}^{+\infty}{\ensuremath{\mathrm{d}}}x\int_{-\infty}^{x}{\ensuremath{\mathrm{d}}}y\,{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}(p'y-px)} \big(l\big\vert\big[1_{D}\otimes \overline{V}+\sum_{\alpha=1}^{q}R_{\alpha}\otimes \overline{W_{\alpha}}\big] \mathrm{e}^{(x-y){\ensuremath{\mathbb{T}}}}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\times \big[V'\otimes 1_{D}+\sum_{\alpha=1}^{q}W'_{\alpha}\otimes \overline{R_{\alpha}}\big]\big\vert r\big).
\end{split}$$ If we again resort to the decomposition of Eq. , we can further evaluate this to $$\begin{split}
&\braket{\Phi_p(\overline{V},\{\overline{W}_{\alpha}\})|\Phi_{p'}(V',\{W'_{\alpha}\})}=\\
&\qquad 2\pi\delta(p'-p)\Big[{\ensuremath{(l|\sum_{\alpha=1}^{q} W'_{\alpha} \otimes \overline{W_{\alpha}} | r)}}\\
&\qquad\qquad\qquad +\big(l\big\vert\big[V'\otimes 1_{D}+\sum_{\alpha=1}^{q} W'_{\alpha}\otimes \overline{R_{\alpha}}\big](-{\ensuremath{\mathbb{T}}}+{\ensuremath{\mathrm{i}}}p)^{\mathsf{P}}\big[1_{D}\otimes \overline{V}+\sum_{\alpha=1}^{q}R_{\alpha}\otimes \overline{W_{\alpha}}\big]|r\big)\\
&\qquad\qquad\qquad +\big(l\big\vert\big[1_{D}\otimes \overline{V}+\sum_{\alpha=1}^{q}R_{\alpha}\otimes \overline{W_{\alpha}}\big] (-{\ensuremath{\mathbb{T}}}-{\ensuremath{\mathrm{i}}}p)^{\mathsf{P}} \big[V'\otimes 1_{D}+\sum_{\alpha=1}^{q}W'_{\alpha}\otimes \overline{R_{\alpha}}\big]\big\vert r\big)\Big]\\
&\qquad+(2\pi)^2 \delta(p) \delta(p')\big(l\big\vert\big[V'\otimes 1_{D}+\sum_{\alpha=1}^{q} W'_{\alpha}\otimes \overline{R_{\alpha}}\big]\big\vert r\big)\big(l\big\vert\big[1_{D}\otimes \overline{V}+\sum_{\alpha=1}^{q}R_{\alpha}\otimes \overline{W_{\alpha}}\big]|r\big).
\end{split}\label{eq:phipoverlap}$$ The momentum eigenstates $\ket{\Phi_{p}(V,\{W_{\alpha}\})}$ cannot be normalized to unity in the thermodynamic limit, but rather satisfy a $\delta$-normalization. For $p=p'=0$, there is an additional divergence which is stronger than the $\delta$-normalization. It can be related to the overlap between the $\ket{\Phi_{p}(V,\{W_{\alpha}\})}$ and the original cMPS $\ket{\Psi(Q,\{R_{\alpha}\})}$, which is given by $$\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|\Phi_p(V,\{W_{\alpha}\})}=2\pi\delta(p) \big(l\big\vert\big[V\otimes 1_{D}+\sum_{\alpha=1}^{q} W_{\alpha}\otimes \overline{R_{\alpha}}\big]\big\vert r\big).\label{eq:psiphipoverlap}$$
As before, a one-parameter family of local gauge transformations $g(x;s)=\exp(sh(x))$ with $h(x)\in{\ensuremath{\mathfrak{gl}}}(D;\mathbb{C})$ induces a map to the kernel of the representation $\Phi_{p}$ of ${\ensuremath{{\ensuremath{\mathbb{T}}}}}_{\Phi_{p}}$ by setting $h(x)=h{\ensuremath{\mathrm{e}}}^{{\ensuremath{\mathrm{i}}}p x}$, so that $$\ket{\Phi_{p}(\mathscr{M}_{\Phi_{p}}^{(Q)}(h),\{\mathscr{N}_{\alpha,\Phi_{p}}^{(R_{\alpha})}(h)\};Q,\{R_{\alpha}\})}=0,$$ with $$\begin{aligned}
\mathscr{M}_{\Phi_{p}}^{(Q)}(h)&=[Q,h]+{\ensuremath{\mathrm{i}}}p h&&\text{and}&\mathscr{N}_{\alpha,\Phi_{p}}^{(R_{\alpha})}(h)=[R_{\alpha},h].\end{aligned}$$ We henceforth omit the superscript notation of $Q$ and $R_{\alpha}$. The kernel of the map $\Phi_{p}$ is thus $D^{2}$-dimensional, except at $p=0$. This can easily be proven, since for every non-zero $h\in{\ensuremath{\mathfrak{gl}}}(D;\mathbb{C})$, $\mathscr{M}_{\Phi_{p}}(h)\neq 0$ or $\mathscr{N}_{\alpha,\Phi_{p}}(h)\neq 0$ for at least one $\alpha\in\{1,\ldots,N\}$. Indeed, suppose that $\mathscr{M}_{\Phi_{p}}(h)= 0$ and $\mathscr{N}_{\alpha,\Phi_{p}}(h)=0$ for all $\alpha$. Imposing that $$\mathscr{M}_{\Phi_{p}}(h) r+\sum_{\alpha=1}^{N} \mathscr{N}_{\alpha,\Phi_{p}}(h) r R_{\alpha}^{\dagger} =0$$ results in ${\ensuremath{\mathbb{T}}} {\ensuremath{|h r)}}={\ensuremath{\mathrm{i}}}p {\ensuremath{|h r)}}$ which has no non-trivial solution except at $p=0$, where we find $h=c{\openone}_{D}$ with $c\in\mathbb{C}$. At nonzero momenta, we can use a gauge fixing condition to reduce the number of parameters by $D^{2}$. At $p=0$, we can only reduce the number of parameters by $D^{2}-1$ through gauge fixing. But imposing orthogonality to $\ket{\Psi(Q,R)}$ manually at $p=0$ allows us to discard one additional parameter. For any momentum $p$, we can uniquely fix the gauge of any tangent vector in ${\ensuremath{{\ensuremath{\mathbb{T}}}}}_{\Phi_{p}}^{\perp}$ by setting ${\ensuremath{(l|}}\big[V\otimes 1_{D} + \sum_{\alpha=1}^{N}W_{\alpha}\otimes \overline{R_{\alpha}}\big]=0$ or $\big[V\otimes 1_{D} + \sum_{\alpha=1}^{N}W_{\alpha}\otimes \overline{R_{\alpha}}\big]{\ensuremath{|r)}}=0$, corresponding to the left and right gauge fixing conditions respectively. It can indeed be checked that with either one of these conditions being satisfied, the overlap $\braket{\Psi(\overline{Q},\{\overline{R}_{\alpha}\})|\Phi_p(V,\{W_{\alpha}\})}$ given in Eq. vanishes even for $p=0$. In addition, if either gauge fixing condition is satisfied, the overlap between two tangent vectors simplifies significantly, as only the local term survives.
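As a concrete illustration (not part of the original derivation), the following Python/NumPy sketch constructs a left-canonical uniform cMPS with a single particle species ($q=1$), for which the left fixed point is $l={\openone}_{D}$, and solves the left gauge fixing condition for $V$ given an arbitrary $W$. The identification of ${\ensuremath{(l|}}\big[V\otimes 1_{D}+W\otimes\overline{R}\big]$ with the matrix $lV+R^{\dagger}lW$ is an assumption about the vectorization convention:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4

# Random R and Hermitian K; Q = iK - R†R/2 makes the cMPS left-canonical,
# i.e. the left fixed point of the transfer operator is l = identity.
R = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
K = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
K = (K + K.conj().T) / 2
Q = 1j * K - 0.5 * R.conj().T @ R

l = np.eye(D)
# Stationarity of the left fixed point: Q† l + l Q + R† l R = 0.
assert np.linalg.norm(Q.conj().T @ l + l @ Q + R.conj().T @ l @ R) < 1e-10

# Left gauge fixing (l|[V ⊗ 1 + W ⊗ R̄] = 0, i.e. l V + R† l W = 0,
# solved for V given an arbitrary W.
W = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
V = -np.linalg.solve(l, R.conj().T @ l @ W)
assert np.linalg.norm(l @ V + R.conj().T @ l @ W) < 1e-10
```

With this choice of $V$, the non-local double-integral terms in the tangent-vector overlap drop out and only the local term survives, as stated above.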
Also note the difference with the approach for translation non-invariant systems in the previous subsection. There we could impose the left or right gauge fixing condition for any $x$, without this automatically implying that $\ket{\Phi[V,\{W_{\alpha}\},\bm{w}_{\mathrm{R}}]}\perp \ket{\Psi[Q,\{R_{\alpha}\}]}$, since a non-zero overlap between the tangent vector and the original cMPS could be encoded in the changing boundary vector $\bm{w}_{\mathrm{R}}$.
Conclusion and outlook
======================
This manuscript provides a detailed description of a variational class of wave functions for one-dimensional quantum field theories that goes by the name of “continuous matrix product states”. We reviewed different alternative constructions that produce the same class of states and have their own merits, *e.g.* in offering clear hints on how to generalize this class to different settings such as open quantum systems or higher-dimensional theories.
We illustrated how to formulate the cMPS ansatz for the most general class of theories including an arbitrary number of bosonic and fermionic particles, and were naturally led to a set of constraints that the variational parameters needed to satisfy in order to produce a finite kinetic energy density. We also discussed other physical constraints such as fermion parity. We then proceeded by explaining in detail how to compute expectation values, in particular for the case of systems with open boundary conditions. We provided some additional details for the case of systems with translation invariance, where we can use the expectation value of a correlation function to define an ultraviolet cutoff within the cMPS state.
We also discussed the important topic of gauge invariance in the cMPS representation. Finally, we introduced the concept of cMPS tangent vectors, and discussed how the gauge invariance allows us to represent them in such a way that the metric of the cMPS manifold simplifies tremendously.
While we have not introduced any practical algorithms or recipes for finding cMPS approximations of ground states or for describing other physical phenomena, we have introduced all necessary definitions and concepts in order to comfortably work with cMPS. This set of definitions can now be used in follow-up papers that will focus on new algorithms. As such, the current paper provides a stepping stone that will hopefully spur more research in the context of variational methods for quantum field theories in one dimension and beyond.
JH acknowledges fruitful discussions with Michaël Mariën. This work was supported by the EU grants QUERG and QFTCMPS, by the FWF SFB grants FoQuS and ViCoM, by the DFG cluster of excellence NIM and by the cluster of excellence EXC 201 Quantum Engineering and Space-Time Research.
A useful formula {#a:formula}
================
Consider an operator ${\ensuremath{\hat{U}}}(x,y)$ defined as $${\ensuremath{\hat{U}}}(x,y)={\ensuremath{\mathscr{P}\exp}}\left[\int_x^y {\ensuremath{\hat{A}}}(z)\,{\ensuremath{\mathrm{d}}}z\right],$$ where ${\ensuremath{\hat{A}}}$ is not necessarily antihermitian. This operator satisfies $$\begin{aligned}
\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} {\ensuremath{\hat{U}}}(x,y)&=-{\ensuremath{\hat{A}}}(x) {\ensuremath{\hat{U}}}(x,y),&
\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y} {\ensuremath{\hat{U}}}(x,y)&=+{\ensuremath{\hat{U}}}(x,y) {\ensuremath{\hat{A}}}(y).\label{eq:diffU}\end{aligned}$$ For the derivatives of the inverse operator ${\ensuremath{\hat{U}}}(x,y)^{-1}$ we can use the general result $$\begin{aligned}
\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} {\ensuremath{\hat{U}}}(x,y)^{-1} &= - {\ensuremath{\hat{U}}}(x,y)^{-1} \left(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}x} {\ensuremath{\hat{U}}}(x,y)\right) {\ensuremath{\hat{U}}}(x,y)^{-1}=+{\ensuremath{\hat{U}}}(x,y)^{-1} {\ensuremath{\hat{A}}}(x),\\
\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y} {\ensuremath{\hat{U}}}(x,y)^{-1} &= - {\ensuremath{\hat{U}}}(x,y)^{-1} \left(\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y} {\ensuremath{\hat{U}}}(x,y)\right) {\ensuremath{\hat{U}}}(x,y)^{-1}=- {\ensuremath{\hat{A}}}(y){\ensuremath{\hat{U}}}(x,y)^{-1}.\end{aligned}$$
Now define the following operator quantity depending on an arbitrary operator ${\ensuremath{\hat{B}}}$ $${\ensuremath{\hat{C}}}(x,y)={\ensuremath{\hat{U}}}(x,y){\ensuremath{\hat{B}}} {\ensuremath{\hat{U}}}(x,y)^{-1}.$$ By taking the derivative with respect to $y$, we obtain $$\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y}{\ensuremath{\hat{C}}}(x,y)={\ensuremath{\hat{U}}}(x,y)\left[{\ensuremath{\hat{A}}}(y),{\ensuremath{\hat{B}}}\right] {\ensuremath{\hat{U}}}(x,y)^{-1}.$$ Integrating ${\ensuremath{\mathrm{d}}}{\ensuremath{\hat{C}}}(x,z) /{\ensuremath{\mathrm{d}}}z$ for $z$ from $x$ to $y$ and making use of the initial value ${\ensuremath{\hat{C}}}(x,x)={\ensuremath{\hat{B}}}$ results in $${\ensuremath{\hat{C}}}(x,y)={\ensuremath{\hat{B}}}+\int_x^y {\ensuremath{\hat{U}}}(x,z) \left[{\ensuremath{\hat{A}}}(z),{\ensuremath{\hat{B}}}\right]{\ensuremath{\hat{U}}}(x,z)^{-1}\, {\ensuremath{\mathrm{d}}}z.$$ We then multiply this equality with ${\ensuremath{\hat{U}}}(x,y)$ to the right and make use of the obvious identity ${\ensuremath{\hat{U}}}(x,y)={\ensuremath{\hat{U}}}(x,z) {\ensuremath{\hat{U}}}(z,y)$ for any $x<z<y$ in the integral of the right hand side in order to obtain our final result $$\left[{\ensuremath{\hat{U}}}(x,y),{\ensuremath{\hat{B}}}\right]=\int_x^y {\ensuremath{\hat{U}}}(x,z) \left[{\ensuremath{\hat{A}}}(z),{\ensuremath{\hat{B}}}\right]{\ensuremath{\hat{U}}}(z,y)\,{\ensuremath{\mathrm{d}}}z.\label{eq:commutatorequality}$$
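As a numerical sanity check (not part of the derivation), the commutator identity above can be verified for a constant generator ${\ensuremath{\hat{A}}}(z)\equiv A$, for which the path-ordered exponential reduces to an ordinary matrix exponential; the quadrature grid and tolerance below are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import simpson

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))   # constant generator: P exp reduces to expm
B = rng.normal(size=(n, n))
t = 0.7                       # interval length y - x

U = lambda s: expm(s * A)     # U(x, x+s) for constant A

lhs = U(t) @ B - B @ U(t)     # [U(x,y), B]

# Right-hand side: ∫_0^t U(s) [A, B] U(t - s) ds, by Simpson quadrature.
s = np.linspace(0.0, t, 201)
comm = A @ B - B @ A
integrand = np.array([U(si) @ comm @ U(t - si) for si in s])
rhs = simpson(integrand, x=s, axis=0)

assert np.linalg.norm(lhs - rhs) < 1e-6 * np.linalg.norm(lhs)
```

For a genuinely position-dependent $\hat{A}(z)$ the left-hand side would instead require a discretized path-ordered product, but the identity holds in the same way.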
We can further generalize this result. Suppose we have two operators ${\ensuremath{\hat{U}}}_{\pm}(x,y)$ defined as $${\ensuremath{\hat{U}}}_{\pm}(x,y)={\ensuremath{\mathscr{P}\exp}}\left[\int_x^y \left\{{\ensuremath{\hat{A}}}_1(z) \pm {\ensuremath{\hat{A}}}_2(z)\right\}\,{\ensuremath{\mathrm{d}}}z\right],$$ for arbitrary ${\ensuremath{\hat{A}}}_{1,2}(z)$. If we consider the quantity $${\ensuremath{\hat{C}}}(x,y)={\ensuremath{\hat{U}}}_{-}(x,y){\ensuremath{\hat{B}}} {\ensuremath{\hat{U}}}_{+}(x,y)^{-1},$$ then we obtain $$\frac{{\ensuremath{\mathrm{d}}}\ }{{\ensuremath{\mathrm{d}}}y}{\ensuremath{\hat{C}}}(x,y)={\ensuremath{\hat{U}}}_{-}(x,y)\left(\left[{\ensuremath{\hat{A}}}_1(y),{\ensuremath{\hat{B}}}\right]-\left\{{\ensuremath{\hat{A}}}_{2}(y),{\ensuremath{\hat{B}}}\right\}\right) {\ensuremath{\hat{U}}}_{+}(x,y)^{-1},$$ using a similar derivation. Continuing along the same line results in $${\ensuremath{\hat{B}}}{\ensuremath{\hat{U}}}_{+}(x,y) -{\ensuremath{\hat{U}}}_{-}(x,y){\ensuremath{\hat{B}}} = \int_x^y {\ensuremath{\hat{U}}}_{-}(x,z)\left(\left[{\ensuremath{\hat{B}}},{\ensuremath{\hat{A}}}_1(z)\right]+\left\{{\ensuremath{\hat{B}}},{\ensuremath{\hat{A}}}_2(z)\right\}\right){\ensuremath{\hat{U}}}_{+}(z,y)\,{\ensuremath{\mathrm{d}}}z.\label{eq:commutatorequalitygeneralized}$$
Higher order regularity conditions {#a:higherorderregularity}
==================================
In this appendix we derive additional regularity conditions by considering higher derivatives of the field operators acting on the ground state. Throughout this appendix, we assume that Eq. is fulfilled and $R_{\alpha}(x)$ has well-behaved higher order derivatives. We now consider the state $({\ensuremath{\mathrm{d}}}^{2}{\ensuremath{\hat{\psi}}}_{\alpha}(x)/ {\ensuremath{\mathrm{d}}}x^{2}) \ket{\Psi[Q,\{R_{\beta}\}]}$, which contains a contribution with infinite norm unless $$\left[\frac{{\ensuremath{\mathrm{d}}}R_\alpha}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_\alpha(x)], R_{\beta}(x)\right]_{\mp}=0,\label{eq:regcondition2}$$ where $[\cdot,\cdot]_{\mp}$ is a commutator ($-$) or anticommutator ($+$) for $\eta_{\alpha,\beta}=\pm 1$. If $Q$ and $R_{\alpha}$ obey all equations to have a ‘well defined’ derivative up to order $n$, so that the state $({\ensuremath{\mathrm{d}}}^{n}{\ensuremath{\hat{\psi}}}(x)/{\ensuremath{\mathrm{d}}}x^{n})\ket{\Psi[Q,\{R_{\beta}\}]}$ is normalizable, the sufficient condition to eliminate all harmful contributions from $({\ensuremath{\mathrm{d}}}^{n+1}{\ensuremath{\hat{\psi}}}(x)/{\ensuremath{\mathrm{d}}}x^{n+1})\ket{\Psi[Q,\{R_{\beta}\}]}$ is $$\begin{gathered}
\bigg[\frac{{\ensuremath{\mathrm{d}}}^{n}\ }{{\ensuremath{\mathrm{d}}}x^{n}}R_\alpha(x) +\frac{\mathrm{d}^{n-1}\ }{\mathrm{d} x^{n-1}}[Q(x),R_\alpha(x)]+\frac{\mathrm{d}^{n-2}\ }{\mathrm{d} x^{n-2}}[Q(x),[Q(x),R_\alpha(x)]]\\
+ \ldots + [Q(x),[\ldots,[Q(x),R_{\alpha}(x)]] \ldots ] , R_{\beta}(x)\bigg]_{\mp}=0.\label{eq:regconditionn}\end{gathered}$$
We can also impose regularity of the mixed derivatives of the $N$-particle wave function, by first evaluating ${\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)\ket{\Psi[Q,\{R_{\gamma}\}]}$ $$\begin{gathered}
{\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)\ket{\Psi[Q,\{R_{\gamma}\}]}=\\
\theta(y-x)\operatorname{tr}\left[B {\ensuremath{\hat{U}}}_{\alpha,\beta}(-L/2,x) \eta_{\beta,\alpha}R_{\alpha}(x) {\ensuremath{\hat{U}}}_{\beta}(x,y)R_{\beta}(y){\ensuremath{\hat{U}}}(y,+L/2)\right]\ket{\Omega}\\
+\theta(x-y)\operatorname{tr}\left[B {\ensuremath{\hat{U}}}_{\alpha,\beta}(-L/2,y) R_{\beta}(y) {\ensuremath{\hat{U}}}_{\alpha}(y,x)R_{\alpha}(x){\ensuremath{\hat{U}}}(x,+L/2)\right]\ket{\Omega}\end{gathered}$$ where a new set of operators ${\ensuremath{\hat{U}}}_{\alpha,\beta}(x,y)$ ($\alpha,\beta=1,\ldots,q$) was introduced as $${\ensuremath{\hat{U}}}_{\alpha,\beta}(x,y)=\mathscr{P} \exp\left[\int_{x}^{y}{\ensuremath{\mathrm{d}}}z\, \left\{Q(z)\otimes {\ensuremath{\hat{{\openone}}}} + \sum_{\gamma=1}^{q}\eta_{\alpha,\gamma}\eta_{\beta,\gamma}R_{\gamma}(z)\otimes {\ensuremath{\hat{\psi}^\dagger}}_{\gamma}(z)\right\}\right]\label{eq:defUalphabeta}.$$ Note that the regularity condition in Eq. is sufficient for the annihilation of two particles ${\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)\ket{\Psi[Q,\{R_{\gamma}\}]}$ to be continuous at $x=y$. By first differentiating with respect to $x$, we obtain $$\begin{gathered}
\left(\frac{{\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x)\right){\ensuremath{\hat{\psi}}}_{\beta}(y)\ket{\Psi[Q,\{R_{\gamma}\}]}\\
\shoveleft{\quad=\theta(y-x)\operatorname{tr}\Bigg[B {\ensuremath{\hat{U}}}_{\alpha,\beta}(-L/2,x) \eta_{\beta,\alpha}\bigg(\frac{{\ensuremath{\mathrm{d}}}R_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x) +\big[Q(x),R_{\alpha}(x)\big]\bigg)}\\
\shoveright{\times{\ensuremath{\hat{U}}}_{\beta}(x,y)R_{\beta}(y){\ensuremath{\hat{U}}}(y,+L/2)\Bigg]\ket{\Omega}\ \ }\\
\shoveleft{\quad\quad+\theta(x-y)\operatorname{tr}\Bigg[B {\ensuremath{\hat{U}}}_{\alpha,\beta}(-L/2,y) R_{\beta}(y) {\ensuremath{\hat{U}}}_{\alpha}(y,x)}\\
\times\bigg(\frac{{\ensuremath{\mathrm{d}}}R_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_{\alpha}(x)]\bigg){\ensuremath{\hat{U}}}(x,+L/2)\Bigg]\ket{\Omega},\end{gathered}$$ where we have assumed the regularity condition in Eq. to hold. This allows one to eliminate the fixed insertion of particles at position $x$ as well as the terms obtained from differentiating the Heaviside functions (*i.e.* the terms proportional to $\delta(x-y)$). Such terms would indeed arise if ${\ensuremath{\hat{\psi}}}_{\alpha}(x){\ensuremath{\hat{\psi}}}_{\beta}(y)\ket{\Psi[Q,\{R_{\gamma}\}]}$ were not continuous at $x=y$. If we now also differentiate with respect to $y$, we obtain a divergent contribution $$-\delta(x-y)\operatorname{tr}\left[B {\ensuremath{\hat{W}}}_{\alpha,\beta}(-L/2,x) \left[R_{\beta}(x),\frac{{\ensuremath{\mathrm{d}}}R_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_{\alpha}(x)]\right]_{\mp}{\ensuremath{\hat{U}}}(x,+L/2)\right]\ket{\Omega}.$$ If we differentiated with respect to $y$ first, and then to $x$, the divergent contribution is $$\delta(x-y)\operatorname{tr}\left[B {\ensuremath{\hat{W}}}_{\alpha,\beta}(-L/2,x) \left[\frac{{\ensuremath{\mathrm{d}}}R_{\beta}}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_{\beta}(x)],R_{\alpha}(x)\right]_{\mp}{\ensuremath{\hat{U}}}(x,+L/2)\right]\ket{\Omega}.$$ Since we are working under assumption of the regularity condition $[R_{\beta}(x),R_{\alpha}(x)]_{\mp}=0$ \[Eq. \], it is easy to show that $[R_{\beta}(x),{\ensuremath{\mathrm{d}}}R_{\alpha}(x)/{\ensuremath{\mathrm{d}}}x]_{\mp}=-[{\ensuremath{\mathrm{d}}}R_{\beta}(x)/{\ensuremath{\mathrm{d}}}x,R_{\alpha}(x)]_{\mp}$ and also $[R_{\beta}(x),[Q(x),R_{\alpha}(x)]]_{\mp}=-[[Q(x),R_{\beta}(x)],R_{\alpha}(x)]_{\mp}$, so that both diverging contributions are equal. 
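The two algebraic identities invoked in the last step can be checked directly for the bosonic (commutator) case; the sketch below (illustrative only) builds commuting matrices $R_{\alpha}, R_{\beta}$ as polynomials in a common random matrix and verifies $[R_{\beta},[Q,R_{\alpha}]]=-[[Q,R_{\beta}],R_{\alpha}]$:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 6
M = rng.normal(size=(D, D))
# R_alpha and R_beta commute by construction (polynomials in the same matrix).
Ra = M @ M + 2.0 * M
Rb = 3.0 * M - M @ M @ M
Q = rng.normal(size=(D, D))

comm = lambda X, Y: X @ Y - Y @ X
scale = np.linalg.norm(Q) * np.linalg.norm(Ra) * np.linalg.norm(Rb)

# [R_b, R_a] = 0 up to floating-point rounding.
assert np.linalg.norm(comm(Rb, Ra)) < 1e-12 * scale
# [R_b, [Q, R_a]] = -[[Q, R_b], R_a] whenever [R_b, R_a] = 0.
assert np.linalg.norm(comm(Rb, comm(Q, Ra)) + comm(comm(Q, Rb), Ra)) < 1e-12 * scale
```

The fermionic (anticommutator) version follows the same expansion, with $\{R_{\beta},R_{\alpha}\}=0$ playing the role of the vanishing commutator.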
By imposing $$\left[\frac{{\ensuremath{\mathrm{d}}}R_{\beta}}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_{\beta}(x)],R_{\alpha}(x)\right]_{\mp}=-\left[R_{\beta}(x),\frac{{\ensuremath{\mathrm{d}}}R_{\alpha}}{{\ensuremath{\mathrm{d}}}x}(x) +[Q(x),R_{\alpha}(x)]\right]_{\mp}=0\label{eq:regmixed}$$ the mixed derivative $({\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}_{\alpha}(x)/{\ensuremath{\mathrm{d}}}x)({\ensuremath{\mathrm{d}}}{\ensuremath{\hat{\psi}}}_{\beta}(y)/{\ensuremath{\mathrm{d}}}y)\ket{\Psi[Q,\{R_{\gamma}\}]}$ is well defined and normalizable. Note that Eq. is identical to Eq. , so that regularity of the mixed product of two first order derivatives is guaranteed if the second order derivative is regular, or vice versa.
The higher order regularity conditions derived in this appendix put very strong constraints on $Q$ and $R_{\alpha}$ that might be hard to satisfy with finite-dimensional matrices. As mentioned in the main text, satisfying the original condition in Eq. , as imposed by the finiteness of the kinetic energy, should be sufficient for most practical applications.
[^1]: cMPS still obey the infrared orthogonality catastrophe when formulated in the thermodynamic limit (see Section \[s:ti\])
[^2]: If there is no insertion at the same position, we can always insert a unit operator ${\openone}_D$
[^3]: While we mentioned in Section \[s:bc\] that we always assume the matrix functions $Q$ and $R_{\alpha}$ to satisfy the proper boundary conditions, we do not have to use the condition in Eq. at any point in deriving the expectation value of the Hamiltonian ${\ensuremath{\hat{H}}}$ in Eq. .
[^4]: While we take a standard matrix logarithm, it also makes sense to define the linear maps $\mathscr{T}$, $\tilde{\mathscr{T}}$ as the logarithm of —or the generator for— the completely positive maps $\mathscr{E}$ and $\tilde{\mathscr{E}}$ associated to the left or right action of ${\ensuremath{\mathbb{E}}}$. However, not all completely positive maps have a natural logarithm associated with them, as was shown in Ref. .
Public Cloud: Free Trials from Rackspace and VMware
To entice more customers to try their public clouds, Rackspace and VMware are now offering cloud computing trials either at a very low cost or for free. Rackspace and Red Hat are both offering free versions of their OpenStack-based private clouds. VMware, on the other hand, is offering a free vCloud trial to the public. This move benefits enterprises currently running virtualized environments, encouraging them to move to either a private or a public cloud.
The present invention relates to footwear and, more particularly, to footwear that promotes the natural movement of the wearer's foot and conformity to the ground.
Conventional footwear typically includes two primary elements: an upper and a sole construction. The upper at least partially covers the wearer's foot, and the sole construction provides support for the wearer's sole. The sole construction can include multiple layers and materials. For example, conventional sole constructions can include a molded foam midsole over a natural rubber outsole. The molded foam midsole can provide cushioning while the natural rubber outsole can provide traction and wear resistance.
Conventional sole constructions are primarily flexible in a single direction. In particular, many sole constructions are intended to flex in the upward direction, in which the ground engaging surface of the outsole is convex. Flexibility of this kind is typically achieved with modifications to the outsole. For example, it is known to introduce grooves in the outsole to promote the bending of the outsole in the upward direction. It is also known to separate the outsole into individual components that move away from each other as the outsole is bent in an upward direction.
The natural movement of the wearer's foot is not limited to flexure in the upward direction, however. In addition to upward flexure, or dorsi-flexion, the human foot naturally exhibits downward flexure, or plantar-flexion. Yet conventional sole constructions typically exhibit significant resistance to plantar-flexion. For example, many conventional sole constructions include an outsole or a midsole that resists plantar-flexion of the wearer's foot. By opposing the natural ability of the human foot to flex downwardly, many such sole constructions compromise stability and grip on all but even surfaces.
Variable capacitors are used in many applications, such as matching networks and variable filters. They allow for the precise tuning, after assembly, of frequency and/or impedance in applications needing a dynamic system response, such as in plasma processes. The ability to dynamically change impedance and frequency response provides more flexibility for the applications variable capacitors are used in, and can compensate for variations from unit-to-unit. Some examples of variable capacitors are vacuum variable capacitors (VVCs) and electronically variable capacitors (EVCs).
In electronic circuits, matching networks are used to match the source impedance to the load impedance and vice versa. That is, the source, being of some impedance with a resistive part and a reactive part, will be terminated into the complex conjugate impedance, and the load impedance will be driven by the complex conjugate of its impedance. The complex conjugate is used to eliminate the reactive part of the impedance, leaving only the resistive part, and the resistive part is made equal. This is done so that maximum power transfer can be achieved at the load.
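As a brief numerical illustration of why the complex conjugate is used (the values below are hypothetical, not tied to any particular generator or load), a short Python sketch can confirm that the conjugate load maximizes the power delivered:

```python
import numpy as np

Vs = 10.0               # source phasor amplitude (V, peak), hypothetical
Zs = 50.0 + 30.0j       # source impedance (ohms), hypothetical

def load_power(ZL):
    """Average power delivered to load ZL in a series source/load circuit."""
    I = Vs / (Zs + ZL)                  # phasor current
    return 0.5 * np.abs(I) ** 2 * np.real(ZL)

# Conjugate match cancels the reactance and equalizes the resistances,
# delivering the theoretical maximum Vs^2 / (8 Rs).
P_match = load_power(np.conj(Zs))
assert np.isclose(P_match, Vs**2 / (8 * Zs.real))   # 0.25 W here

# No load on a coarse grid of (RL, XL) values does better.
RL = np.linspace(1.0, 200.0, 80)[:, None]
XL = np.linspace(-100.0, 100.0, 80)[None, :]
P_grid = load_power(RL + 1j * XL)
assert P_match >= P_grid.max()
```

Any reactance left uncancelled, or any resistance mismatch, strictly reduces the delivered power.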
In plasma applications, the load impedance can vary depending on several factors, such as time, power level, pressure, gas flow, chemistry of the gasses, and whether the plasma has been struck. Accordingly, the matching network must be able to automatically vary itself to ensure that the maximum power transfer is achieved. This helps with repeatability in both the depositing and etching.
EVCs use switches to add or remove fixed capacitors, such as an MLCC (multi-layer ceramic capacitor), in a circuit. The capacitor and switch are placed in series. This circuit is then placed in parallel with other capacitor/switch circuits. The parallel circuits allow the capacitors to be simply added or subtracted in the circuit, depending on how many switches are opened or closed. In the case where all the switches are open, the EVC will be at its lowest capacitance value. In the case where they are all closed, the EVC will be at its highest capacitance value.
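The add/subtract behavior of the parallel switch-capacitor branches can be sketched as follows. The binary-weighted branch values are purely illustrative — real EVC designs choose values to meet their range and resolution targets — but they show how distinct switch states map to distinct total capacitances without overlap:

```python
from itertools import product

def attainable_capacitances(branch_caps):
    """All capacitance values an EVC can present, one per switch state.

    Each branch is a fixed capacitor in series with a switch; closed
    branches sit in parallel, so their capacitances simply add."""
    values = set()
    for state in product([0, 1], repeat=len(branch_caps)):
        values.add(sum(c for c, closed in zip(branch_caps, state) if closed))
    return sorted(values)

# Binary-weighted branches (pF, illustrative) give every step from 0
# (all switches open) to the sum (all switches closed).
caps = [1, 2, 4, 8]
print(attainable_capacitances(caps))   # [0, 1, 2, ..., 15]
```

With equal-valued branches instead (e.g. `[10, 10]`), several switch states collapse onto the same capacitance — the "overlap in solutions" that the arrangement discussed below seeks to avoid.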
There are different approaches for arranging and choosing the capacitors of the EVC such that the EVC can provide progressively increasing capacitance values. There is a need for an arrangement of capacitors for an EVC that provides the needed capacitance values while avoiding overlap in solutions, and while using a lower number of capacitors, switches, and associated hardware, and thus taking up less space. | {
"pile_set_name": "USPTO Backgrounds"
} |
Adsorption of polyvinylpyrrolidone on Ag surfaces: insight into a structure-directing agent.
We use density functional theory to resolve the role of polyvinylpyrrolidone (PVP) in the shape-selective synthesis of Ag nanostructures. At the segment level, PVP binds more strongly to Ag(100) than Ag(111) because of a surface-sensitive balance between direct binding and van der Waals attraction. At the chain level, correlated segment binding leads to a strong preference for PVP to bind to Ag(100). Our study underscores differences between small-molecule and polymeric structure-directing agents. | {
"pile_set_name": "PubMed Abstracts"
} |
Q:
NSOperationQueue addOperations waitUntilFinished
Hi I am building an app using Swift. I need to process notifications in a specific order. Therefore I am trying to use addOperations waitUntilFinished.
Here is what I did:
let oldify = NSOperation()
oldify.completionBlock = {
println("oldify")
}
let appendify = NSOperation()
appendify.completionBlock = {
println("appendify")
}
let nettoyify = NSOperation()
nettoyify.completionBlock = {
println("nettoyify")
}
NSOperationQueue.mainQueue().maxConcurrentOperationCount = 1
NSOperationQueue.mainQueue().addOperations([oldify, appendify, nettoyify], waitUntilFinished: true)
With this code none of the operations is being executed. When I try this instead:
NSOperationQueue.mainQueue().maxConcurrentOperationCount = 1
NSOperationQueue.mainQueue().addOperation(oldify)
NSOperationQueue.mainQueue().addOperation(appendify)
NSOperationQueue.mainQueue().addOperation(nettoyify)
The operations get executed but not in the right order.
Does anyone know what I'm doing wrong? I am getting confident in Swift but completely new to NSOperations
A:
A couple of issues:
You are examining behavior of the completion block handlers. As the completionBlock documentation says:
The exact execution context for your completion block is not guaranteed but is typically a secondary thread. Therefore, you should not use this block to do any work that requires a very specific execution context.
The queue will manage the operations themselves, but not their completion blocks (short of making sure that the operation finishes before its completionBlock is started). So, bottom line, do not make any assumptions about (a) when completion blocks are run, (b) the relation of one operation's completionBlock to other operations or their completionBlock blocks, etc., nor (c) which thread they are performed on.
Operations are generally executed in the order in which they were added to the queue. If you add an array of operations, though, the documentation makes no formal assurances that they are enqueued in the order they appear in that array. You might, therefore, want to add the operations one at a time.
Having said that, the documentation goes on to warn us:
An operation queue executes its queued operation objects based on their priority and readiness. If all of the queued operation objects have the same priority and are ready to execute when they are put in the queue—that is, their isReady method returns YES—they are executed in the order in which they were submitted to the queue. However, you should never rely on queue semantics to ensure a specific execution order of operation objects. Changes in the readiness of an operation can change the resulting execution order. If you need operations to execute in a specific order, use operation-level dependencies as defined by the NSOperation class.
To establish explicit dependencies, you might do something like:
let oldify = NSBlockOperation() {
NSLog("oldify")
}
oldify.completionBlock = {
NSLog("oldify completion")
}
let appendify = NSBlockOperation() {
NSLog("appendify")
}
appendify.completionBlock = {
NSLog("appendify completion")
}
appendify.addDependency(oldify)
let nettoyify = NSBlockOperation() {
NSLog("nettoyify")
}
nettoyify.completionBlock = {
NSLog("nettoyify completion")
}
nettoyify.addDependency(appendify)
let queue = NSOperationQueue()
queue.addOperations([oldify, appendify, nettoyify], waitUntilFinished: false)
BTW, as you'll see above, you should not add operations to the main queue in conjunction with the waitUntilFinished. Feel free to add them to a different queue, but don't dispatch from a serial queue, back to itself, with the waitUntilFinished option.
| {
"pile_set_name": "StackExchange"
} |
/* @group Base */
.chzn-container {
font-size: 13px;
position: relative;
display: inline-block;
zoom: 1;
*display: inline;
}
.chzn-container .chzn-drop {
background: #fff;
border: 1px solid #aaa;
border-top: 0;
position: absolute;
top: 29px;
left: 0;
-webkit-box-shadow: 0 4px 5px rgba(0,0,0,.15);
-moz-box-shadow : 0 4px 5px rgba(0,0,0,.15);
box-shadow : 0 4px 5px rgba(0,0,0,.15);
z-index: 1010;
}
/* @end */
/* @group Single Chosen */
.chzn-container-single .chzn-single {
background-color: #ffffff;
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#ffffff', endColorstr='#eeeeee', GradientType=0 );
background-image: -webkit-gradient(linear, 0 0, 0 100%, color-stop(20%, #ffffff), color-stop(50%, #f6f6f6), color-stop(52%, #eeeeee), color-stop(100%, #f4f4f4));
background-image: -webkit-linear-gradient(top, #ffffff 20%, #f6f6f6 50%, #eeeeee 52%, #f4f4f4 100%);
background-image: -moz-linear-gradient(top, #ffffff 20%, #f6f6f6 50%, #eeeeee 52%, #f4f4f4 100%);
background-image: -o-linear-gradient(top, #ffffff 20%, #f6f6f6 50%, #eeeeee 52%, #f4f4f4 100%);
background-image: linear-gradient(#ffffff 20%, #f6f6f6 50%, #eeeeee 52%, #f4f4f4 100%);
-webkit-border-radius: 5px;
-moz-border-radius : 5px;
border-radius : 5px;
-moz-background-clip : padding;
-webkit-background-clip: padding-box;
background-clip : padding-box;
border: 1px solid #aaaaaa;
-webkit-box-shadow: 0 0 3px #ffffff inset, 0 1px 1px rgba(0,0,0,0.1);
-moz-box-shadow : 0 0 3px #ffffff inset, 0 1px 1px rgba(0,0,0,0.1);
box-shadow : 0 0 3px #ffffff inset, 0 1px 1px rgba(0,0,0,0.1);
display: block;
overflow: hidden;
white-space: nowrap;
position: relative;
height: 23px;
line-height: 24px;
padding: 0 0 0 8px;
color: #444444;
text-decoration: none;
}
.chzn-container-single .chzn-default {
color: #999;
}
.chzn-container-single .chzn-single span {
margin-right: 26px;
display: block;
overflow: hidden;
white-space: nowrap;
-o-text-overflow: ellipsis;
-ms-text-overflow: ellipsis;
text-overflow: ellipsis;
}
.chzn-container-single .chzn-single abbr {
display: block;
position: absolute;
right: 26px;
top: 6px;
width: 12px;
height: 13px;
font-size: 1px;
background: url('chosen-sprite.png') right top no-repeat;
}
.chzn-container-single .chzn-single abbr:hover {
background-position: right -11px;
}
.chzn-container-single.chzn-disabled .chzn-single abbr:hover {
background-position: right top;
}
.chzn-container-single .chzn-single div {
position: absolute;
right: 0;
top: 0;
display: block;
height: 100%;
width: 18px;
}
.chzn-container-single .chzn-single div b {
background: url('chosen-sprite.png') no-repeat 0 0;
display: block;
width: 100%;
height: 100%;
}
.chzn-container-single .chzn-search {
padding: 3px 4px;
position: relative;
margin: 0;
white-space: nowrap;
z-index: 1010;
}
.chzn-container-single .chzn-search input {
background: #fff url('chosen-sprite.png') no-repeat 100% -22px;
background: url('chosen-sprite.png') no-repeat 100% -22px, -webkit-gradient(linear, 0 0, 0 100%, color-stop(1%, #eeeeee), color-stop(15%, #ffffff));
background: url('chosen-sprite.png') no-repeat 100% -22px, -webkit-linear-gradient(top, #eeeeee 1%, #ffffff 15%);
background: url('chosen-sprite.png') no-repeat 100% -22px, -moz-linear-gradient(top, #eeeeee 1%, #ffffff 15%);
background: url('chosen-sprite.png') no-repeat 100% -22px, -o-linear-gradient(top, #eeeeee 1%, #ffffff 15%);
background: url('chosen-sprite.png') no-repeat 100% -22px, linear-gradient(#eeeeee 1%, #ffffff 15%);
margin: 1px 0;
padding: 4px 20px 4px 5px;
outline: 0;
border: 1px solid #aaa;
font-family: sans-serif;
font-size: 1em;
}
.chzn-container-single .chzn-drop {
-webkit-border-radius: 0 0 4px 4px;
-moz-border-radius : 0 0 4px 4px;
border-radius : 0 0 4px 4px;
-moz-background-clip : padding;
-webkit-background-clip: padding-box;
background-clip : padding-box;
}
/* @end */
.chzn-container-single-nosearch .chzn-search input {
position: absolute;
left: -9000px;
}
/* @group Multi Chosen */
.chzn-container-multi .chzn-choices {
background-color: #fff;
background-image: -webkit-gradient(linear, 0 0, 0 100%, color-stop(1%, #eeeeee), color-stop(15%, #ffffff));
background-image: -webkit-linear-gradient(top, #eeeeee 1%, #ffffff 15%);
background-image: -moz-linear-gradient(top, #eeeeee 1%, #ffffff 15%);
background-image: -o-linear-gradient(top, #eeeeee 1%, #ffffff 15%);
background-image: linear-gradient(#eeeeee 1%, #ffffff 15%);
border: 1px solid #aaa;
margin: 0;
padding: 0;
cursor: text;
overflow: hidden;
height: auto !important;
height: 1%;
position: relative;
}
.chzn-container-multi .chzn-choices li {
float: left;
list-style: none;
}
.chzn-container-multi .chzn-choices .search-field {
white-space: nowrap;
margin: 0;
padding: 0;
}
.chzn-container-multi .chzn-choices .search-field input {
color: #666;
background: transparent !important;
border: 0 !important;
font-family: sans-serif;
font-size: 100%;
height: 15px;
padding: 5px;
margin: 1px 0;
outline: 0;
-webkit-box-shadow: none;
-moz-box-shadow : none;
box-shadow : none;
}
.chzn-container-multi .chzn-choices .search-field .default {
color: #999;
}
.chzn-container-multi .chzn-choices .search-choice {
-webkit-border-radius: 3px;
-moz-border-radius : 3px;
border-radius : 3px;
-moz-background-clip : padding;
-webkit-background-clip: padding-box;
background-clip : padding-box;
background-color: #e4e4e4;
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#f4f4f4', endColorstr='#eeeeee', GradientType=0 );
background-image: -webkit-gradient(linear, 0 0, 0 100%, color-stop(20%, #f4f4f4), color-stop(50%, #f0f0f0), color-stop(52%, #e8e8e8), color-stop(100%, #eeeeee));
background-image: -webkit-linear-gradient(top, #f4f4f4 20%, #f0f0f0 50%, #e8e8e8 52%, #eeeeee 100%);
background-image: -moz-linear-gradient(top, #f4f4f4 20%, #f0f0f0 50%, #e8e8e8 52%, #eeeeee 100%);
background-image: -o-linear-gradient(top, #f4f4f4 20%, #f0f0f0 50%, #e8e8e8 52%, #eeeeee 100%);
background-image: linear-gradient(#f4f4f4 20%, #f0f0f0 50%, #e8e8e8 52%, #eeeeee 100%);
-webkit-box-shadow: 0 0 2px #ffffff inset, 0 1px 0 rgba(0,0,0,0.05);
-moz-box-shadow : 0 0 2px #ffffff inset, 0 1px 0 rgba(0,0,0,0.05);
box-shadow : 0 0 2px #ffffff inset, 0 1px 0 rgba(0,0,0,0.05);
color: #333;
border: 1px solid #aaaaaa;
line-height: 13px;
padding: 3px 20px 3px 5px;
margin: 3px 0 3px 5px;
position: relative;
cursor: default;
}
.chzn-container-multi .chzn-choices .search-choice-focus {
background: #d4d4d4;
}
.chzn-container-multi .chzn-choices .search-choice .search-choice-close {
display: block;
position: absolute;
right: 3px;
top: 4px;
width: 12px;
height: 13px;
font-size: 1px;
background: url('chosen-sprite.png') right top no-repeat;
}
.chzn-container-multi .chzn-choices .search-choice .search-choice-close:hover {
background-position: right -11px;
}
.chzn-container-multi .chzn-choices .search-choice-focus .search-choice-close {
background-position: right -11px;
}
/* @end */
/* @group Results */
.chzn-container .chzn-results {
margin: 0 4px 4px 0;
max-height: 240px;
padding: 0 0 0 4px;
position: relative;
overflow-x: hidden;
overflow-y: auto;
-webkit-overflow-scrolling: touch;
}
.chzn-container-multi .chzn-results {
margin: -1px 0 0;
padding: 0;
}
.chzn-container .chzn-results li {
display: none;
line-height: 15px;
padding: 5px 6px;
margin: 0;
list-style: none;
}
.chzn-container .chzn-results .active-result {
cursor: pointer;
display: list-item;
}
.chzn-container .chzn-results .highlighted {
background-color: #3875d7;
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#3875d7', endColorstr='#2a62bc', GradientType=0 );
background-image: -webkit-gradient(linear, 0 0, 0 100%, color-stop(20%, #3875d7), color-stop(90%, #2a62bc));
background-image: -webkit-linear-gradient(top, #3875d7 20%, #2a62bc 90%);
background-image: -moz-linear-gradient(top, #3875d7 20%, #2a62bc 90%);
background-image: -o-linear-gradient(top, #3875d7 20%, #2a62bc 90%);
background-image: linear-gradient(#3875d7 20%, #2a62bc 90%);
color: #fff;
}
.chzn-container .chzn-results li em {
background: #feffde;
font-style: normal;
}
.chzn-container .chzn-results .highlighted em {
background: transparent;
}
.chzn-container .chzn-results .no-results {
background: #f4f4f4;
display: list-item;
}
.chzn-container .chzn-results .group-result {
cursor: default;
color: #999;
font-weight: bold;
}
.chzn-container .chzn-results .group-option {
padding-left: 15px;
}
.chzn-container-multi .chzn-drop .result-selected {
display: none;
}
.chzn-container .chzn-results-scroll {
background: white;
margin: 0 4px;
position: absolute;
text-align: center;
width: 321px; /* This should be dynamic with JS */
z-index: 1;
}
.chzn-container .chzn-results-scroll span {
display: inline-block;
height: 17px;
text-indent: -5000px;
width: 9px;
}
.chzn-container .chzn-results-scroll-down {
bottom: 0;
}
.chzn-container .chzn-results-scroll-down span {
background: url('chosen-sprite.png') no-repeat -4px -3px;
}
.chzn-container .chzn-results-scroll-up span {
background: url('chosen-sprite.png') no-repeat -22px -3px;
}
/* @end */
/* @group Active */
.chzn-container-active .chzn-single {
-webkit-box-shadow: 0 0 5px rgba(0,0,0,.3);
-moz-box-shadow : 0 0 5px rgba(0,0,0,.3);
box-shadow : 0 0 5px rgba(0,0,0,.3);
border: 1px solid #5897fb;
}
.chzn-container-active .chzn-single-with-drop {
border: 1px solid #aaa;
-webkit-box-shadow: 0 1px 0 #fff inset;
-moz-box-shadow : 0 1px 0 #fff inset;
box-shadow : 0 1px 0 #fff inset;
background-color: #eee;
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#eeeeee', endColorstr='#ffffff', GradientType=0 );
background-image: -webkit-gradient(linear, 0 0, 0 100%, color-stop(20%, #eeeeee), color-stop(80%, #ffffff));
background-image: -webkit-linear-gradient(top, #eeeeee 20%, #ffffff 80%);
background-image: -moz-linear-gradient(top, #eeeeee 20%, #ffffff 80%);
background-image: -o-linear-gradient(top, #eeeeee 20%, #ffffff 80%);
background-image: linear-gradient(#eeeeee 20%, #ffffff 80%);
-webkit-border-bottom-left-radius : 0;
-webkit-border-bottom-right-radius: 0;
-moz-border-radius-bottomleft : 0;
-moz-border-radius-bottomright: 0;
border-bottom-left-radius : 0;
border-bottom-right-radius: 0;
}
.chzn-container-active .chzn-single-with-drop div {
background: transparent;
border-left: none;
}
.chzn-container-active .chzn-single-with-drop div b {
background-position: -18px 1px;
}
.chzn-container-active .chzn-choices {
-webkit-box-shadow: 0 0 5px rgba(0,0,0,.3);
-moz-box-shadow : 0 0 5px rgba(0,0,0,.3);
box-shadow : 0 0 5px rgba(0,0,0,.3);
border: 1px solid #5897fb;
}
.chzn-container-active .chzn-choices .search-field input {
color: #111 !important;
}
/* @end */
/* @group Disabled Support */
.chzn-disabled {
cursor: default;
opacity:0.5 !important;
}
.chzn-disabled .chzn-single {
cursor: default;
}
.chzn-disabled .chzn-choices .search-choice .search-choice-close {
cursor: default;
}
/* @group Right to Left */
.chzn-rtl { text-align: right; }
.chzn-rtl .chzn-single { padding: 0 8px 0 0; overflow: visible; }
.chzn-rtl .chzn-single span { margin-left: 26px; margin-right: 0; direction: rtl; }
.chzn-rtl .chzn-single div { left: 3px; right: auto; }
.chzn-rtl .chzn-single abbr {
left: 26px;
right: auto;
}
.chzn-rtl .chzn-choices .search-field input { direction: rtl; }
.chzn-rtl .chzn-choices li { float: right; }
.chzn-rtl .chzn-choices .search-choice { padding: 3px 5px 3px 19px; margin: 3px 5px 3px 0; }
.chzn-rtl .chzn-choices .search-choice .search-choice-close { left: 4px; right: auto; background-position: right top;}
.chzn-rtl.chzn-container-single .chzn-results { margin: 0 0 4px 4px; padding: 0 4px 0 0; }
.chzn-rtl .chzn-results .group-option { padding-left: 0; padding-right: 15px; }
.chzn-rtl.chzn-container-active .chzn-single-with-drop div { border-right: none; }
.chzn-rtl .chzn-search input {
background: #fff url('chosen-sprite.png') no-repeat -38px -22px;
background: url('chosen-sprite.png') no-repeat -38px -22px, -webkit-gradient(linear, 0 0, 0 100%, color-stop(1%, #eeeeee), color-stop(15%, #ffffff));
background: url('chosen-sprite.png') no-repeat -38px -22px, -webkit-linear-gradient(top, #eeeeee 1%, #ffffff 15%);
background: url('chosen-sprite.png') no-repeat -38px -22px, -moz-linear-gradient(top, #eeeeee 1%, #ffffff 15%);
background: url('chosen-sprite.png') no-repeat -38px -22px, -o-linear-gradient(top, #eeeeee 1%, #ffffff 15%);
background: url('chosen-sprite.png') no-repeat -38px -22px, linear-gradient(#eeeeee 1%, #ffffff 15%);
padding: 4px 5px 4px 20px;
direction: rtl;
}
/* @end */
| {
"pile_set_name": "Github"
} |
Hey! I was just informed that there are still 77 of you that are subscribed to the RSS Feed of this blog. But we've moved and have a NEW RSS FEED! If you'd like to continue reading this blog, please change your subscription to the one at our new location. You can subscribe by clicking right here. Thanks!
Yes I know this blog looks lopsided and deformed and the comments don't even work. That's because I don't know blogger and I don't have time to get to know blogger. And Typepad has spoiled me. I want my categories! I want my automatic google-map thingy, I want to host pictures with my blog, I want!
Typepad, very trustful Typepad, is looking VERY tempting right now. I just might have to move over there. Yes, move this blog AGAIN! I mean, I know Typepad very well. I've been using it for my other blog for three years without any problems. It's so easy! I'm just trying to figure out how to start a second blog on the same account or if that's even possible. And since Typepad does everything for you, and I'm so busy these days, it looks like Typepad is the way to go. It won't be a big deal moving over there except that this blog's address will change which is definitely annoying. But I know that once I'm there I'll stay there and I'll feel cozy and at home.
More pet coolness. How awesome are these? Popoutz are little bird feeders made from recyclable plastic. They're packaged flat and "pop out" when you open them, ready to be filled with birdie food. They also come in different shapes and sizes. See them here.
Poketo now sells bags! Yep, that's right. Now you can get a bag to match your cool wallet. The bags look roomy and perfect for someone like me that likes to carry all of my possessions around. They're made of canvas and have bold graphics made by different artists. There are two sizes: small and large and come in black or natural. Some are totes and others are messengers. I'm really loving the one on the top right in the image above which features a graphic by Miki Amano. The top left is a tote with a graphic by Katharina Leuzinger. She also did the graphic on the bottom right tote (love her). Bottom Left: Messenger with graphic by PCP.
I love Poketo! They not only sell great products like these bags and their award-winning wallets (I don't know if they're really award-winning, but they should be.) and apparel but it's just a great source to find new artists. And the Poketo people (aka Ted Vadakan and Angie Myung) have great taste! I'm a fan. | {
"pile_set_name": "Pile-CC"
} |
Q:
Map 2 fields from the same entity with another entity
I have a Symfony2 forum application where, among others, I have 2 entities, namely "User" and "Conversation". A conversation is always between only 2 persons, and I need to know the user who started it and the user to whom it is addressed. These are called "userFrom" and "userTo". I want to map "userFrom" to the "id" column from "User" and "userTo" to the same column from "User".
I made this:
/**
* User
*
* @ORM\Table(name="app_users")
* @ORM\Entity(repositoryClass="Forum\CoreBundle\Repository\UserRepository")
*/
class User extends Timestampable implements UserInterface, \Serializable
{
/**
* @var ArrayCollection
*
* @ORM\OneToMany(targetEntity="Conversation", mappedBy="fromUser")
*/
private $conversationsFromUser;
/**
* @var ArrayCollection
*
* @ORM\OneToMany(targetEntity="Conversation", mappedBy="toUser")
*/
private $conversationsToUser;
}
and
/**
* Conversation
*
* @ORM\Table(name="conversation")
* @ORM\Entity(repositoryClass="Forum\CoreBundle\Repository\ConversationRepository")
*/
class Conversation extends Timestampable
{
/**
* @var User
*
* @ORM\ManyToOne(targetEntity="User", inversedBy="$conversationsFromUser")
* @ORM\JoinColumn(name="id_from_user", referencedColumnName="id", nullable=false)
* @Assert\NotBlank
*/
private $fromUser;
/**
* @var User
*
* @ORM\ManyToOne(targetEntity="User", inversedBy="$conversationsToUser")
* @ORM\JoinColumn(name="id_to_user", referencedColumnName="id", nullable=false)
* @Assert\NotBlank
*/
private $toUser;
}
but Symfony2 complains saying "the mappings are inconsistent with each other".
I want that when I request:
$user->getConversationsFromUser() to get a list of all the conversations started by that user
$user->getConversationsToUser() to get a list of all the conversations to that user
$conversation->getFromUser() to get the user who started that conversation
$conversation->getToUser() to get the user to whom that conversation is sent
Is it possible to map what I want with Doctrine2?
A:
The stray "$" is the problem — inversedBy must name the property, not a PHP variable, so drop the dollar sign in both annotations:
inversedBy="$conversationsFromUser" => inversedBy="conversationsFromUser"
inversedBy="$conversationsToUser" => inversedBy="conversationsToUser"
| {
"pile_set_name": "StackExchange"
} |
Chrisley Knows Best, a real-life family comedy that broke its ratings record with its second season premiere, follows the entertaining lives of Atlanta-based "Patriarch of Perfection" Todd Chrisley and his colorful clan, including his wife, five children and two grandchildren. | {
"pile_set_name": "Pile-CC"
} |
521 F.2d 957
31 A.L.R.Fed. 437, 172 U.S.App.D.C. 297, 1 Fed. R. Evid. Serv. 22
UNITED STATES of America v. Joseph E. SMITH, Appellant.
No. 74-1446.
United States Court of Appeals,District of Columbia Circuit.
Argued Jan. 14, 1975. Decided Oct. 23, 1975.
Roy F. Perkins, Jr., Washington, D. C. (appointed by this court), for appellant.
Michael A. Pace, Asst. U. S. Atty., with whom Earl J. Silbert, U. S. Atty., and John A. Terry, James F. McMullin, and John J. Mulrooney, Asst. U. S. Attys., were on the brief, for appellee.
Before WRIGHT and ROBINSON, Circuit Judges, and DAVIS,* Associate Judge.
Opinion for the court filed by Circuit Judge WRIGHT.
J. SKELLY WRIGHT, Circuit Judge:
1
Appellant was convicted in the District Court of robbery in violation of 22 D.C.Code § 2901 (1973) and sentenced to eight years imprisonment pursuant to the Youth Corrections Act, 18 U.S.C. § 5010(c) (1970). In this court he charges the District Court with reversible error in refusing to admit into evidence Police Department Form 251, the official report of the police officer who received the initial complaint of the robbery, and the transcript of that officer's subsequent radio broadcasts. He further claims the court compounded its error by failing to admit the P.D. Form 251 following a specific request from the jury. We agree the District Court was in error, and we remand the case to the District Court for further proceedings.
2
* Appellant was accused of robbing at gunpoint one James Williams, a taxi driver, in his cab shortly before 8:00 a. m. on March 18, 1971. Appellant was charged with armed robbery, 22 D.C.Code §§ 2901, 3202 (1973), robbery, 22 D.C.Code § 2901, and assault with a dangerous weapon, 22 D.C.Code § 502 (1973). At trial Williams testified that he picked appellant up in the vicinity of 58th and East Capitol Streets and was told to take him to 529 51st Street, N. E., a boarded-up and deserted apartment in a two-building complex. Upon arriving at that address, appellant allegedly displayed a pistol and demanded Williams' money. Williams turned over $28.00 in bills and coins, whereupon his assailant left the cab demanding that Williams drive on and not look back. Disobeying this instruction, Williams waited until his assailant was out of sight and then backed his cab up in time to see the robber enter an apartment in the building in the complex facing that containing 529. Because of the angle at which he was watching, Williams could not be certain exactly which apartment was entered, but he testified that it had to be one of two possibilities. One of the two was 527 51st Street, where appellant lived with his mother and sisters.
3
After circling the block Williams was able to locate and stop a police car driven by Officer John T. Carr. He reported the robbery to Officer Carr and described the robber. Officer Carr recorded this information on his Form 251 and then broadcast the report to the police dispatcher. Thereupon Carr and Williams returned to the apartment complex where they were joined by other officers who had monitored the radio dispatch. Because the officers misunderstood Williams' directions as to which building the robber had entered, they were concentrating their attention on the building containing Apartment 529 when appellant emerged from Apartment 527. Officer Roy J. Miller, who was just leaving Apartment 521, observed appellant's exit and noted that he matched Williams' description. Simultaneously Williams, who was waiting in a police car, noticed appellant and immediately identified him as his assailant, whereupon appellant was arrested. The police never searched appellant's apartment, or sought a warrant to do so, and the money and the gun were never recovered.
4
At trial the crucial evidence against appellant was Williams' identification. Williams was absolutely certain that appellant was his assailant, testifying not only that he identified him at the second sighting immediately after the robbery, but that he had seen appellant around the neighborhood over a four- or five-year period. Williams testified that he visited with a friend approximately once a week over this period in an apartment in the same complex as appellant's, and that he had frequently seen appellant standing on the street. He testified that he recognized appellant as soon as he picked him up and thus was particularly surprised when the robbery took place, asking his assailant, "You don't know me?" Transcript (Tr.) 19.
5
Since Williams' identification of appellant was so important to the Government's case indeed, it was virtually the entire case appellant's counsel strenuously tried to impeach Williams' credibility. He did so by attempting to develop inconsistencies between Williams' stated description of the crime and his assailant, and the report as recorded on Form 251 and as broadcast to the police dispatcher. Most of the discrepancies appeared in the Form 251. They were as follows: Williams stated (1) that the robbery occurred prior to 8:00 a. m., Tr. 89, while the form listed the time as 8:05 a. m.; (2) that he picked up his passenger at 58th and East Capitol Streets, Tr. 66-67, while the P.D. 251 gave the pickup location as 50th and East Capitol Streets; (3) that the robber never touched his wallet and change purse, Tr. 66-67, while the form stated that the robber had himself removed the money from these articles; (4) that he told Officer Carr the robber was wearing Hush Puppy shoes, Tr. 87, while the form made no mention of the robber's shoes. In addition, appellant developed inconsistencies between Williams' testimony and the radio broadcast. Williams claimed (1) that assailant had a "boy's haircut," Tr. 68, while the broadcast refers to a "bush"; (2) that the robber had a "light brown" complexion, Tr. 72, while the broadcast refers to "dark" complexion.1 Since the defense presented no witnesses, its case was largely dependent upon exploitation of these inconsistencies.2
6
When appellant sought to use the Form 251 to impeach Williams, the court refused to allow it into evidence, ruling that it was not his statement, but was hearsay and as such could not be used to impeach Williams. If it was to be admitted at all, it was to be through Officer Carr.3 Tr. 63-64. At the conclusion of Officer Carr's testimony, however, the court refused to allow admission of either P.D. 251 or the broadcast transcript, ruling that their use was still for purposes of impeachment and, as such, they were inadmissible hearsay.4 Tr. 115-116. The court's comments are set out in full in the margin.
7
Although the court refused to allow admission of the two documents, it did allow the defense counsel to show P.D. 251 to Williams to refresh his memory about what he told Officer Carr. After reading the form, Williams said, "No, that's not correct at all." Tr. 64. On cross-examination and redirect, he attempted to explain the inconsistencies.5 During that process, and during the subsequent examination of Officer Carr, the contents of the documents were fully aired to the jury. Tr. 64-68, 100, 102, 104-107, 112. At one point the description of the robber contained in P.D. Form 251 was read aloud verbatim.6 Tr. 100. The inconsistencies were hammered home to the jury again and again by appellant's trial counsel.7
8
In addition to presenting Williams and Officer Carr, the prosecution presented the arresting officer, Officer Miller, who identified appellant as the man he arrested after Williams' on-the-scene identification. Tr. 122. Thereafter the Government rested. After the defense motion for judgment of acquittal was denied, appellant rested without presenting any evidence. Tr. 143. The next day, after closing arguments, the Government dismissed the third count of the indictment, assault with a dangerous weapon, and the court charged the jury.8 Tr. 150-171. Several hours later the jury returned with a request for further instruction on the elements of armed robbery and for Defense Exhibit 5, the Form 251. The court denied the latter request, ruling that since the form was not in evidence the jury was not entitled to see it. The court told the jury it would have to rely on its recollection of the form as discussed in court. Tr. 172. The court denied appellant's renewed motion to introduce the documents into evidence. Tr. 179. Shortly thereafter the jury returned a verdict of not guilty on the count of armed robbery and guilty on the count of robbery. Tr. 180.
II
9
Appellant alleges that the District Court erred in excluding the Form 251 and the broadcast transcript from introduction into evidence. He claims they are admissible as business records and may be used to impeach the credibility of the complaining witness, Williams.9 We agree.
10
The business record exception to the hearsay rule, unlike most other exceptions, has been codified for some time,10 28 U.S.C. § 1732(a) (1970) and is contained in the new Federal Rules of Evidence (FRE) in a form similar to that in which it appeared in the United States Code.11 FRE, Rule 803(6). The exception is intended to allow introduction of reliable and accurate records without the necessity of calling every person who made or contributed to the record. A business record is admissible whether or not the maker is available to take the stand, 28 U.S.C. § 1732.12 While no case in this circuit has yet so held, at least five other circuits have found that a police record constitutes a business record within the meaning of the Act. See, e. g., Salsberg v. Modern Transfer Co., 2 Cir., 324 F.2d 737 (1963) (Marshall, J.); Bowman v. Kaufman, 2 Cir., 387 F.2d 582 (1967); United States v. Burruss, 4 Cir., 418 F.2d 677 (1969); United States v. Halperin, 5 Cir., 441 F.2d 612 (1971); United States v. Martin, 5 Cir., 434 F.2d 275 (1970); United States v. Wolosyn, 9 Cir., 411 F.2d 550 (1969); United States v. Graham, 6 Cir., 391 F.2d 439, cert. denied, 393 U.S. 941, 89 S.Ct. 307, 21 L.Ed.2d 278 (1968); Bridger v. Union Railway Co., 6 Cir., 355 F.2d 382 (1966). See also Smith v. Spina, 3 Cir., 477 F.2d 1140 (1973) (holding police record to be within common law business records exception). We adopt the approach of these circuits. While the record sought to be admitted must, of course, be shown to meet the standards of the Business Record Act,13 we see no reason to exclude a police record made in the regular course of business, it being the regular course of police work to make the record at issue. Thus Form 251 and the radio broadcast transcript14 were properly admissible as business records upon a showing of their trustworthiness.
11
Broadly read, the Business Records Act would appear to admit Any hearsay contained in a business record as substantive evidence.
12
All other circumstances of the making of such writing or record, including Lack of personal knowledge by the entrant or maker, may be shown to affect its weight, but such circumstances Shall not affect its admissibility.15
13
28 U.S.C. § 1732(a) (emphasis added). By overwhelming majority, the better view of this language is that while it exempts the maker of the record from the requirement of personal knowledge, it allows admission of the hearsay only if it was reported to the maker, directly or through others, by one who is himself acting in the regular course of business, and who has personal knowledge. Thus a police record, a Form 251 for instance, is admissible as substantive evidence to show the date a crime was reported, or the fact that it was reported at all, even if the recorder was not the officer to whom the report was made. On the other hand, the complaining witness' description of the crime, recorded by the police officer in his report, is not made in the regular course of the witness' business and does not deserve the presumption of regularity accorded a business record. Therefore, that part of the Form 251 containing the witness' description is not admissible as substantive evidence under the business records exception.16 See Johnson v. Lutz, 253 N.Y. 124, 170 N.E. 517 (1930) (leading case); United States v. Burruss, supra; United States v. Graham, supra; United States v. Shiver, 5 Cir., 414 F.2d 461 (1969); Standard Oil Co. of Calif. v. Moore, 9 Cir., 251 F.2d 188 (1957), Cert. denied, 356 U.S. 975, 78 S.Ct. 1139, 2 L.Ed.2d 1148 (1958); Gordon v. Robinson, 3 Cir., 210 F.2d 192 (1954); Gencarella v. Fyfe, 1 Cir., 171 F.2d 419 (1948). See also, C. McCormick, Evidence § 286 at 602 (1954). Contra 5 J. Wigmore, Evidence § 1530a n. 1 at 391-392 (3d ed. 1940).
14
But while such hearsay in a business record is not admissible under the business record exception, the hearsay is admissible if it falls within any other exception. See e. g., C. McCormick, supra, § 286 at 603 n. 12, & § 290 at 611, and cases cited therein. Thus, for instance, the hearsay recorded by a police officer in his Form 251 might be admissible if it was an admission, a spontaneous exclamation, a dying declaration, or a declaration against interest. Id. Annot., 69 A.L.R.2d 1148, 116.6 § 5. See also Note, Revised Business Entry Statutes: Theory and Practice, 48 Colum.L.Rev. 920, 926-929 (1948). In addition, we believe hearsay is admissible to impeach a testifying witness as a prior inconsistent statement. Howard v. United States, 108 U.S.App.D.C. 38, 278 F.2d 872 (1960); Missouri Pacific Railroad Corp. v. Austin, 5 Cir., 292 F.2d 415, 421 (1961); Cf. Lindberg v. Short Line, Inc., 1 Cir., 399 F.2d 482 (1968).
15
Such is the case here. Williams' statements to Officer Carr, as recorded in the Form 251 and as broadcast over the police radio, are inadmissible to prove the truth of Williams' assertions, since Williams was not acting in the course of his business. But once the documents are established as business records, it is presumed that Officer Carr accurately transcribed and reported Williams' story. Thus the statements would be admissible to impeach Williams' present testimony, so long as the proper foundation for impeachment is laid, as it was here. The fact that Officer Carr testified does not preclude admission of the documents. A business record is admissible even if its maker testifies, for it is the record that is the most reliable evidence of what the maker heard, and of any contradiction that might impeach Williams' credibility.17 The jury deserved to see the records.18 Cf. Williams v. United States, 131 U.S.App.D.C. 153, 156, 403 F.2d 176, 179 (1968).
16
We hasten to specify the limits of our decision. We do not hold that a police record is admissible in a criminal proceeding as a business record, either as substantive evidence or for impeachment purposes, whenever the record meets the test of trustworthiness.19 We hold only that such a record is so admissible When offered by a criminal defendant to support his defense. We do not believe that such records may properly be so employed by the prosecution. While confrontation clause values figure in our reasoning,20 the primary basis for the distinction is the "litigation records" doctrine of Palmer v. Hoffman, 318 U.S. 109, 63 S.Ct. 477, 87 L.Ed. 645 (1943). In Palmer the Supreme Court affirmed a ruling by the Second Circuit that an accident report prepared by a since-deceased railroad engineer and offered by the railroad in its defense in a grade-crossing collision case did not qualify as a business record since the report was "dripping with motivations to misrepresent." 2 Cir., 129 F.2d 976, 991 (1942). The doctrine has since been applied to deny the business records exception to any document prepared with an eye toward litigation when offered by the party responsible for making the record. See, e. g., Bracey v. Herringa, 7 Cir., 466 F.2d 702 (1972).
17
While the cases involving police records as business records have not been entirely consistent in their treatment of Palmer, we think the rule we have suggested above, that the records may not be used by the prosecution, emerges upon analysis. In many cases where police records are offered, the litigation is civil in nature and between private parties. Thus the record has not been prepared at the behest of either party, the Palmer problem does not arise, and the records are routinely admitted. See, e. g., Salsberg v. Modern Transfer Co., supra; Bridger v. Union Railway Co., supra; Smith v. Spina, supra. Where the police records are offered by the prosecution in criminal cases, there are two independent lines of cases. In one series of cases police records have been treated as admissible business records and the Palmer issue has not been raised. See United States v. Burruss, supra; United States v. Halperin, supra; United States v. Wolosyn, supra; United States v. Graham, supra; United States v. Martin, supra. But while the issue was not raised in any of these cases, neither was it foreclosed, since in all but one case the records were ultimately excluded anyway, for one reason or another.21 Only in United States v. Wolosyn, supra, was a police record offered by the prosecution admitted under the business record exception, and in that case the record, which was simply read to the jury and not sent to the jury room, was used only to prove the date on which an automobile had been reported stolen.
18
The second line of cases excludes under the Palmer doctrine the use of police records when offered by the prosecution, apparently without recognizing that police records may qualify as business records. In the leading case, United States v. Ware, 7 Cir., 247 F.2d 698 (1957), the Seventh Circuit held:
19
(E)ven if memoranda such as the ones in question are regularly prepared by law enforcement officers, they lack the necessary earmarks of reliability and trustworthiness. Their source and the nature and manner of their compilation unavoidably dictate that they are inadmissible under section 1732. They are also subject to the objection that such utility as they possess relates primarily to prosecution of suspected law breakers, and only incidentally to the systematic conduct of the police business. Cf. Palmer v. Hoffman, supra.
20
247 F.2d at 700. This rule has been accepted wherever raised. See United States v. Frattini, 2 Cir., 501 F.2d 1234 (1974); United States v. Brown, 5 Cir., 451 F.2d 1231 (1971); United States v. Adams, 2 Cir., 385 F.2d 548 (1967); Sanchez v. United States, 8 Cir., 293 F.2d 260 (1961).22 Significantly, however, the Ware Rule has been adopted in the Second and Fifth Circuits, which otherwise regularly admit police records as business records. Thus, although the two cited lines of cases do not explicitly recognize one another, we do not think the former line precludes application of the Palmer doctrine. Our analysis thus produces the following rule: "Police reports are ordinarily excluded when offered by the party at whose instance they were made," Bracey v. Herringa, supra, 466 F.2d at 705 n. 9, but may still be admitted as business records when, as here, they are offered against that party, the prosecution, Cf. Koninklijke Luchtvaart Maatschappij N.V. KLM v. Tuller, 110 U.S.App.D.C. 282, 291, 292 F.2d 775, 784 (1961) (Burger, J.); Korte v. New York, N.H. & H.R. Co., 2 Cir., 191 F.2d 86, Cert. denied, 342 U.S. 868, 72 S.Ct. 108, 96 L.Ed. 652 (1951) (physician's report prepared for defendant admitted when offered by plaintiff); Pekelis v. Transcontinental & W. Air, Inc., 2 Cir., 187 F.2d 122, Cert. denied, 341 U.S. 951, 71 S.Ct. 1020, 95 L.Ed. 1374 (accident report prepared by defendant admitted when offered by plaintiff); Yates v. Bair Transport, Inc., S.D.N.Y., 249 F.Supp. 681 (1965), or any other party.23 Thus despite the limitations Palmer imposes on the business records doctrine, we have no doubt that the police records offered by appellant were admissible against the prosecution as business records.24
21
It is worth observing that at least the P.D. Form 251 was admissible for a reason other than the business record theory raised by appellant. Officer Carr reread the Form 251 during his testimony for the prosecution. Tr. 99-100. It is well established that while a writing used to refresh a witness' memory is not ordinarily admissible, See, e. g., Young v. United States, 94 U.S.App.D.C. 62, 214 F.2d 232 (1954), it is properly admitted when offered by the opposing party or when the jury on its own motion requests to see it. See, e. g., Borel v. Fibreboard Paper Products Corp., 5 Cir., 493 F.2d 1076, 1102-1103, Cert. denied, 419 U.S. 869, 95 S.Ct. 127, 42 L.Ed.2d 107 (1974); 3 J. Wigmore, Evidence § 763 (Chadbourn rev. 1970). Cf. Federal Rules of Evidence, Rule 612.25 Both validating contexts are present here: appellant offered the Form 251 and the jury requested it. On this theory alone, the Form 251, although not the radio transcript, should have been admitted at the conclusion of Officer Carr's testimony.26
22
No matter how the new FRE are interpreted, however, it is clear that under applicable law the District Court erred in refusing to admit Defense Exhibits 5 and 6 as business records. The Government argues that exclusion was a proper exercise of the court's discretion. We disagree. Where identification is the determinative issue, and where the identification hangs upon the credibility of a single witness, impeaching evidence of the sort tendered is too important to be excluded. Cf. United States v. Bundy, 153 U.S.App.D.C. 191, 192, 472 F.2d 1266, 1267 (1972). Moreover, while courts do have some discretion both in admitting business records27 and in admitting evidence "designed for impeachment of general credibility," Collazo v. United States, 90 U.S.App.D.C. 241, 252-253, 196 F.2d 573, 584-585, Cert. denied, 343 U.S. 968, 72 S.Ct. 1065, 96 L.Ed. 1364 (1952),28 in this case the District Court exercised no discretion at all. United States v. Broadus, 146 U.S.App.D.C. 265, 268, 450 F.2d 1312, 1313 (1971). It erroneously thought the documents were inadmissible Per se.
23
While we believe the District Court should have admitted the documents into evidence, our review of the record convinces us that their exclusion alone would not cause appellant sufficient prejudice to warrant reversal. Williams' identification was an extremely strong one, based not only on the robbery itself, but on observing appellant with some regularity over a four- or five-year period. Thus any inconsistency between the documents and Williams' testimony was far less important than the persuasiveness with which Williams described his past sightings of appellant and the certainty with which he identified him in court. More importantly, the contents of the documents were fully aired and argued to the jury. Williams and Carr were both cross-examined closely about the inconsistencies contained in the documents. While admission of the documents was the proper action, ordinarily this full airing would make any resulting prejudice to appellant harmless.
24
Although we think the court's improper exclusion of the Form 251 and the radio transcript ordinarily would not have been unduly prejudicial to appellant, the jury's subsequent request for Officer Carr's report changes the complexion of the issue in this case. The court responded to the request as follows:
25
Officer Carr's report is not in evidence so you are not entitled to see it although, of course, it was used extensively and you can remember the testimony that came from it. You will have to rely on your recollection to that extent.
26
Tr. 172. The trial court is accorded considerable discretion in responding to jury requests. Salzman v. United States, 131 U.S.App.D.C. 393, 396, 405 F.2d 358, 361 (1968); United States v. Toney, 6 Cir., 440 F.2d 590, 592 (1971); United States v. Jackson, 3 Cir., 257 F.2d 41, 43 (1958). Moreover, the judge must exercise restraint in responding to jury requests so as to avoid giving undue emphasis to the requested evidence or testimony. United States v. Rabb, 3 Cir., 453 F.2d 1012 (1971). In this case, however, the court, once again, did not exercise its discretion at all. Rather, it refused the request simply because the report was not in evidence, a situation that resulted directly from the court's erroneous conclusion that the report was inadmissible. In our view that erroneous conclusion was harmless, and, without more, we would affirm. But it is the jury, and not this court, that is the trier of fact, and the jury thought the Form 251 was of sufficient importance to ask to see it. We cannot say that visual examination of Form 251 by the jury would not have affected its verdict. See Kotteakos v. United States, 328 U.S. 750, 764-765, 66 S.Ct. 1239, 90 L.Ed. 1557 (1946). We note that the form was directly relevant to Williams' credibility. The jury was obviously concerned about that credibility because Williams was the only witness to the crime. It acquitted appellant on the armed robbery count, evidence of which depended solely upon Williams' testimony.
27
On the other hand, we cannot say it would have been an abuse of discretion had the trial court exercised discretion to deny the jury's request. See Salzman v. United States, supra; United States v. Toney, supra; United States v. Jackson, supra. We do think, however, that the jury's request was of sufficient importance to the jury that the trial court should at least have the opportunity to rule upon it knowing the documents are admissible.29 Accordingly, we follow the procedure of United States v. Hairston, 161 U.S.App.D.C. 466, 495 F.2d 1046 (1974), and United States v. Henson, 159 U.S.App.D.C. 32, 486 F.2d 1292 (1973) (En banc ), and remand the case to the District Court to consider the jury request anew. If the court concludes that it would have exercised its discretion to let the jury see the documents and that its failure to do so was not harmless, it must order a new trial for appellant. Otherwise, the conviction may stand.
28
So ordered.
*
Of the United States Court of Claims, sitting by designation pursuant to 28 U.S.C. § 293(a)
1
While counsel did not expressly refer to the radio broadcast in cross-examining Williams about these two inconsistencies, it is clear that the transcript was the source of this information. In cross-examining Officer Carr about the same two points, the defense did expressly refer to the broadcast transcript. Tr. 104-105
On these two points the Form 251 was supportive of Williams' testimony. On the form the robber's hair was simply described as "short," and his complexion as "brn."
2
In addition to these inconsistencies between Williams' testimony and the two documents, the defense also pointed to Williams' statement that the robber was 5'8", an assertion supported by the documents. Defense counsel described his client as being 5'2" in height, and appellant was allowed to stand so the jury could judge his height for themselves. Tr. 130-131
3
BY MR. REED (counsel for appellant, cross-examining the witness Williams):
Q. Now, I ask you do you see what has been marked as Defendant's Exhibit No. 5 (P.D. Form 251)?
(Williams). Yes.
Q. Do you see the name
MR. MULROONEY (prosecutor): Your Honor, I am going to object to this. The officer who prepared that report is here. I think it is entirely improper for questions from this writing to be directed to this witness.
THE COURT: What is the purpose of it?
MR. REED: Well, Your Honor
THE COURT: You can't impeach him with it. It is not his statement.
MR. REED: Well, if he made the statement, he is going to adopt the statement as the statement he made substantially. I can impeach him with it, Your Honor.
THE COURT: Well, if he is going to be willing to sign it and adopt it, that is something else. He has already said he didn't see him writing it down.
MR. REED: I believe he can read this, Your Honor, and determine whether or not this is substantially what he told the officer at that time.
THE COURT: Substantially is not good enough in a case like this. If you want to use the statement, put it in through the officer. Don't try to impeach him with something another person wrote down.
MR. REED: Well, I am going to have to recall this witness, then, Your Honor, because it's his statement; it's not the officer's.
THE COURT: Is it signed?
MR. REED: It is signed by two police officers.
THE COURT: All right. But he didn't sign it, did he?
MR. REED: No, but he made it.
THE COURT: See if you can refresh his recollection, but don't try to impeach him with it.
THE COURT: Let him refresh his recollection and repeat what he did tell the officer, but do not try to impeach him from somebody else's statement.
Tr. 63-65.
4
MR. REED: I would move both the 251 and the transcript into evidence at this time, Your Honor
MR. MULROONEY: No objection, Your Honor.
THE COURT: Would you come to the bench.
(At the Bench:)
THE COURT: What is the purpose of this?
MR. REED: I just want to use them to argue as to the time of the robbery. I just want to argue with them, Your Honor.
THE COURT: They aren't material. That hasn't been substantiated.
MR. REED: The complaining witness says a different time
THE COURT: If you are offering it to show his testimony, all you are doing is offering it for impeachment.
MR. REED: I am offering it to establish the time of the robbery.
THE COURT: You can argue that to the jury.
MR. REED: This is his statement. He has signed it.
I would like to use that, Your Honor.
THE COURT: You can argue it to the jury. But, I am not going to let this hearsay go in against the complaining witness.
Tr. 115-116.
5
On all six points Williams adhered to his version, and insisted that Officer Carr must have been in error. On the two inconsistencies derived from Officer Carr's radio broadcast, the Form 251 supported Williams' testimony. See note 1 Supra. The broadcast supported his assertion that the robbery took place before 8:00 a. m. rather than at 8:05 since the broadcast itself occurred at 8:05, as is noted therein. The 58th and East Capitol Street pickup point was supported by the clerk's summary of Williams' grand jury testimony, Defendant's Exhibit 4, see United States v. Broadus, 146 U.S.App.D.C. 265, 267, 450 F.2d 1312, 1314 (1971), and his testimony at appellant's first trial on this charge. Tr. 117. There was no supportive evidence for Williams' assertion that the robber had never touched his wallet and change purse and that he had told Officer Carr the robber was wearing Hush Puppies. Officer Carr could not recall the last point. Tr. 109-110
6
The full description on the form was:
N/M 17-18 yrs. Brn. Compl. 5'8" 110 lbs. short hair brn. suede coat, khaki trousers.
Williams testified that "as far as I can remember" he gave Officer Carr the following description:
Negro male, about 18 or 19, and he had on brown pants and a suede jacket and Hush-Puppy shoes, and with a boy's haircut.
Tr. 22. Appellant, 18 years old at the time of his arrest, was arrested wearing khaki trousers and Hush Puppy shoes. Both of these items were introduced by the Government, over defense objections, into evidence and identified by Williams as similar to those worn by his assailant. Tr. 26-27. Appellant was not wearing a suede jacket, but rather, as he emerged from his house, was putting on a trench coat. Tr. 23. The record does not reflect appellant's hair style or complexion, but the jury was able to judge both for itself.
7
This process was so extensive as to be, in the judgment of the trial court, unduly repetitious. Tr. 91
8
The Court instructed the jury on impeachment by prior inconsistent statements as follows:
You will undoubtedly recall that in the course of this trial, on several occasions witnesses were confronted with statements they allegedly made outside of court at an earlier date which did not seem to jibe with what they were saying on the stand. I instructed you then and I instruct you again now that it is for you to determine whether or not any of those prior statements were in fact inconsistent with what the witness was saying on the witness stand. I would again advise you that the testimony of a witness may be discredited or, as lawyers say, impeached by showing that he has previously made statements which are inconsistent with the testimony he has given from the stand.
The prior statement, however, is admitted into evidence solely for your consideration in evaluating the credibility of that witness. You may consider that prior statement only in connection with your evaluation of the credence to be given to the witness' present testimony from the stand. You must not consider the prior statement as establishing the truth of any fact contained in that prior statement.
Tr. 158-159.
9
Plainly the records could properly be used to cross-examine the complaining witness Williams so as to lay the foundation for the anticipated subsequent impeachment of Williams by the maker of the records, Officer Carr. C. McCormick, Evidence § 37 (1954); United States v. Hibler, 9 Cir., 463 F.2d 455, 461-472 (1972). Cf. United States v. Nobles, --- U.S. ---, 95 S.Ct. 2160, 45 L.Ed.2d 141, 43 U.S.L.Week 4815 (1975). While the trial court was not certain about the theory under which it was acting, it nonetheless allowed such cross-examination here, Tr. 65-69, 72-73, so the only issue before us is the admissibility of the documents themselves
10
In pertinent part, the Business Records Act provides as follows:
§ 1732. Record made in regular course of business; photographic copies
(a) In any court of the United States and in any court established by Act of Congress, any writing or record, whether in the form of an entry in a book or otherwise, made as a memorandum or record of any act, transaction, occurrence, or event, shall be admissible as evidence of such act, transaction, occurrence, or event, if made in regular course of any business, and if it was the regular course of such business to make such memorandum or record at the time of such act, transaction, occurrence, or event or within a reasonable time thereafter.
All other circumstances of the making of such writing or record, including lack of personal knowledge by the entrant or maker, may be shown to affect its weight, but such circumstances shall not affect its admissibility.
The term "business," as used in this section, includes business, profession, occupation, and calling of every kind.
28 U.S.C. § 1732(a) (1970), repealed Pub.L. No. 93-595, § 2(b), 88 Stat. 1926, 1949 (1975).
11
The new version reads as follows:
Rule 803. Hearsay Exceptions; Availability of Declarant Immaterial
(6) Records of regularly conducted activity. A memorandum, report, record, or data compilation, in any form, of acts, events, conditions, opinions, or diagnoses, made at or near the time by, or from information transmitted by, a person with knowledge, if kept in the course of a regularly conducted business activity, and if it was the regular practice of that business activity to make the memorandum, report, record, or data compilation, all as shown by the testimony of the custodian or other qualified witness, unless the source of information or the method or circumstances of preparation indicate lack of trustworthiness. The term "business" as used in this paragraph includes business, institution, association, profession, occupation, and calling of every kind, whether or not conducted for profit.
See note 24 infra.
12
Except as noted at TAN & note 15 Infra, the Business Records Act as codified in 28 U.S.C. § 1732(a) (1970) has not been altered in any way relevant to this opinion by the repeal of that section and the adoption of FRE Rule 803(6). See notes 10-11 Supra, 24 Infra. Since § 1732 governed Smith's trial, the text of this opinion discusses that section only. The similarity of the provisions and our conclusion that the police report is admissible under the FRE, See Note 24 Infra, as well as under § 1732, renders unnecessary a decision as to which version would govern a new trial if one is granted
13
In this opinion, we address both the District Court's refusal to admit Form 251 into evidence in the effort to impeach Williams at the time he testified, and the court's subsequent refusal to admit the form after Officer Carr completed his testimony concerning it. When introduction was first attempted, the form was admissible as impeachment material only if it could qualify under the Business Records Act as a record of Williams' description of his assailant, and only after a proper foundation was laid for its use as a prior inconsistent statement by Williams
When introduction was again sought, the form could of course have been given the same treatment. However, it is conceivable that the form might have been admitted for impeachment purposes under an alternative theory: as the written component of the evidence supplied by Officer Carr that Williams previously spoke inconsistently in giving him the description which the form incorporated. To determine the propriety of the District Court's exclusion of the form when first offered during Williams' testimony, we must determine whether the form was within the category of documents covered by the Act, and consequently admissible at that stage of the trial upon the showing which the Act requires. Since we hold in the affirmative, we have no occasion to consider the alternative theory of admissibility.
14
The transcript of a radio broadcast can be as much a business record as a written document. See LeRoy v. Sabena Belgian World Airlines, 2 Cir., 344 F.2d 266, Cert. denied, 382 U.S. 878, 86 S.Ct. 161, 15 L.Ed.2d 119 (1965)
15
The Federal Rules of Evidence do away with this troublesome language. See Note 11 Supra
16
The danger of admitting such reports has been well put:
These acts were intended to make admissible records which, because made pursuant to a regular business duty, are presumed to be reliable. The mere fact that recordation of third party statements is routine, taken apart from the source of the information recorded, imports no guaranty of the truth of the statements themselves. There is no reason for supposing an intention to make admissible hearsay of this sort. So to construe these statutes would make of them almost limitless dragnets for the introduction of random, irresponsible testimony beyond the reach of the usual tests for accuracy.
Note, Revised Business Entry Statutes: Theory and Practice, 48 Colum.L.Rev. 920, 927 (1948).
17
Indeed, the business record exception is desirable not only because the maker, acting in the regular course of his business, is presumed, and has every reason to be, accurate, but also because one who records many similar documents is likely to forget at a later date the circumstances surrounding the particular one at issue
18
An alternative theory of admissibility might be grounded in the Government Records Act, 28 U.S.C. § 1733 (1970), or in the common law version of that Act that has developed in the District of Columbia. See United States v. Broadus, supra note 5; Howard v. United States, 108 U.S.App.D.C. 38, 278 F.2d 872 (1960). Because this theory is unnecessary to our resolution of this case, we do not reach this issue
19
Of course, if the Government chooses to contest the admissibility of a report so offered it may do so on the ground that the report is not sufficiently trustworthy to qualify as a business record. To do this the Government may call and examine the officer who made the report. Thus if there were serious questions about the reliability of any given police record, the jury would not be deprived of the benefit of cross-examination of a live witness as to the accuracy of the report. The business record exception is only an evidentiary shortcut; where there is a real dispute the jury's ability to make an informed determination may always be protected
20
Many traditional hearsay exceptions also operate as exceptions from the demands of the confrontation clause of the Sixth Amendment. Mattox v. United States, 146 U.S. 140, 151, 13 S.Ct. 50, 36 L.Ed. 917 (1892) (dying declarations); Mattox v. United States, 156 U.S. 237, 240-244, 15 S.Ct. 337, 39 L.Ed. 409 (1895) (testimony of deceased witness who had testified at former trial); 5 J. Wigmore, Evidence § 1397 (3d ed. 1940). Indeed, we have typically applied the business records exception in criminal cases without insisting that the confrontation clause demands the testimony of the maker of the record. See, e. g., Gass v. United States, 135 U.S.App.D.C. 11, 416 F.2d 767 (1969) (hospital records); Wheeler v. United States, 93 U.S.App.D.C. 159, 211 F.2d 19 (1953), Cert. denied, 347 U.S. 1019, 74 S.Ct. 876, 98 L.Ed. 1140 (1954) (same)
However, the admission of police records, which may at times contain written summaries of the prosecution's entire case against a defendant, See note 22 Infra, poses more difficult Sixth Amendment problems. It is clear that the mere existence of a hearsay exception does not cause the confrontation clause to recede. California v. Green, 399 U.S. 149, 156, 90 S.Ct. 1930, 26 L.Ed.2d 489 (1970). Thus confrontation values have been found violated even when evidence was admitted under arguably recognized hearsay exceptions. Barber v. Page, 390 U.S. 719, 88 S.Ct. 1318, 20 L.Ed.2d 255 (1968); Pointer v. Texas, 380 U.S. 400, 85 S.Ct. 1065, 13 L.Ed.2d 923 (1965). The circuits have split on whether the business record exception always operates consistently with the demands of the Sixth Amendment. See United States v. Haili, 9 Cir., 443 F.2d 1295 (1971) (yes); United States v. Leal, 9 Cir., 509 F.2d 122 (1975) (yes); McDaniel v. United States, 5 Cir., 343 F.2d 785 (1965) (no). We think the Fifth Circuit view is the better. In McDaniel the court held:
We do not believe that all documents covered by the (Business Records) Statute in all cases are admissible in a criminal trial, but the trial judge has the duty to determine in each instance whether such documents are constitutionally admissible under the Sixth Amendment guarantee of confrontation.
343 F.2d at 789. Such a balancing test is clearly appropriate. While it might be proper to admit a police record as a business record to prove the date an automobile theft has been reported to the police, United States v. Wolosyn, 9 Cir., 411 F.2d 550 (1969), See note 23 Infra, it would strike at the core of the Sixth Amendment to admit a police report summarizing the prosecution's entire case against a defendant in the absence of the maker of the record. Such a procedure would do no less than allow the prosecution to proceed without allowing the defendant the opportunity to cross-examine what may be his sole accuser. Cf. Barber v. Page, supra ; Pointer v. Texas, supra. While it may be a closer question whether such evidence is admissible if the maker testifies and is subjected to cross-examination, we do not attempt an answer since the record would be inadmissible in any case under the Palmer doctrine discussed in text.
21
The records were excluded because they were improperly offered to prove the truth of hearsay contained therein, United States v. Burruss, 4 Cir., 418 F.2d 677 (1969); United States v. Halperin, 5 Cir., 441 F.2d 612 (1971); United States v. Graham, 6 Cir., 391 F.2d 439, Cert. denied, 393 U.S. 941, 89 S.Ct. 307, 21 L.Ed.2d 278 (1968), or because they were offered without authenticating testimony. United States v. Martin, 5 Cir., 434 F.2d 275 (1970)
22
Some of these cases seem to take the view that quite apart from the business records exception
it is error and ordinarily reversible error to receive an exhibit containing "A neat condensation of the government's whole case against the defendant."
United States v. Parker, 8 Cir., 491 F.2d 529 (1974) (Bright and Ross, JJ., separate statement of views on petition for rehearing) (emphasis in original), Quoting Sanchez v. United States, 8 Cir., 293 F.2d 260, 269 (1961), Quoting United States v. Ware, 7 Cir., 247 F.2d 698, 700 (1957). See also United States v. Brown, 5 Cir., 451 F.2d 1231 (1971). These cases all involve the admissibility of narcotics agents' summaries of the case against the defendant attached to the lock seal envelope in which confiscated narcotics were stored for trial. As the Ware court found, with such evidence before the jury, "The government's witnesses in effect accompanied the jury into the jury room." 247 F.2d at 700. If this is an independent ground of decision for these cases, it would add to the Palmer and confrontation clause rationales for excluding police records offered by the prosecution.
23
It may be that under this rule the police record was properly admitted in United States v. Wolosyn, 9 Cir., 411 F.2d 550 (1969). See page 18 Supra. The record in Wolosyn, far from being a summary of the Government's case against the defendant, See note 22 Supra, was only proof of the date an automobile had been reported stolen. Such a record may not be suspect under Palmer at all, since it arguably relates primarily to the systematic conduct of police business (the recording and investigation of crime) and only secondarily to prosecution of suspected law breakers. Cf. United States v. Ware, supra note 22, 247 F.2d at 700. Likewise, it would seem that police personnel records might be admissible under the espoused rule
24
The same result would be reached under the Federal Rules of Evidence. Rule 803(6) contains the business records exception in much the same form as it is found in the Business Records Act. See notes 10-11 Supra. Congress, which considered the FRE at great length, can be presumed to have been aware of the interpretation of the business records exception current in the courts when it approved Rule 803(6). See 2A Sands, Sutherland Statutory Construction § 49.09 (1973); Cf. Georgia v. United States, 411 U.S. 526, 533, 93 S.Ct. 1702, 36 L.Ed.2d 472 (1973). See also Advisory Committee's Note to Rule 803, 56 F.R.D. 303, 309 (1972) (discussing admissibility of police records during explanation of Rule 803(6) ). Of course, Congress must also be deemed to have continued the restriction the doctrine of Palmer v. Hoffman places on the use of police reports by the prosecution. The clear congressional intent to preclude the Government from using the reports of law enforcement personnel in a criminal trial, See infra, supports this conclusion
Rule 803(8) of the FRE creates a specific hearsay exception for "public records and reports." While this provision appears to overlap rather than to diminish 803(6), See Colvin v. United States, 479 F.2d 998, 1002 (9th Cir. 1973); 4 J. Weinstein & M. Berger, Weinstein's Evidence P 803(8)(03) at 803-185 (1975) ("Public records could, of course, as in the past be admitted under the regular entries exception."), it is also useful to consider the admissibility of police records under this provision. Rule 803(8) provides:
The following are not excluded by the hearsay rule, even though the declarant is available as a witness:
(8) Public records and reports. Records, reports, statements, or data compilations, in any form, of public offices or agencies, setting forth * * * (B) matters observed pursuant to duty imposed by law as to which matters there was a duty to report, excluding, however, in criminal cases matters observed by police officers and other law enforcement personnel, or (C) in civil actions and proceedings and against the Government in criminal cases, factual findings resulting from an investigation made pursuant to authority granted by law, unless the sources of information or other circumstances indicate lack of trustworthiness.
It is not clear whether a police report like that prepared by Officer Carr falls under 803(8)(B) or 803(8)(C). 803(8)(C) seems to be the more appropriate provision since the Form 251 contains information " . . . resulting from an investigation made pursuant to authority granted by law." The rule's reference to "factual findings" may authorize the admission of the investigator's conclusions based on the information he derived from firsthand observers, See J. Weinstein and M. Berger, supra, at 803-184 to 185, but it should not be read to require that the information obtained during an investigation be interpreted by the investigator before his report can be admitted. On the other hand, 803(8)(B) may also be considered applicable since the Form 251 contained Carr's report of what he heard and saw Williams tell him.
Were we required to choose which section of the rule to apply, we would settle on 803(8)(C). That portion of the rule deals explicitly with reports based on investigations, whereas 803(8)(B)'s most natural reading is that it is concerned with officials' reports of their own firsthand observations of events. Cf. 120 Cong.Rec. H564 (daily ed. Feb. 6, 1974) (remarks of Reps. Brasco and Dennis) (originally proposed 803(8)(B) would admit as evidence against defendant police officer's report " . . . that he saw Mr. X with a gun on such and such an occasion . . ."). 803(8)(C) explicitly adopts the result we have reached: the report is admissible, but only "against the Government in criminal cases."
On its face, 803(8)(B) appears to require a different conclusion. We are convinced, however, that 803(8)(B) should be read, in accordance with the obvious intent of Congress and in harmony with 803(8)(C), to authorize the admission of the reports of police officers and other law enforcement personnel at the request of the defendant in a criminal case.
As proposed by the Advisory Committee and submitted to Congress by the Supreme Court, 803(8)(B) would have exempted from the hearsay rule " . . . reports . . . in any form, of public offices or agencies, setting forth . . . (B) matters observed pursuant to duty imposed by law." H.R.Doc.No.93-46, 93d Cong., 1st Sess. 29 (1973). Concerned that this language would allow the prosecution to use a report " . . . to prove its case in chief with the possibility of no other evidence being presented," 120 Cong.Rec. H564 (daily ed. Feb. 6, 1974) (remarks of Rep. Brasco), the House added the present language. During the debate Representative Dennis, who supported amending the proposed rule, indicated that the amended version would still allow a defendant to " . . . use the report to contradict (the reporting officer) and cross-examine him." Id. See also id. (remarks of Reps. Hunt and Brasco). The House's changes survived conference with the Senate, and when he explained the Conference Committee's report to the House, Representative Hungate stated that "(a)s the rules of evidence now stand, police and law enforcement reports are not admissible Against defendants in criminal cases. This is made quite clear by the provisions of rule 803(8)(B) and (C)." 120 Cong.Rec. H12254 (daily ed. Dec. 18, 1974) (emphasis added).
Thus, the apparently absolute language of 803(8)(B) had its origin in congressional concern that use of reports against defendants would be unfair. Moreover, as Representative Hungate's statement indicates, the prohibitory language of 803(8)(B), added on the floor of the House, should be read in conjunction with the more carefully drafted parallel provision of 803(8)(C). See also J. Weinstein & M. Berger, supra, at 803-186 (referring to the parts of 803(8)(B) and (C) considered here as a single restriction against the government). Since there is no apparent reason to allow defendants to use the reports admitted by 803(8)(C) but not those governed by 803(8)(B), we conclude that a police report, like that of Officer Carr, is an exception to the new hearsay rules when introduced at the request of the defense. Thus the FRE reinforce our view that the police records offered by appellant are admissible in this case.
25
Under the FRE, unlike the common law, writings used to refresh memory prior to testifying as well as those used on the stand will be admissible, in the court's discretion, on the motion of the opposing party. FRE Rule 612. Thus, as is typical, whenever an officer reviews his P.D. Form 251 prior to testifying, whether or not he refers to it on the stand, it will hereinafter be admissible
26
The Form 251 and the radio transcript may have been admissible as well to impeach Officer Carr
27
The court's discretion lies in judging whether the document offered has the inherent probability of trustworthiness. LeRoy v. Sabena Belgian World Airlines, supra note 14; McDaniel v. United States, 5 Cir., 343 F.2d 785, Cert. denied, 382 U.S. 826, 86 S.Ct. 59, 15 L.Ed.2d 71 (1965); Puggioni v. Luckenbach S.S. Co., 2 Cir., 286 F.2d 340 (1961); United States v. Newman, 5 Cir., 468 F.2d 791 (1972), Cert. denied, 411 U.S. 905, 93 S.Ct. 1527, 36 L.Ed.2d 194 (1973)
28
See also Howard v. United States, 128 U.S.App.D.C. 336, 340-341, 389 F.2d 287, 291-292 (1967); Christoffel v. United States, 91 U.S.App.D.C. 241, 248-249, 200 F.2d 734, 740-741 (1952)
29
We note that the trial court did not attempt to comply with the jury's request in any manner. It could, for instance, have had read to the jury those portions of the transcript in which Williams was examined about the inconsistencies between his testimony and the documents
Q:
Is it correct to state that the first number to collide in single precision is 131072.02? (positive, considering 2 decimal digits)
I was trying to figure out for my audio application whether a float can be used to correctly represent the range of parameters I'll use.
The "biggest" mask it needs is for frequency params, which are positive and allow a max of two decimal digits (i.e., from 20.00 Hz to 22000.00 Hz). Conceptually, the following digits will be rounded off, so I don't care about them.
So I made this script to check the first number to collide in single precision:
#include <iostream>

int main() {
    float temp = 0.0f;
    double valueDouble = 0.0;
    double increment = 1e-2;
    bool found = false;
    while (!found) {
        double oldValue = valueDouble;
        valueDouble += increment;
        float value = valueDouble;  // narrowing conversion to single precision
        // found: two consecutive 0.01 steps map to the same float
        if (temp == value) {
            std::cout << "collision found: " << valueDouble << std::endl;
            std::cout << "   collide with: " << oldValue << std::endl;
            std::cout << "float stored as: " << value << std::endl;
            found = true;
        }
        temp = value;
    }
    return 0;
}
and it seems it is 131072.02 (which, like 131072.01, is stored as the same 131072.015625 value), which is far beyond 22000.00. So it seems I would be OK using float.
But I'd like to understand whether that reasoning is correct. Is it?
The whole problem would be if I set a param of XXXXX.YY (7 digits) and it collided with some other number having fewer digits (because single precision only guarantees 6 significant digits).
Note: of course numbers such as 1024.0002998145910169114358723163604736328125 or 1024.000199814591042013489641249179840087890625 collide, and they are within the interval, but they do so at more significant digits than my required precision, so I don't care.
A:
IEEE 754 Single precision is defined as
1 sign bit
8 exponent bits: range 2^-126 to 2^127 (~ 10^-38 to 10^38)
23 fraction (mantissa) bits: decimal precision depending on the exponent
At 22k the exponent will represent an offset of 16384=2^14, so the 23-bit mantissa will give you a precision of 2^14/2^23= 1/2^9 = 0.001953125... which is sufficient for your case.
For 131072.01, the exponent will represent an offset of 131072 = 2^17, so the mantissa will give a precision of 2^17/2^23 = 1/2^6 = 0.015625, which is larger than your target precision of 0.01
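The spacing arithmetic in this answer can be checked directly by round-tripping doubles through 32-bit floats; the following Python sketch (my addition, standard library only) confirms both claims:

```python
import struct

def to_f32(x):
    """Round-trip a Python float (a double) through IEEE 754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# In the 22 kHz range the float32 spacing (ulp) is 2**-9 ~ 0.00195,
# so values 0.01 apart remain distinct.
assert to_f32(21999.99) != to_f32(22000.00)

# Around 131072 = 2**17 the spacing grows to 2**-6 = 0.015625, and the
# first two-decimal collision appears: both inputs round to 131072.015625.
assert to_f32(131072.01) == to_f32(131072.02) == 131072.015625
```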
Paul Slowinski
Paul "The Sting" Slowinski (born 24 September 1980) is a Polish kickboxer, a four-time World Muay Thai Council (WMC) Muay Thai World champion and two-time K-1 World GP 2006 in Auckland and K-1 World GP 2007 in Amsterdam champion. After two years training in Amsterdam, Netherlands under Ernesto Hoost, Slowinski moved back to Adelaide in 2009 and began to teach and train out of Rikers Gym. He has competed in the K-1 and SUPERKOMBAT promotions.
Biography
Paul Slowinski was born in Strzegom, Poland and immigrated to Adelaide, Australia in 1996 as a teenager with his mother and brother. He started kickboxing in 1998 under Alan Wong. He turned pro in February 2001.
He faced Cătălin Moroşanu at the K-1 World Grand Prix 2012 in Tokyo Final 16 on 14 October 2012 and lost via unanimous decision after visiting the canvas twice in round three.
He defeated Nato Lauui via unanimous decision at Kings of Kombat 8 in Melbourne on 8 December 2012.
Slowinski and Ben Edwards met for the fourth time on 23 March 2013 in Canberra, Australia at Capital Punishment 7. Edwards won on points to bring their rivalry to 2–2.
He made his professional MMA debut against Leamy Tato at MMA Downunder 4, on 21 September 2013. The event was held at the Adelaide Arena in Adelaide, South Australia. He spent time training for this bout with the Blackzilians training camp in Boca Raton, Florida.
He lost to Raul Cătinaș by third round KO at the SUPERKOMBAT World Grand Prix 2013 Final in Galați, Romania on 21 December 2013.
Slowinski defeated Tsotne Rogava via unanimous decision to win the WMC World Super Heavyweight (+95 kg/209 lb) Championship at Monte Carlo Fighting Masters 2014 in Monte Carlo, Monaco on 14 June 2014.
Titles
2014 WMC Super Heavyweight World champion
2011 ISKA Heavyweight World champion
2010 K-1 Oceania GP in Canberra runner up
2009 International Kickboxer Magazine Super Heavyweight(+95 kg) champion
2009 WMC Super Heavyweight World champion
2008 WMC Super Heavyweight World champion
2007 WMC Super Heavyweight World champion
2007 K-1 World GP in Amsterdam champion
2006 K-1 World Grand Prix in Auckland champion
2005 KOMA GP in Tokyo champion
2005 WMC World Heavyweight GP champion
2005 King of Oceania champion
2003 WMC Heavyweight World champion
2002 WMC Cruiserweight World champion
2001 WMC Light Heavyweight World champion
2001 Super 8 Tournament in Brisbane champion
1999 IAMTF Australian Super Light Heavyweight champion
1999 King's Birthday Cup Amateur champion
Fight record
Mixed martial arts record
|-
|Loss
|align=center| 1–2
|Michal Andryszak
|TKO (punches)
|KSW 26
|
|align=center| 1
|align=center| 1:06
|Warsaw, Poland
|
|-
|Loss
|align=center| 1–1
|Marcin Rozalski
|Submission (rear-naked choke)
|KSW 24
|
|align=center| 1
|align=center| –
|Lodz, Poland
|
|-
| Win
| align=center| 1–0
| Leamy Tato
| TKO (head kick & punches)
| MMA Downunder 4
|
| align=center| 1
| align=center| 2:27
| Adelaide, Australia
|
See also
List of male kickboxers
List of K-1 Events
References
External links
Flinders University Muay Thai Club
Profile at K-1
Category:1980 births
Category:Living people
Category:Australian male kickboxers
Category:Polish male kickboxers
Category:Light heavyweight kickboxers
Category:Cruiserweight kickboxers
Category:Heavyweight kickboxers
Category:Super heavyweight kickboxers
Category:Australian Muay Thai practitioners
Category:Polish Muay Thai practitioners
Category:Australian people of Polish descent
Category:Polish emigrants to Australia
Category:Naturalised citizens of Australia
Category:People from Strzegom
Category:People from Adelaide
Category:Sportspeople from Lower Silesian Voivodeship
Category:SUPERKOMBAT kickboxers
home buyer seminars
Minnesota First Time Home Buyer Class – Thursday, February 15th, 2018, 6:30-8 PM – Have you thought about possibly buying your first home but you just don’t know where to start? Maybe you have considered it, but you just don’t want to call up a Realtor and be given a sales pitch? We understand. Do you know how to get the most out of the real estate listings by participants of the Minnesota Northstar MLS? In today’s Minneapolis St. Paul housing market, first-time home buyers are finding low interest rates as well as homes with mortgage payments less than rent. But really, where does one start? No worries at all. Our team is here to help you at your pace, and provide you with as much information (in a low pressure setting) about buying your first home in Minnesota. At this first time buyer class, we will first take a look at the entire buying process from the financing and money standpoint. Many buyers want to know: what types of down payment assistance programs may be available to you as a buyer? What price range of home can or should you look at buying? How do you start the process of looking for various properties for sale? At this one and a half hour seminar, Charlie Leimer from The Minnesota Real Estate Team of REMAX Advantage Plus will take you through the entire process of buying your first home: from the first meeting over a cup of coffee where we can learn more about you and what you are looking for in a home, to the process of looking at properties, making offers, and moving forward up through the day of closing. Don’t hesitate to sign up for this great event today!
Investment Property 101 Seminar – Tuesday, February 20th, 2018, 6:30-8 PM – Have you ever thought about becoming a real estate investor? Seen a few of those late night infomercials, your interest may be piqued, but you simply don’t want to spend money to buy all those books and tapes? And you want to know how real estate investing really works here in Minnesota? Look no further. At this free seminar, Ryan O’Neill, leader of The Minnesota Real Estate Team of REMAX Advantage Plus, shares his real estate investing experiences from owning a number of rentals here in Minnesota over the last 14 years. Some of the topics covered include: where to start as an investor, what type of properties to buy in this market, how to rent them, how to finance them, how to deal with tenant issues. Waterstone Mortgage will share some insightful mortgage information at this seminar as well. Whether you are looking to possibly buy and hold some investment properties or fix and flip, this seminar is a great, low pressure yet informative spot to start. Stop on out to this outstanding event!
Investment Property 101.5 Seminar – How We Do It – Tuesday, March 27th, 2018, 6:30-8 PM – You have now attended our Investment Property 101 seminar, and you want to learn even more. Perhaps you are wondering how you can put all of the information further into practice in today’s market? Come and join Dan Myers of RE/MAX Advantage Plus and The Minnesota Real Estate Team as he guides you through the complete process of real estate investing. Dan started out in our Investment Property 101 Seminar a few years back. During the course of his first year, Dan decided he really wanted to pursue this endeavor; he ended up purchasing seven investment properties with our team. After purchasing, rehabbing, and then renting out these properties, Dan then went on to get his real estate license. Now Dan of course is helping other investors to purchase investment properties. At the 101.5 seminar, Dan will walk you through the entire investment property buying process. He goes step by step with all types of print outs, contractor bids, rehab loans, Truth in Housing (TISH) reports, and cash flow analysis spreadsheets. You will see how the actual process is put together, as well as the projected monthly cash flow of the investment property. As usual, there is no cost or obligation by attending. Just outstanding information for you as a real estate investor in today’s Minnesota market.
Field of the Invention
This invention relates generally to recreation equipment and, more specifically, to a camping and recreation trailer suitable for summer or winter use.
Q:
Need help finding smallest value of $x^2 + y^2$
I need to find the smallest value of $x^2 + y^2$ with the restriction $2x + 3y = 6$. This chapter focuses on the vertex formula.
A:
Using the Cauchy-Schwarz inequality, we have $6^2=(2x+3y)^2\leq (x^2+y^2)(2^2+3^2)$, which gives the minimum value of $x^2+y^2$ to be $\frac{36}{13}$.
Edit:Equality occurs for $\frac{x}{2}=\frac{y}{3}$.
A:
Rewrite $x^2 + y^2$ in terms of one of the variables (either $x$ or $y$) using the restriction given to you which will give you a quadratic equation. That should help move you along to the answer.
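As a worked check of this substitution approach (my own Python sketch, not part of the original answers), exact rational arithmetic confirms the minimum of 36/13:

```python
from fractions import Fraction

def f(x):
    # apply the restriction 2x + 3y = 6, i.e. y = (6 - 2x) / 3
    y = (Fraction(6) - 2 * x) / 3
    return x * x + y * y

# the resulting quadratic in x has its vertex (minimum) at x = 12/13
x_star = Fraction(12, 13)
assert f(x_star) == Fraction(36, 13)

# exact arithmetic: no nearby point does better than 36/13
assert all(f(x_star + Fraction(k, 100)) >= Fraction(36, 13) for k in range(-50, 51))
```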
A:
For fun, we give a couple of solutions that are not the intended ones. The solutions are very similar, but the first is expressed algebraically, while the
second brings in the geometry.
$1$) Note that
$$(2x+3y)^2+(3x-2y)^2=13(x^2+y^2).$$
Thus, given that $2x+3y=6$,
$$13(x^2+y^2)= 36+(3x-2y)^2.$$
If we can manage to make $3x-2y=0$, then $13(x^2+y^2)$ will be as small as possible. But the system of two linear equations $2x+3y=6$, $3x-2y=0$ has a solution. There,
$$13(x^2+y^2)=36,$$
so the smallest possible value of $x^2+y^2$ is $36/13$.
$2$) Look at the problem geometrically. We want to find the smallest radius $r$ such that the circle $x^2+y^2=r^2$ meets the line $2x+3y=6$. If we draw a picture, we can see that for this smallest $r$, the line $2x+3y=6$ will be tangent to the circle. Let the point of tangency be $T(a,b)$. The line from the origin to $T$ is perpendicular to the tangent line.
The line $2x+3y=6$ has slope $-2/3$. So the line from the origin to $T$ has slope the negative of the reciprocal of $-2/3$. Thus
$$\frac{b}{a}=\frac{3}{2}.$$
This equation simplifies to $3a-2b=0$. We also have $2a+3b=6$. Now we can solve for $a$ and $b$. But let's not bother. Use the fact that
$$(3a-2b)^2+(2a+3b)^2=13(a^2+b^2)$$
to conclude that $r^2=a^2+b^2=36/13$.
Q:
PHP Time mismatch between date() and time()
I want to insert user entry log in a database table. The column where I want to keep the current date time is "date_time decimal(10,0) NOT NULL DEFAULT '0'". When inserting data I set the field as
$this->mytable->date_time = time();
My query executed successfully. But when I want to display the time of the entry it shows a time which does not match my PC (local server) time. To display the time I write
echo date('Y-m-d h:i:s A', $log->date_time);
I tested several times but it shows a time which is 4 hours less than the exact time. In my test the current time was 2013-09-15 04:46:34 PM but the table row shows 2013-09-15 12:46:34 PM.
Please help me. I cannot find the mistake.
A:
You need to specify the timezone. The time() function will just return a timestamp, which is timezone-independent.
When you use the date() function you are using the server's timezone, I would recommend using the DateTime object:
$timezone = new DateTimeZone("Etc/GMT-4");
$date = new DateTime("@".$log->date_time); // @-symbol indicates timestamp input
$date->setTimezone($timezone);
echo $date->format("r");
Here is a list of supported timezones http://php.net/manual/en/timezones.php
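The same point holds in any language: the stored Unix timestamp identifies an instant and has no timezone; only its formatted rendering does. A small Python illustration (my addition; the timestamp value is arbitrary):

```python
from datetime import datetime, timezone, timedelta

ts = 1379242594  # an arbitrary Unix timestamp; the value itself has no timezone

utc = datetime.fromtimestamp(ts, tz=timezone.utc)
dhaka = utc.astimezone(timezone(timedelta(hours=6)))  # fixed UTC+6, like Asia/Dhaka

# same instant, different wall-clock rendering
assert utc.timestamp() == dhaka.timestamp()
assert (dhaka.hour - utc.hour) % 24 == 6
```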
A:
Sorry. It was my mistake. When inserting data I set the time zone as
if(function_exists('date_default_timezone_set')) date_default_timezone_set("Asia/Dhaka");
But when displaying the data I forgot to set the time zone. It works fine when I set the time zone, as I defined before, in my display page. Thanks everybody for your help.
Q:
DataGrid itemrenderer issue while scrolling
I am using a radio button inside a Spark DataGrid in the following way.
<s:DataGrid dataProvider="{arrList}" >
<s:columns>
<mx:ArrayList>
<mx:source>
<s:GridColumn width="90" headerText="radio">
<s:itemRenderer >
<fx:Component>
<s:GridItemRenderer>
<fx:Script>
<![CDATA[
override public function set data( value:Object ) : void
{
super.data = value;
rdId.group=outerDocument.rbgGroup;
}
]]>
</fx:Script>
<s:RadioButton id="rdId" />
</s:GridItemRenderer>
</fx:Component>
</s:itemRenderer>
</s:GridColumn>
<s:GridColumn headerText="Name" dataField="name" />
</mx:source>
</mx:ArrayList>
</s:columns>
</s:DataGrid>
I have created a group for the radio buttons as I want only one of them to be selected.
<s:RadioButtonGroup id="rbgGroup" />
This works fine. But if I select any radio button, like the first one, and then scroll, another radio button gets selected automatically and the first selection is removed.
I have checked many other posts like this one but they don't seem to work.
The issue occurs only when I scroll.
Any help would be greatly appreciated.
A:
I have done it the following way: set selected = false for all items and selected = true on the one that changed.
<s:GridItemRenderer>
<fx:Script>
<![CDATA[
import spark.components.RadioButtonGroup;
[Bindable]
private static var rbgGroup:RadioButtonGroup = new RadioButtonGroup();
override public function set data( value:Object ) : void
{
super.data = value;
}
protected function rdId_changeHandler(event:Event):void {
if(outerDocument.arrList != null)
{
for each(var obj:Object in outerDocument.arrList)
{
obj.selected = false;
}
}
data.selected=true;
}
]]>
</fx:Script>
<s:RadioButton id="rdId" group="{rbgGroup}" selected="{data.selected}" change="rdId_changeHandler(event)" />
</s:GridItemRenderer>
Hope it helps someone.
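For readers outside Flex, the pattern behind this fix is language-independent: because item renderers are recycled while scrolling, single-selection state must live in the data provider, not in the renderer. A minimal Python sketch of the same idea (illustrative only; `select` mirrors the loop in `rdId_changeHandler`):

```python
# Selection state lives in the data items, never in the recycled renderers,
# so scrolling cannot corrupt it.
def select(items, index):
    for item in items:
        item["selected"] = False       # clear every row, as in rdId_changeHandler
    items[index]["selected"] = True    # mark only the row the user picked

rows = [{"name": n, "selected": False} for n in ("a", "b", "c")]
select(rows, 0)
select(rows, 2)                        # user picks another row
assert [r["selected"] for r in rows] == [False, False, True]
```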
1. Introduction {#sec1-molecules-23-03206}
===============
It is generally admitted that drawing glycans using a chemical notation can be at least cumbersome, if not a challenge. This issue was addressed very early in glycochemistry \[[@B1-molecules-23-03206]\], and several groups have proposed symbolic nomenclatures to ease the representation of complex carbohydrates. Although these representations have evolved from the original idea to several visualization schemes reviewed in \[[@B2-molecules-23-03206]\], the most compelling ones consist of a series of geometrical shapes that symbolize monosaccharide units connected with lines specifying glycosidic linkages. A recent reappraisal of this representation finally rallied a wide community of glycoscientists who have settled on the usage of the Symbol Nomenclature for Glycans (SNFG) \[[@B3-molecules-23-03206]\]. As a result of the dissemination of this symbolic notation, a variety of software applications for drawing glycans has been developed to fulfil mainly two distinct purposes. A drawing interface may be useful to query databases or to input structures for further analysis, modelling, or prediction of properties. Conversely, the extent of glycan encoding formats often requires the translation of commonly used formats into images. The intrinsic user-friendliness of the symbolic nomenclature not only meets the glycoscientists' needs but also simplifies access to glycoscience for non-specialists.
Over the past decades, the variety of glycan representations used in chemistry, glyco-chemistry and glycobiology gave rise to a series of editing tools. We however, limit our coverage to those that meet three requirements: (1) web-based, (2) freely accessible, and (3) exporting structures to standard encoding formats \[[@B2-molecules-23-03206]\]. KegDraw can be considered as the earliest standalone online graphical glycan editor though it was preceded by a basic tool integrated in GlycoSuiteDB for graphical queries in the IUPAC condensed format used in that database \[[@B4-molecules-23-03206]\]. KegDraw was designed to perform a similarity search in the KEGG databases where the original CarbBank \[[@B5-molecules-23-03206]\] is integrated \[[@B6-molecules-23-03206]\]. It was a Java application that needed to be installed and could produce low-resolution images where text labels of monosaccharides were connected by lines. GlycanBuilder \[[@B7-molecules-23-03206]\] is a more recent java applet developed during the EuroCarb project \[[@B8-molecules-23-03206]\]. This tool provides an interface to assemble glycan structures using the graphic visualization scheme proposed by the Consortium of Functional Glycomics (CFG) and described in \[[@B9-molecules-23-03206]\]. GlycanBuilder was upgraded to work in a web environment, but it needs to be installed and connected with a server. Moreover, recent security upgrades of all major browsers seriously challenged the usage of Java applets. This web-based implementation usually involves fairly long time-lags during processing despite a recent upgrade \[[@B10-molecules-23-03206]\]. However, this drawback can be avoided as demonstrated with the glycan structure builder of the GlycoViewer platform \[[@B11-molecules-23-03206]\] a web interface for drawing glycans that pioneered in terms of design, usability and speed. 
A subjective weak point of this tool is the drag-and-drop implementation that allows computer users to draw a glycan structure easily with the help of a mouse. However, this may become tiresome if many and large structures are drawn on a touch-based device such as a tablet or a smartphone. Also, monosaccharides are displayed in text format, like KegDraw, neglecting the advantage brought by a symbolic notation. GlycoViewer, like GlycanBuilder, is composed of a client interface and a server written in Ruby on Rails. The drag-and-drop feature is also used in Glycano, software for drawing glycans entirely written in JavaScript (<http://glycano.cs.uct.ac.za>). Glycano is browser-independent and does not require a server. The interface uses SNFG symbols, though not in the proper color scheme, and lacks the option of positioning monosaccharides according to their linkage. It is designed for trained chemists and glycobiologists, precluding access for non-experts. Polys \[[@B12-molecules-23-03206]\] and DrawGlycan-SNFG \[[@B13-molecules-23-03206]\] are the most recently published tools for drawing glycan structures. Polys is integrated in the Glyco3D portal \[[@B14-molecules-23-03206]\] to mainly serve as an input form for building 3D models. This is also the case of the (unpublished) carbohydrate builder (<http://glycam.org/tools/molecular-dynamics/oligosaccharide-builder/build-glycan?id=1>) of the GLYCAM-Web portal. DrawGlycan-SNFG is standalone and produces high-quality, SNFG-compliant depictions of glycan structures, but data input is limited to IUPAC linear encoding \[[@B15-molecules-23-03206]\]. Users cannot draw a structure interactively, but only use a string encoding to generate images.
In summary, tools developed in the last decade are either incompatible with SNFG or rely on complicated and/or slow interfaces or have other format or usability limitations. From a technical viewpoint, these tools require a continuous connection with a server to support consistent and fast drawing. From a scientific viewpoint, we believe glycan drawing should be democratized. With this in mind, we have developed SugarSketcher, an intuitive and fast interface to draw glycans online. This tool is entirely built in JavaScript and does not need a connection to any server. It is supported by the major browsers and is fully compatible with the SNFG nomenclature. The interface has been streamlined to accommodate expert and non-expert usage. In particular, a "quick mode" allows users with limited knowledge of glycans to build up a structure quickly while the "normal mode" offers a broader range of options regarding the structural features of complex carbohydrates. Its beta-version is currently implemented as another graphic interface for searching structures in CSDB, the Carbohydrate Structure Database \[[@B16-molecules-23-03206]\]. The GlycoCT \[[@B17-molecules-23-03206]\] export feature allows every software project or database supporting GlycoCT to translate the SugarSketcher user input into the project's native notation and to further process the constructed glycan sequences, including piping data to the search engine. Major cheminformatics standards (Simplified Molecular-Input Line-Entry System (SMILES), InChI) are also available for export provided the structure is fully defined (e.g., no undetermined linkage or configuration). Furthermore, the code is destined to be shared and hopefully improved by the community of glycoinformaticians. A prototype of SugarSketcher is currently included in the tool collection of Glycomics\@ExPASy \[[@B18-molecules-23-03206]\] as a standalone application.
It can be accessed at <https://glycoproteome.expasy.org/sugarsketcher> while the code is available on GitHub at <https://github.com/alodavide/sugarSketcher>.
2. Results {#sec2-molecules-23-03206}
==========
SugarSketcher is divided in two main components: the core JavaScript library and the D3.js (<https://d3js.org>)-based interface. This division provides two main advantages: (1) the core library can be used standalone or integrated in other web applications that handle information about glycan structures; (2) the interface can be modified by collaborators without changing the underlying core library. In the "Materials and Methods" section we present how the two components have been built using JavaScript and a set of libraries rich in visualization components.
The interface of SugarSketcher has been designed to address the increasing popularity of glycans among scientists with only basic knowledge of carbohydrates. Without knowledge of the chemistry behind each monosaccharide, average users can quickly draw glycan and derivative structures using the "quick mode". In this case, the interface presents the 12 monosaccharides most commonly found in mammalian glycans, reflecting the bias in structural data production observed, for example, in \[[@B19-molecules-23-03206]\]. This limited set of monosaccharides is used as LEGO© bricks to sequentially build a structure. Each time a new monosaccharide is added to a structure, the user needs to input only the anomericity of the linkage and the attachment position of the monosaccharide. To depict monosaccharides and glycans, SugarSketcher uses the SNFG icons.
Positioning monosaccharides based on the acceptor linkage alone can result in overlaps. For example, this situation occurs when two nodes from different branches would land in the same place. We have created a grid system that tracks whether a position is free or already occupied by a monosaccharide. At present, positioning follows the option of depicting monosaccharide linkages with embedded type and anomericity \[[@B20-molecules-23-03206]\]. [Figure 1](#molecules-23-03206-f001){ref-type="fig"}a shows a galactosylated and sialylated N-glycan core drawn with the "quick mode" option.
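The occupancy bookkeeping can be pictured with a minimal sketch (a hypothetical illustration, not SugarSketcher's actual implementation; class and method names are invented):

```javascript
// Hypothetical sketch of a placement grid that avoids overlapping residues.
// Each cell is either free or taken; on a collision the residue is shifted
// to the next free row, mimicking how branch overlaps can be resolved.
class PlacementGrid {
  constructor() { this.taken = new Set(); }
  key(x, y) { return `${x},${y}`; }
  isFree(x, y) { return !this.taken.has(this.key(x, y)); }
  // Claim (x, y) if free; otherwise shift until a free cell is found
  place(x, y) {
    while (!this.isFree(x, y)) y += 1;
    this.taken.add(this.key(x, y));
    return [x, y];
  }
}

const grid = new PlacementGrid();
console.log(grid.place(1, 0)); // [ 1, 0 ]
console.log(grid.place(1, 0)); // collision, shifted: [ 1, 1 ]
```

The real interface additionally has to respect the linkage-dependent placement rules, but the free/occupied bookkeeping follows the same idea.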
SugarSketcher provides a "normal mode" for glycochemists and glycobiologists. After switching off the "quick mode", a user gains access to a more sophisticated menu. Now, the addition of a monosaccharide requires knowledge of stereochemistry and ring types, and not only acceptor, but also a donor linking position must be specified to create a glycosidic linkage. In the "normal mode", the user can decorate monosaccharides with substituents and can add repeating units. In addition to the delete function available in the "quick mode", an experienced user can modify each monosaccharide using the update button. [Figure 1](#molecules-23-03206-f001){ref-type="fig"}b shows an example of a bacterial LPS drawn in the "normal mode" following the drawing procedure summarized in [Figure 2](#molecules-23-03206-f002){ref-type="fig"}.
The overall drawing process is a succession of feature selection steps that ends with the display of an SNFG icon. This process is repeated as many times as the target structure contains building blocks. To realize the example in [Figure 1](#molecules-23-03206-f001){ref-type="fig"}b, the user is first invited to "Add Node" in the top menu. Mousing over this task reveals two options: monosaccharide and substituent. In the example the first entity is a monosaccharide ([l]{.smallcaps}-glycero-[d]{.smallcaps}-manno-heptose), which is therefore the selected option. Clicking on it prompts the display of a first array of geometrical shapes. In the example, clicking on the hexagon then prompts a second array of colors represented as drops. Clicking on the green drop will result in moving to the next step, which is the sequential selection of options for anomericity, ring and linkage characteristics. Selecting optional values at each step (fully described in [Supplementary S1--S5](#app1-molecules-23-03206){ref-type="app"}) will lead to the placement of the corresponding monosaccharide (alpha-linked [l]{.smallcaps}-glycero-[d]{.smallcaps}-manno-heptose) in the main space of the interface.
SugarSketcher was benchmarked against currently available web interfaces in terms of speed for drawing a selection of small, middle-sized, and large molecules from several glycan databases. It was also compared to other tools according to a list of qualitative criteria. The results of the speed tests are summarized in S6 in [Supplementary Material](#app1-molecules-23-03206){ref-type="app"}. SugarSketcher functionality and performance are compared to other tools in [Table 1](#molecules-23-03206-t001){ref-type="table"}.
3. Discussion {#sec3-molecules-23-03206}
=============
We encourage glycoinformatics project holders to integrate SugarSketcher as an alternative structure input tool. Currently it has been incorporated in CSDB \[[@B16-molecules-23-03206]\]. Ultimately it is also destined to become the graphic interface for querying databases and running software from the glycomics\@ExPASy collection.
At the moment, SugarSketcher supports the depiction of glycan structures following the standard imposed by SNFG. However, there are various features that we would like to implement in the near future:

- On-the-fly copy-paste of the glycan image
- Import of a structure via the URL
- Automatic adjustment of parameters where it is possible to detect chemically forbidden combinations
Several users have asked for the possibility of copying the glycan image from SugarSketcher to other applications via the clipboard. The choice of high-resolution Scalable Vector Graphics (SVG) as the primary image format precludes this possibility since no general-purpose software supports it. Other image formats will be included.
Another feature in development is the import of a glycan from the GlycoCT or IUPAC string in the URL parameter for an automatic image generation.
As SugarSketcher is developed primarily for non-glycobiologists, one of the main features we are working on is the possibility of automatic preselection of various parameters where it is possible to infer them from the user input. For example, the linking position in a donor residue (sometimes referred to as "anomeric carbon") can be automatically detected based on the residue type (aldose vs. ketose). We provide in [supplementary material](#app1-molecules-23-03206){ref-type="app"}, the non-exhaustive list of parameters/rules to be introduced in the next version to control the consistency of users' input.
4. Materials and Methods {#sec4-molecules-23-03206}
========================
SugarSketcher is divided into two parts, the library and the interface. The library is a collection of JavaScript files which get compressed into a single file. The interface is composed of nine files plus the index.html and two Cascading Style Sheet (CSS) files, which can be merged into one. In the end, the complete SugarSketcher needs 13 files.
4.1. The Core Library {#sec4dot1-molecules-23-03206}
---------------------
The main assumption behind the core library is that each glycan can be represented as a graph. This concept has already been applied by users of MzJava \[[@B21-molecules-23-03206]\] from which the core library is inspired. To avoid reinventing the wheel, the Graph class has been taken from Sigma.js, an established JavaScript library (<http://sigmajs.org>) which allows the integration of graphs in a web environment.
The Graph class, together with the GraphNode and GraphEdge classes, forms the general data structure that can handle a generic graph. Since a glycan structure holds specific chemical information for each node and edge, the basic data structure is extended in the glycomics package to encapsulate glycan-specific information. For example, GraphNode is extended by the Monosaccharide and Substituent classes. The Glycan class allows the creation of saccharide objects by connecting monosaccharides and substituents with glycosidic and substituent linkages, respectively.
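These class relationships can be sketched roughly as follows (illustrative stand-ins only: names, fields, and constructors here are invented for the example and are not SugarSketcher's actual API):

```javascript
// Minimal sketch of a graph-based glycan model (illustrative names only)
class GraphNode {
  constructor(id) { this.id = id; }
}

// Glycan-specific node: extends the generic graph node with chemical fields
class Monosaccharide extends GraphNode {
  constructor(id, type, anomericity, ringType) {
    super(id);
    this.type = type;               // e.g. "Gal", "GlcNAc"
    this.anomericity = anomericity; // "alpha" | "beta" | "unknown"
    this.ringType = ringType;       // "p" (pyranose) | "f" (furanose)
  }
}

// Glycan-specific edge between a donor and an acceptor residue
class GlycosidicLinkage {
  constructor(donor, acceptor, donorPos, acceptorPos) {
    this.donor = donor;             // child residue
    this.acceptor = acceptor;       // parent residue
    this.donorPos = donorPos;       // usually the anomeric carbon
    this.acceptorPos = acceptorPos; // attachment position on the acceptor
  }
}

// A glycan is just a graph of residues connected by typed linkages
class Glycan {
  constructor(root) { this.nodes = [root]; this.edges = []; }
  addMonosaccharide(residue, linkage) {
    this.nodes.push(residue);
    this.edges.push(linkage);
  }
}

// Build Gal(b1-4)GlcNAc
const glcnac = new Monosaccharide("n1", "GlcNAc", "beta", "p");
const gal = new Monosaccharide("n2", "Gal", "beta", "p");
const glycan = new Glycan(glcnac);
glycan.addMonosaccharide(gal, new GlycosidicLinkage(gal, glcnac, 1, 4));

console.log(glycan.nodes.length, glycan.edges.length); // → 2 1
```

The point of the design is that parsers, writers, and renderers only ever see a graph, so any glycan encoding that can be mapped onto nodes and edges plugs in naturally.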
To control data input, the core library provides a collection of dictionaries, one for each glycan-specific entity. These dictionaries encompass anomericity, ring type, isomer, etc.; see [Tables S1--S5](#app1-molecules-23-03206){ref-type="app"}. The user is required to pick an entry from the dictionary, thereby blocking possible arbitrary inputs. At the level of nodes, the library provides dictionaries for MonosaccaridesTypes, SubstituentTypes, RingTypes and Anomericity. In addition, edges are defined from donor and acceptor linkage positions. The data structure is detailed in [Figure 3](#molecules-23-03206-f003){ref-type="fig"}, where the relationships between entities are shown.
Since glycan structures are not always fully defined, the core library has been designed to handle fuzziness in both edges (linkage position) and nodes (monosaccharide type, its anomeric, absolute and ring size configurations). The user can input monosaccharides with multiple alternative connections on each carbon. Repeating parts of structure are handled by the repeating unit class and can be added to the glycan structure as single nodes. A collection of glycan structures is already encoded in the library and can be directly used to create sugar objects.
The input-output section of the library allows the import and export of glycan sequences. Since we are mainly using GlycoCT encoding \[[@B17-molecules-23-03206]\] across the tools in Glycomics\@ExPASy \[[@B18-molecules-23-03206]\], the first parser/writer pair implements the GlycoCT standard. Parsers and writers are completely decoupled from the data structure, allowing the implementation of adapters for any glycan encoding format. The way the library is built and the Apache license 2.0 (<https://www.apache.org>) allow any research group to contribute their own import and export adapters.
The core library has been designed to facilitate external contributions and encourage further extension. Except for the Graph class, the code follows the ECMAScript6 (ES6) standard and comes with unit tests. The project includes resources for minification and transpilation to ECMAScript5 (ES5).
4.2. The Interface {#sec4dot2-molecules-23-03206}
------------------
SugarSketcher runs in two different modes: "quick mode" and "normal mode", the application of which is illustrated in the "Results" section. In either mode, a collection of pre-built structures can be used as a template amenable to extension. This selection currently mirrors the over-representation of animal N- and O-linked carbohydrate moieties of glycoproteins in recent databases and in the literature (except for CSDB \[[@B16-molecules-23-03206]\]). All N- and O-linked core structures reported in \[[@B9-molecules-23-03206]\] are listed. Additionally, a shortlist of glycan epitopes is provided. For example, to draw a di-antennary core-fucosylated N-linked structure, an N-linked core-fucosylated template can be loaded and the first antenna added manually. Then this antenna is selected and, with the copy-paste functionality available on right click, the second antenna can be pasted to complete the structure. Mistakes can be corrected with the "Delete" button that has been enabled to prune the tree-like structure.
Once the structure is completed, the depiction can be downloaded in high-resolution SVG format. As an alternative to images, SugarSketcher provides an export of glycan structures to the GlycoCT machine-readable format \[[@B17-molecules-23-03206]\]. Since several glycoinformatic tools provide export in GlycoCT, SugarSketcher has an internal engine for parsing and displaying GlycoCT-encoded structures. In that respect, GlycoCT-encoded structures can also be imported and further modified in the SugarSketcher interface. We also added newly developed converters (unpublished) to extend export options to cheminformatics standards, namely SMILES, InChi (IUPAC International Chemical Identifier) and InChiKey, commonly adopted in major compound databases such as PubChem \[[@B22-molecules-23-03206]\] and ChEBI \[[@B23-molecules-23-03206]\]. At this stage, this export is restricted to fully defined structures, i.e., it cannot be applied to structures with undetermined linkages or residues.
The interface is built using HTML5, CSS3 and the JavaScript library D3.js (V3). To handle all glycan information, the interface is connected to the core library (see [Section 4.1](#sec4dot1-molecules-23-03206){ref-type="sec"}). SugarSketcher works with any browser that supports JavaScript (up to version ES6) and can be integrated in any website. In addition, it can be combined with web-based glycoinformatics tools that accept GlycoCT as an encoding format. For example, GlycoDigest \[[@B24-molecules-23-03206]\], which simulates the digestion of glycans by exoglycosidases, accepts structure input in GlycoCT format. The same is true for several modern glycan databases, such as GlyTouCan \[[@B25-molecules-23-03206]\] or CSDB \[[@B16-molecules-23-03206]\].
5. Conclusions {#sec5-molecules-23-03206}
==============
To conclude, we are aware that SugarSketcher still shows weaknesses that are listed and further documented in the [Supplementary Materials](#app1-molecules-23-03206){ref-type="app"} file. Nonetheless, we are actively attending to these items and invite users and developers to participate in the GitHub issue tracker to send feedback and report bugs.
We thank Claire Doherty and Prof Sabine Flitsch for making this work fit the scope of the IB-Carb network and Elisabeth Gasteiger for helping with integration in the ExPASy server.
The following are available online at <http://www.mdpi.com/1420-3049/23/12/3206/s1>. (1) Installation; (2) List of missing features and shortcomings; (3) Known bugs; (4) Currently imposed constraints; (5) Open-to-discussion rules that would prevent user's mistakes (to be implemented).
######
Click here for additional data file.
Conceptualization, D.A., J.M. and F.L.; methodology, D.A. and F.L.; software, D.A., N.H., R.C., P.S. and J.M.; validation, D.A., P.T., J.M., R.S.V. and F.L.; writing---original draft preparation, D.A. and F.L.; writing---review and editing, P.T., R.S.V. and F.L.; supervision, J.M. and F.L.; funding acquisition, R.S.V., P.T. and F.L.
This work was supported by the European Union FP7 Innovative Training Network \[grant number 316929\], IB-Carb (<http://ibcarb.com/>) and by the Swiss Federal Government through the State Secretariat for Education, Research and Innovation SERI. ExPASy is maintained by the web team of the Swiss Institute of Bioinformatics and hosted at the Vital-IT Competency Center. Architectural and interface design testing was supported by Russian Science Foundation, grant 18-14-00098. P.S. benefits from RIAT-CZ (ATCZ40).
The authors declare no conflict of interest.
The appendix contains details of the current procedure for importing the software, the full description of dictionaries in tables, lists of shortcomings and known bugs as well as suggestions for introducing consistency rules. All is saved in a [Supplementary file](#app1-molecules-23-03206){ref-type="app"}.
######
(**a**) Screenshot of the SugarSketcher interface after the completion of an N-glycan core carried out in the "quick mode". The upper menu shows a selection of the 12 monosaccharides most frequently observed in the composition of mammalian glycans. Monosaccharides are positioned following the option proposed in \[[@B16-molecules-23-03206]\]. Linkage is indicated by the bond angle, whereas anomericity is indicated by solid (β) or dashed (α) lines. (**b**) Screenshot of the SugarSketcher interface after the completion of a glycan carried out in the "normal mode". The top menu shows a much broader range of possible monosaccharides. The same positioning procedure applies.


{#molecules-23-03206-f002}
{#molecules-23-03206-f003}
molecules-23-03206-t001_Table 1
######
Qualitative comparison of six tools generating SNFG-compatible glycan pictures.
|  | SugarSketcher \[[@B1-molecules-23-03206]\] | POLYS Builder \[[@B2-molecules-23-03206]\] | GlyTouCan \[[@B3-molecules-23-03206]\] | CSDB Wizard \[[@B4-molecules-23-03206]\] | GlycoViewer \[[@B5-molecules-23-03206]\] | Carbohydrate Builder \[[@B6-molecules-23-03206]\] |
|---|---|---|---|---|---|---|
| Library of pre-defined structures | yes | no | yes | yes | no | no |
| Edit a library; add substituents | yes | no | yes | via menu | yes | no |
| Selection of sugar residues via graphic symbols | yes | yes | yes | yes | no | no |
| Selection of sugar residues via text description | no | yes | no | yes | yes | yes |
| Clicks for a disaccharide \* | 16/6 | 8 | 12 | 9 | 7 | 10 |
| Model time of a disaccharide \* \[min\] | 0:21/0:10 | 0:16 | 0:41 | 0:32 | 0:56 | 0:21 |
| Import | GlycoCT, Library | INP (internal format) | GlycoCT, Library, CarbBank, Linucs, IUPAC, WURCS | GlycoCT, Library | no | no |
| Export | GlycoCT, SMILES, InChi, InChiKey, SVG | INP, PDB | GlycoCT, Glyde, Linucs, WURCS | GlycoCT, WURCS, SMILES, GLYDE-II, GLYCAM, LINUCS, MOL-file | no | PDB |
| Implementation | JavaScript | PHP, C | Java | PHP, JavaScript | Ruby, JavaScript | unknown |
\* For SugarSketcher, the first value is for the normal mode and the second for the quick mode. Corresponding URLs: \[[@B1-molecules-23-03206]\] <https://glycoproteome.expasy.org/sugarsketcher/>; \[[@B2-molecules-23-03206]\] <http://glycan-builder.cermav.cnrs.fr/>; \[[@B3-molecules-23-03206]\] <https://glytoucan.org/Structures/graphical>; \[[@B4-molecules-23-03206]\] <http://csdb.glycoscience.ru/database/core/wizard.html>; \[[@B5-molecules-23-03206]\] <http://www.glycoviewer.babs.unsw.edu.au/sequence_sets/add.xhtml>; \[[@B6-molecules-23-03206]\] <http://glycam.org/tools/molecular-dynamics/oligosaccharide-builder/build-glycan?id=1>.
Stress Level Zero just offered up a tiny new look at its hotly-anticipated VR shooter, Boneworks. Or at least a look at how the team is making it.
Instead of another jaw-dropping gameplay video, this video focuses on how the team works together efficiently. Specifically, the team talks about Plastic, an application that allows them to easily sync up their latest work with others to make collaboration much easier. It gives you a good idea of how a team of this size is able to work on a game of this sort of ambition.
But don’t worry, there is a tiny tease of gameplay too. We see another impressive display of Boneworks’ physics. The team’s Brandon Laatsch piles a bunch of weapons into a trash can, then presses a button using the trash can before emptying them all out on the floor. It’s not really the point of the video, but we continue to be amazed at just how realistic and considered Boneworks’ laws seem to be.
In fact, that’s why we said it felt like VR’s first next-generation game when we last played it. Boneworks promises to raise the bar in giving players a sense of physical presence in VR; you can realistically grab and interact with just about any object. Suffice to say, we’re excited to get our hands on the full thing.
As for a release date? We’re still expecting it to launch in 2019 on Index, Rift, Vive and Windows VR. The Steam store listing for the game still says as much. We don’t have a date more specific than that sadly. We’ll keep a lookout for more new Boneworks video content and keep you updated as soon as we hear more news.
Q:
using find sort and wc -l in the one command
This is how find files using find and show the number of lines in each file
$ find ./ -type f -name "data*.csv" -exec wc -l {} +
380723 ./data_2016-07-07-10-41-13.csv
369869 ./data_2016-07-11-10-42-01.csv
363941 ./data_2016-07-08-10-41-50.csv
378981 ./data_2016-07-12-10-41-28.csv
1493514 total
how do I sort the results by file name? Below is my attempt, but it is not working.
$ find ./ -type f -name "data*.csv" -exec wc -l {} + | sort
1493514 total
363941 ./data_2016-07-08-10-41-50.csv
369869 ./data_2016-07-11-10-42-01.csv
378981 ./data_2016-07-12-10-41-28.csv
380723 ./data_2016-07-07-10-41-13.csv
$ find ./ -type f -name "data*.csv" | sort -exec wc -l {} +
sort: invalid option -- 'e'
Try `sort --help' for more information.
$ find ./ -type f -name "data*.csv" -exec sort | wc -l {} +
find: wc: {}missing argument to `-exec'
: No such file or directory
wc: +: No such file or directory
0 total
$
Can someone offer a solution and correct me so I understand it better?
EDIT1
from man sort
-k, --key=POS1[,POS2]
start a key at POS1 (origin 1), end it at POS2 (default end of line). See POS syntax below
POS is F[.C][OPTS], where F is the field number and C the character position in the field; both are origin 1. If neither -t nor -b is in effect, characters in a field are counted from the begin‐
ning of the preceding whitespace. OPTS is one or more single-letter ordering options, which override global ordering options for that key. If no key is given, use the entire line as the key.
A:
Ismail's suggestion of using sort -k is correct. However, I'm often too lazy to learn (or relearn) how -k works, so here's a cheap solution:
find . -name 'data*.csv' -print0 | sort -z | xargs -0 wc -l
Edit: after some experimentation, I did figure out how -k works:
find . -name 'data*.csv' -exec wc -l {} + | sort -k 2
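As a small self-contained demonstration (with made-up filenames), `-k 2` starts the sort key at the second whitespace-separated field of the `wc -l` output, i.e., the file path, so the listing comes out ordered by name rather than by line count:

```shell
# Create a few sample CSV files with different line counts (hypothetical names)
dir=$(mktemp -d)
printf 'a\nb\nc\n' > "$dir/data_2016-07-08.csv"   # 3 lines
printf 'a\n'       > "$dir/data_2016-07-07.csv"   # 1 line
printf 'a\nb\n'    > "$dir/data_2016-07-12.csv"   # 2 lines

# -k 2 sorts on the 2nd field (the filename) instead of the leading count
sorted=$(find "$dir" -type f -name "data*.csv" -exec wc -l {} + | sort -k 2)
echo "$sorted"
```

The output lists `data_2016-07-07.csv` first and `data_2016-07-12.csv` last regardless of their line counts, with the `total` line sorting after the file paths.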
Q:
Pass an image file to AWS lambda
I would like to know if there is a way to pass an image file from the client and send it to AWS lambda function. I ask this because I have to save the image file in a S3 bucket but I want to rename and compress the file in the lambda function before uploading it. If it's not possible give me your suggestion.
A:
It is possible. Save the image (PutObject) in an S3 bucket. This is called the push model, where a PutObject in S3 triggers a lambda execution. The S3 object name (key) is passed to the lambda function. The lambda, when invoked, downloads the image file, resizes it and uploads the resized image to a different bucket in S3.
AWS has detailed documentation and example for your use case. Check Using AWS Lambda with Amazon S3 and Tutorial: Using AWS Lambda with Amazon S3
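A minimal sketch of the handler side of this flow (bucket and key names here are invented; the actual download/resize/upload calls with boto3 and Pillow are left as comments since they need AWS credentials to run):

```python
import os

def resized_key(key, prefix="resized-"):
    """Derive a destination object name for the resized image.
    This naming scheme is just an example, not an AWS convention."""
    folder, name = os.path.split(key)
    return os.path.join(folder, prefix + name)

def handler(event, context=None):
    # Push model: the triggering object's bucket and key arrive in the S3 event
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # In a real function you would now, e.g. with boto3 + Pillow:
    #   s3.download_file(bucket, key, "/tmp/in.jpg")
    #   ... open, resize, save to /tmp/out.jpg ...
    #   s3.upload_file("/tmp/out.jpg", target_bucket, resized_key(key))
    return {"source": f"{bucket}/{key}", "target": resized_key(key)}

# Minimal fake S3 event for local testing
event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "img/cat.jpg"}}}]}
print(handler(event))  # {'source': 'uploads/img/cat.jpg', 'target': 'img/resized-cat.jpg'}
```

Writing the resized result to a *different* bucket (or key prefix) matters: uploading it back to the same bucket under the same trigger would fire the lambda again and loop.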
G20 Summit September 2013: USA in Big Trouble, as World Decides to Abandon the Dollar
During the 2008 crisis, Fed has been printing tons of money and borrowed it out. Now if they increase the interest rate, how are people going to pay back this money? Without increasing rate already seemed like this money can’t be paid back. I think America is the biggest bubble in the history of man kind. The question : When is the bubble going to burst?
Who does the Fed think they’re kidding? Every 1% rise in rates adds $170 billion to the federal deficit, compounded annually. Raise rates by 3% to get back to “normal” levels, and the increase in the deficit amounts to 20% of revenues and growing.
The understanding is that the US has not been invited to the G20 meeting in Russia. Talk about Obama not wanting to meet up with Putin in Russia is BS! Dr. Jim Willie, a highly credible source, is saying that a Gold Trade Settlement system has been worked out and will replace the USD in international trade settlement. The price of gold will be set in the region of US$7000/oz and silver about US$150/oz. We will know very soon whether this plan is executed in September! Watch the price of gold leading up to the G20 meeting on 5-6 September, St. Petersburg, Russia!
The G-20 summit is coming up in less than a month. It will be held in St. Petersburg (no, not Florida), the city in Russia. This, I believe, will be the meeting where the world decides to abandon the dollar, or at least sets the wheels in motion. We have, to put it bluntly, ticked the world off in so many different ways that for all intents and purposes we were not really even invited to the meeting. No matter what, President Obama has refused to meet with Mr. Putin "because" of the Snowden affair. This is wrong on so many levels but most importantly will isolate the U.S. even further. The sad thing is that we have done this to ourselves with the bailout, stimulus and other scams pulled off by our corrupt government.
–
I think the best way to describe what I believe will happen is that the U.S. will be looking in from the outside while the rest of the world votes “on our fate.” Will a new currency, one that has some sort of tie to gold be announced?
Odds are better than 50-50 that a “new currency” is at least mentioned. Even if a new currency is not announced, I believe the odds are better than 80% that “non dollar” settlement of oil and other commodities will be discussed. Earlier, Prime Minister Medvedev is said to have issued an alert to Russians all over the world to “get their money out” of U.S. and Western banks. Why?
–
Is it a coincidence that the G-20 summit is being held in Russia and their #2 in command would make a statement like the above? Is it a coincidence that Western gold inventories are being depleted in a “bank run” fashion? Is it coincidence that GOFO (Gold forward rates) are now negative 25 days running and negative out to 9 month contracts? In the past, GOFO rates have only been negative 3 times that I can remember and only for 2 days for each instance, what’s up now? Interestingly, the dollar index has softened and broken trend lines and supports, again, why?
–
We have seen Russia and China take the other side to the U.S. pertaining to Syria (and their natural gas pipeline), Vladimir Putin declined a $15 billion arms deal with Saudi Arabia because they wanted to tie the gas pipeline to Europe as part of a deal… We see these same parties taking opposite sides in the “non” coup in Egypt. My point is that the U.S. can no longer “speak” and watch as the world bows down to our wishes, those days are over.
–
Many are saying that gold is “dead” money until at least Sept., I am not so sure. “Front running” of any deal that has already been made (and this is exactly how political deals are made) will show up in the markets BEFORE any announcements are made. Watch the dollar and Treasury bonds for clues to what is coming. There is potential for an announcement for settlement of oil trades in other currencies which will “send” Treasury bonds back home to our shores. The Fed is already buying up the majority of Treasury issuance, new supply from foreigners wanting out will add to their strain.
–
This “front running” could also show up in the price of gold and silver and of course the potential exists that we find out about an inventory failure. Inventory levels are very low, it will not take much to clean them up and out. All that has lined up is no coincidence and the day is coming where massive “gaps” will open …never again to be filled. Stocks, bonds, commodities and currencies will be jolted open and not looked back to their initial movements. If you sold metal or haven’t bought yet the mentality of “Oh, I’ll buy it on a pullback” won’t be available once this thing breaks. Getting on board or getting back in will be the toughest thing to do mentally …if it’s even an option if we see an exchange default.
Q:
Java 8 Stream function grouping to Map where value is a Map
I am trying to collect result of a list and organise them into a Map where the value is a Map:
private Map<Organisation, Map<LocalDate, Status>> getSummaries(final List<Status> summaries) {
return summaries
.stream()
.collect(groupingBy(Status::getOrganisation,
toMap(Status::getProcessedDate, Function.identity())));
}
I get java.lang.IllegalStateException: Duplicate key error as getProcessedDate() is same for different values in the list.
Is there a way I can merge multiple objects with same processeddate into the map?
e.g say I have these objects in the the list:
Summary(ProcesseDate=2020-01-30, Organisation=ABC, status=OK, statusCount=5)
Summary(ProcesseDate=2020-01-30, Organisation=ABC, status=FAILED, statusCount=2)
Summary(ProcesseDate=2020-01-30, Organisation=APPLE, status=OK, statusCount=5)
Summary(ProcesseDate=2020-01-30, Organisation=APPLE, status=REJECTED, statusCount=5)
Values contained in the map should be:
key=ABC
value { key=2020-01-30, value= Summary(ProcesseDate=2020-01-30, Organisation=ABC, status=OK, statusCount=5), Summary(ProcesseDate=2020-01-30, Organisation=ABC, status=FAILED, statusCount=2) }
When I tried toMap(Status::getProcessedDate, Function.identity(), (v1, v2) -> v2))); it removes one of the entries
A:
You might just be looking for a nested grouping if you don't want to merge data and don't have unique keys:
private Map<Organisation, Map<LocalDate, List<Status>>> getSummaries(final List<Status> summaries) {
return summaries
.stream()
.collect(groupingBy(Status::getOrganisation, groupingBy(Status::getProcessedDate)));
}
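To see the nested grouping in action, here is a self-contained runnable version using String stand-ins for `Organisation` and `LocalDate` (the `Summary` class below is an invented simplification of the question's `Status` objects, kept minimal for brevity):

```java
import java.util.*;
import static java.util.stream.Collectors.*;

public class GroupingDemo {
    // Simplified stand-in for the Status/Summary class in the question
    static class Summary {
        final String date, org, status;
        final int count;
        Summary(String date, String org, String status, int count) {
            this.date = date; this.org = org; this.status = status; this.count = count;
        }
        String org() { return org; }
        String date() { return date; }
    }

    public static void main(String[] args) {
        List<Summary> summaries = List.of(
            new Summary("2020-01-30", "ABC", "OK", 5),
            new Summary("2020-01-30", "ABC", "FAILED", 2),
            new Summary("2020-01-30", "APPLE", "OK", 5),
            new Summary("2020-01-30", "APPLE", "REJECTED", 5));

        // Nested grouping: organisation -> date -> all matching summaries.
        // Duplicate keys are no longer a problem because values are Lists.
        Map<String, Map<String, List<Summary>>> byOrgAndDate = summaries.stream()
            .collect(groupingBy(Summary::org, groupingBy(Summary::date)));

        System.out.println(byOrgAndDate.get("ABC").get("2020-01-30").size()); // prints 2
    }
}
```

Both entries with the same organisation and date end up in the same list, which is exactly what the `toMap` version could not express without a merge function.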
Bernard Gross
Bernard M. Gross (born May 22, 1935) is a former Democratic member of the Pennsylvania House of Representatives.
He was born in Philadelphia.
References
Category:Members of the Pennsylvania House of Representatives
Category:Pennsylvania Democrats
Category:Living people
Category:1935 births
Category:Politicians from Philadelphia
The relationship of school breakfast to psychosocial and academic functioning: cross-sectional and longitudinal observations in an inner-city school sample.
To determine if a relationship exists between participation in a school breakfast program and measures of psychosocial and academic functioning in school-aged children. Information on participation in a school breakfast program, school record data, and in-depth interviews with parents and children were collected in 1 public school in Philadelphia, Pa, and 2 public schools in Baltimore, Md, prior to the implementation of a universally free (UF) breakfast program and again after the program had been in place for 4 months. One hundred thirty-three low-income students had complete data before and after the UF breakfast program on school breakfast participation and school-recorded measures, and 85 of these students had complete psychosocial interview data before and after the UF breakfast program. Teacher ratings of behavior before and after the UF breakfast program were available for 76 of these students. Schoolwide data showed that prior to the UF breakfast program, 240 (15%) of the 1627 students in the 3 schools were eating a school-supplied breakfast each day. Of the 133 students in the interview sample, 24 (18%) of the students ate a school-supplied breakfast often, 26 (20%) ate a school-supplied breakfast sometimes, and 83 (62%) ate a school-supplied breakfast rarely or never. Prior to the UF breakfast program, students who ate a school-supplied breakfast often or sometimes had significantly higher math scores and significantly lower scores on child-, parent-, and teacher-reported symptom questionnaires than children who ate a school-supplied breakfast rarely or never. At the end of the school term 4 months after the implementation of the UF breakfast program, school-supplied breakfast participation had nearly doubled and 429 (27%) of the 1612 children in the 3 schools were participating in the school breakfast program each day. In the interview sample, almost half of the children had increased their participation. 
Students who increased their participation in the school breakfast program had significantly greater increases in their math grades and significantly greater decreases in the rates of school absence and tardiness than children whose participation remained the same or decreased. Child and teacher ratings of psychosocial problems also decreased to a significantly greater degree for children with increased participation in the school breakfast program. Both cross-sectional and longitudinal data from this study provide strong evidence that higher rates of participation in school breakfast programs are associated in the short-term with improved student functioning on a broad range of psychosocial and academic measures.
Small collie puppy, 1.5 months old, male. € 0 (Rhodes)
Listing code: 70157-1-0
Date posted: 14-04-2018 16:00:48
Reply to: [email protected]
A 1.5-month-old male puppy is being given away for free, a small collie breed that stays small. For information call 6944152293.
Listed by: Private individual (unregistered user)
Listing type: Offer
Edward Wasserman, writing in the Feb. 18 Miami Herald, makes an obvious but still unsettling point about the news business:
The nearly two-century-old marriage between consumer advertising and journalism is on the rocks.
Prof. Wasserman, the Knight professor of journalism ethics at Washington and Lee University, recounts the two hundred years from the penny press to the difficulties that “new media” have with a business model that presumes people will pay for news — and therefore advertisers will pay to park themselves in front of those eyeballs. But, says Prof. Wasserman:
That era is now ending, not because the public no longer needs news or because people mistrust news any more than they always have — but because new technologies are churning out better ways to reach customers who are shopping for cars, jobs or homes.
For two centuries, advertising has supported journalism. The First Amendment guarantees freedom of the press — but does not guarantee profitability. That news organizations must achieve without government support.
And they have been doing that. Even now, newspapers represent cash cows for their owners. Profit margins hover in the mid-teens, although they’re down from the heydays of 20-plus percent of just a decade ago. As investors demand that profits be maintained, news companies have reduced expenses — primarily by cutting staffs and curtailing geographical circulation — to do so.
No more, says Prof. Wasserman. Those means will not allow profitability much longer. A new revenue model is needed. He reviews the revived notions of foundation ownership and public financing and finds them lacking.
In some respects such [foundation] patronage is hugely appealing, though as [the American Journalism Review article] suggests the dangers to editorial independence can be no less serious than with advertising support: Indeed, advertisers could be sublimely indifferent to editorial content as long as it was drawing a crowd they could sell to (and wasn’t about them). But foundations and public-minded plutocrats are less bashful about their preferences and convictions, and some philanthropies may even be obligated to ensure their money advances certain policy goals. Public financing, too, long banished from polite conversation, is getting a new airing. An article last fall in the Columbia Journalism Review dusted off the topic and noted that in other countries, stand-alone systems of automatic funding have kept dying newspapers alive and made the press even feistier — more, not less, inclined to watchdog governments.
Remember, please, Prof. Wasserman is exploring ways to generate financial support for journalism beyond the dying-on-the-vine business model to which the news business is unwisely and unfortunately tethered.
He asks one fundamental question: Who gets the money?
It’s the Internet age. A great many entities and individuals have leapt into fact-gathering and topical commentary in a magnificent, worldwide surge of communicative enfranchisement. Shouldn’t they get compensated, too? Maybe the solution isn’t to escape the market, but to empower it. Modern computing offers unparalleled capacities to track and calculate. Imagine a vast menu of news and commentary offered to you ad-free for pennies per item, the charges micro-billed, added up and presented like a utility bill at month’s end. The money that journalism providers got would depend on their audience.
Right now, I enjoy The New York Times free of charge every day. And I read, also free, news stories from other Web sites. Has Prof. Wasserman proposed a workable business model that would benefit bloggers as well?
Would you pay a few pennies for each item you read on the Web, billable at the end of the month? If you visited S&R, would you pay a few cents to read the posts? More to the point, if others read your comments, should you be paid a few cents as well?
I don’t know if Prof. Wasserman’s suggestions are workable. But his commentary is worth the read. It is a new approach to a vexing problem.
And we do know this: Journalism needs a sustainable business model that secures both suitable revenue for ownership and guarantees against government interference.
Q:
Create Video from ImageSource
Is there any easy way of adding ImageSources to a stack and create a video from it?
A:
I already wrote such a class. I only have to submit my "ImageInfo", which is a System.Drawing.Bitmap. This can be created easily by using the following code:
Public Function WpfBitmapSourceToBitmap(ByVal source As BitmapSource) As System.Drawing.Bitmap
If source Is Nothing Then Return Nothing
Dim bmp As New System.Drawing.Bitmap(source.PixelWidth, source.PixelHeight, System.Drawing.Imaging.PixelFormat.Format32bppPArgb)
Dim data As System.Drawing.Imaging.BitmapData = bmp.LockBits(New System.Drawing.Rectangle(System.Drawing.Point.Empty, bmp.Size), System.Drawing.Imaging.ImageLockMode.[WriteOnly], System.Drawing.Imaging.PixelFormat.Format32bppPArgb)
source.CopyPixels(Int32Rect.Empty, data.Scan0, data.Height * data.Stride, data.Stride)
bmp.UnlockBits(data)
Return bmp
End Function
Then I did a AviClass to add frames to it and store it as a AVI file with preselected Codec (for example XVid MPEG4)
Public Class clsAviWriter
Inherits MAINInterface.TB.Imaging.Pia7.clsDspTemplate
Private cAvi As AviReaderWriter.AviFile.AviManager
Private AviStream As AviReaderWriter.AviFile.VideoStream
Private AudioStream As AviReaderWriter.AviFile.AudioStream
Private cFps As clsTbQueue
Private OldFpsDate As Date = Now
''' <summary>
''' The image object to paint graphical objects on it
''' </summary>
''' <value>descriptor of the image</value>
Public Overrides Property ImageInfo() As MAINInterface.TB.Imaging.Pia7.clsImageInfo
Get
Return Me._ImageInfo
End Get
Set(ByVal value As MAINInterface.TB.Imaging.Pia7.clsImageInfo)
Me._ImageInfo = value
Call WriteFrame()
Call Me.OnPropertyChanged(Me.Guid)
End Set
End Property
Private Sub WriteFrame()
Dim D As Date = Now
Dim Fps As Single
Me.cFps.Values.Add(D.Subtract(Me.OldFpsDate).Ticks)
Me.OldFpsDate = D
Me.cFps.Trim()
Fps = 1000 / New TimeSpan(Me.cFps.Average).TotalMilliseconds
Me.cFps.BufferSize = TB.Math.myTrim(Fps * 1, 1, 1000)
If Me.AviStream IsNot Nothing Then
Me.AviStream.AddFrame(Me._ImageInfo.Image.Clone)
End If
End Sub
Public Sub New()
Me.ClassDescription = "Write images into an avi file"
Me.cFps = New clsTbQueue(10)
End Sub
Private Sub InitializeAvi()
Dim W As String
Dim Fps As Single
Dim di As New IO.DirectoryInfo(TB.SystemMain.AppPath & "Avi\")
TB.FileSystem.CreateDirectories(di)
W = IO.Path.Combine(di.FullName, "Record_" & Now.Ticks.ToString("0") & ".avi")
Me.cAvi = New AviReaderWriter.AviFile.AviManager(W, False)
Dim Opts As New AviReaderWriter.AviFile.Avi.AVICOMPRESSOPTIONS
Opts.fccType = 0
Opts.fccHandler = 1684633208
Opts.dwKeyFrameEvery = 0
Opts.dwQuality = 0 '0 ... 10000
Opts.dwFlags = 8 'AVICOMRPESSF_KEYFRAMES = 4
Opts.dwBytesPerSecond = 0
Opts.lpFormat = 0
Opts.lpParms = New IntPtr(0)
Opts.cbParms = 3532
Opts.dwInterleaveEvery = 0
Fps = 1000 / New TimeSpan(Me.cFps.Average).TotalMilliseconds
'Dim bm1 As Bitmap
'bm1 = TB.Imaging.CreateReScaledImage(Me.pic.Image, New Size(Me.pic.Image.Width, Me.pic.Image.Height), False)
Me.AviStream = cAvi.AddVideoStream(Opts, Math.Floor(TB.Math.myTrim(Fps, 1, 50)), Me._ImageInfo.Image.Clone)
End Sub
Public Overrides Property Run() As Boolean
Get
Return Me._Run
End Get
Set(ByVal value As Boolean)
If Me._Run <> value Then
Me._Run = value
If Me._Run = True Then
Call InitializeAvi()
Else
If Me.cAvi IsNot Nothing Then
Me.cAvi.Close()
Me.cAvi = Nothing
Me.AviStream = Nothing
End If
End If
Call Me.OnPropertyChanged(Me.Guid)
End If
End Set
End Property
End Class
For more codes look here: http://www.wischik.com/lu/programmer/avi_utils.html and MSDN or http://www.codeproject.com/KB/audio-video/avigenerator.aspx
I've posted the source code to show what such a sequence can look like (the code above needs some more references which are not publicly available). You can see that you just need to initialize, add frames, store the FPS value and save it to the hard disk.
Also, if you want, you can search for DirectShow to see how it all works.
Q:
Delete text on searchbar and return to view before the search. SWIFT
I already have some items that are shown on tableview. When I do a search using searchbar, the same tableview is updated with new items.(My project today).
But If I erase what I wrote on searchbar, I need to return for the items they were before.
How can I do that ?
import UIKit
import Alamofire
class MovieViewController: UIViewController, AsyncUpdateProtocol,UISearchBarDelegate, UITableViewDelegate, UITableViewDataSource {
@IBOutlet weak var myTable: UITableView!
@IBOutlet weak var searchBar: UISearchBar!
var myArray: Array<Movie>!
var movieData: MovieData!
override func viewDidLoad() {
super.viewDidLoad()
searchBar.delegate = self
self.myTable.dataSource = self
self.movieData = MovieData(controllerUpdate: self)
self.myArray = self.movieData.data
}
func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return self.movieData.data.count;
}
func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
let cell: AnyObject = self.myTable.dequeueReusableCellWithIdentifier("Cell") as! UITableViewCell
cell.textLabel!!.text = self.movieData.data[indexPath.row].title
cell.textLabel!!.adjustsFontSizeToFitWidth = true
var url: NSURL
if((self.movieData.data[indexPath.row].poster ) != nil){
url = NSURL(string: "http://image.tmdb.org/t/p/w500\(self.movieData.data[indexPath.row].poster)") as NSURL!
}else{
url = NSURL(string: "http://developer-agent.com/wp-content/uploads/2015/05/images_no_image_jpg3.jpg") as NSURL!
}
var data = NSData(contentsOfURL: url) as NSData!
var imagem = UIImage(data: data!)
cell.imageView!!.image = imagem
return cell as! UITableViewCell
}
func searchBarSearchButtonClicked(searchBar: UISearchBar){
pesquisa(searchBar.text)
searchBar.resignFirstResponder()
}
func pesquisa(nome: String){
self.movieData.data.removeAll()
self.movieData.request(nome)
}
A:
You should use the UISearchBar delegate method textDidChange and check whether the search text is empty. If it is, reload your original data like in your viewDidLoad; here's what I mean:
func searchBar(searchBar: UISearchBar, textDidChange searchText: String) {
if searchText.isEmpty == true {
self.movieData = MovieData(controllerUpdate: self)
}
}
Q:
Django Modular Testing
I have an "ok" test suite now, but I'm wanting to improve it. What happens is that I'm having to repeat setting up (limiting models for an example) users, property, school, and city objects.
Here is an example of something I have now, which works (note: could be broken because of changes made to simplify the example, but the logic is what I'm after):
class MainTestSetup(TestCase):
def setUp(self):
self.manage_text = 'Manage'
User = get_user_model()
# set up all types of users to be used
self.staff_user = User.objects.create_user('staff_user', '[email protected]', 'testpassword')
self.staff_user.is_staff = True
self.staff_user.save()
self.user = User.objects.create_user('user', '[email protected]', 'testpassword')
self.city = City.objects.create(name="Test Town", state="TX")
self.school = School.objects.create(city=self.city, name="RE Test University",
long=-97.1234123, lat=45.7801234)
self.company = Company.objects.create(name="Test Company", default_school=self.school)
def login(self):
self.client.login(username=self.user.username,
password='testpassword')
def login_admin(self):
self.client.login(username=self.staff_user, password="testpassword")
class MainViewTests(MainTestSetup):
def test_home(self):
url = reverse('home-list')
manage_url = reverse('manage-property')
anon_response = self.client.get(url)
self.assertEqual(anon_response.status_code, 200)
self.assertNotContains(anon_response, self.manage_text)
self.login_admin()
admin_response = self.client.get(url)
self.assertContains(admin_response, self.manage_text)
def test_search(self):
url = reverse('search')
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
...more tests
As you can see the MainViewTest inherits the setUp and login functions from the MainTestSetup class. This works ok, but I have many apps and not all need to set up all models. What I've tried to do is set up a set of mixins to include things like User, School, Company only in the TestSetups that I need.
This MainTestSetup would turn into something like:
class SchoolMixin(object):
def setUp(self):
self.city = City.objects.create(name="Test Town", state="TX")
self.school = School.objects.create(city=self.city, name="RE Test University",
long=-97.1234123, lat=45.7801234)
class CompanyMixin(SchoolMixin):
def setUp(self):
self.company = Company.objects.create(name="Test Company", default_school=self.school)
class UserMixin(object):
def setUp(self):
User = get_user_model()
# set up all types of users to be used
self.staff_user = User.objects.create_user('staff_user', '[email protected]', 'testpassword')
self.staff_user.is_staff = True
self.staff_user.save()
self.user = User.objects.create_user('user', '[email protected]', 'testpassword')
def login(self):
self.client.login(username=self.user.username,
password='testpassword')
def login_admin(self):
self.client.login(username=self.staff_user, password="testpassword")
class MainTestSetup(UserMixin, CompanyMixin, TestCase):
def setUp(self):
self.manage_text = 'Manage'
This would allow a lot more flexibility for my test suite - this is only a small example. It would allow me in other apps to only include the Mixins that are necessary. For example if company was not needed, I would include just the SchoolMixin from the above example.
I believe my problem here is with inhertance of the setUp function. I'm not sure how to inherit correctly (through super, or though something else?). I've tried using super but haven't been able to get it to work. I have to admit, I'm not that great with classes/mixins yet, so any help/pointers would be much appreciated.
A:
You can simplify and reduce the amount of code you have by using 2 libraries: WebTest and FactoryBoy. You won't need these Mixins.
https://pypi.python.org/pypi/django-webtest
https://github.com/rbarrois/factory_boy
Do the change step by step:
1. Starts with WebTest so you can get rid of your login_ method (and you won't need to prepare the passwords as well). With WebTest, you can specify the logged-in user when you load a page. For instance you will replace:
self.login_admin()
admin_response = self.client.get(url)
with:
admin_response = = self.app.get(url, user=self.admin)
2. Then use factory_boy to create all the objects you need. For instance you will replace:
self.staff_user = User.objects.create_user('staff_user', '[email protected]', 'testpassword')
self.staff_user.is_staff = True
self.staff_user.save()
with:
self.staff_user = StaffFactory.create()
3. Mix it up. Get rid of self.admin. Replace it with:
admin = AdminFactory.create()
response = = self.app.get(url, user=admin)
Once you've done all that, your code is going to be a lot shorter and easier to read. You won't need these mixins at all. For example your SchoolMixin can be replaced like this:
self.city = City.objects.create(name="Test Town", state="TX")
self.school = School.objects.create(city=self.city, name="RE Test University",
long=-97.1234123, lat=45.7801234)
replaced with:
school = SchoolFactory.create()
That's because factories can automatically create related entities with "SubFactories".
Here is a complete really simple example of a test using factories: http://codeku.co/testing-in-django-1
| {
"pile_set_name": "StackExchange"
} |
High focal depth by apodization and digital restoration.
We show that by using an iterative, digital restoration algorithm (Wiener or Kalman), it is possible to improve substantially the defocused optical transfer function of a previously apodized optical system. Consequently, high focal depth can be achieved by the use of an apodizer at the recording step, and a posteriori step of digital restoration. Computer-simulated images exhibit the focal depth achieved.
(CNN) The initial US assessment of the deadly bombing in Syria that killed four Americans is that ISIS was behind the attack, two US officials said Thursday.
One official said it is believed that Wednesday's attack in Manbij was carried out by an ISIS sleeper cell.
ISIS claimed responsibility for the bombing on Wednesday. The ISIS-affiliated Amaq agency said the attack was carried out by a suicide bomber with an explosive vest.
The American deaths included two US service members, a defense contractor and a Department of Defense civilian, the US Central Command said in a statement. Three other US service members were injured in the attack.
Prior to Wednesday's attack, only two US service members had been killed in action in Syria since the start of the campaign in 2014.
Q:
Who owns the rights to Kingpin?
I used to think that Disney-owned Marvel had a pretty clear-cut claim to Wilson Fisk, featuring him in Daredevil since 2015. However, just a few months ago he appeared in Spider-Verse (brought to you by Sony), and now I'm not so sure I know who owns him. How can both studios use him?
Did someone make a deal with someone else, like how Disney let Fox change Negasonic Teenage Warhead's powers in exchange for the rights to use Ego in Guardians 2?
Is it a Quicksilver-style scenario, where Kingpin exists as part of both the rights to Daredevil and the rights to Spider-Man?
Is this all building up to a Kingpin-Verse movie, in which Peter Parker's multidimensional efforts to bring back Uncle Ben end up uniting Vincent D'Onofrio, Liev Schreiber, Michael Clarke Duncan, and Kingpig?
A:
Sony made a deal with Fox where Fox paid them for the use of Kingpin in the Daredevil movie. But when the Daredevil sequel fell through the rights to Daredevil reverted to Marvel. I think actually the rights to Kingpin may have become a bit scattered.
Kerberos does not take the bait offered by Timber Wolf. Instead, he just walks by the hero, muttering, " We'll see how far your idealism lasts when your friends get slaughtered."Then, before anyone can respond, he is out and, a heartbeat later, gone.
So, what do our heroes do now?
---Prison----
Morticus shakes his head, grinning. " They might not...yet, anyway. But others can and will...they will tear down the walls of sanity and I get to watch this whole stinking pile of shit that is humanity tear itself apart. You know, I almost wish you knew what you were up against, just to see how your mind would crack."
---Highway----
They have, indeed and the sands immediately stop any motion that still presents danger. The only vehicle not slowing down is the van which means that its stop comes very sudden. None of the passengers inside have used seatbelts...resulting in the almost comical sight of the front pane shattering under the impact of five garishly clad luchadores who slam into the sands...miraculously unharmed!
Valkyrie looks at Trissa, catching the wink. Not sure exactly what she plans to do, Valkyrie would prefer not to show any kind of dissention and simply trust the amazon. After all, if they did face off against Morticus in the future and he thought her a coward, he's be in for a rude shock.
She nods and leaves the room. Once out of the room, she looks around and she flags down an officer. "Can you direct me to someone who can tell me about the girl in the other cell?"
Gloom hovers near the genie wizard. As he speaks shadows coagulate around the luchadores to utter darkness, but with subtle manipulation, somehow TEO is still able to see. Gloom himself isn't hindered by the darkness at all. "You should give up now..."
Morticus grin turns truly predatory. " Oh, an innocent soul, then? How sweet...My employers would love her..." he stands up, moving towards the glass, the runes covering it glowing with a greenish light as they keep him at bay. " But you...you are different. Hardened...it would take so much longer to break you..."
The officer frowns as he thinks for a moment. " Oh, yes...nothing special, really. Her name's Lucy Farrows. Apparently she's got a minor magic talent and attempted to resurrect a zombie from a veteran's grave on a dare. We kept her here to protect her from possible magical backlash while we wait for her mother to pick her up. Kids...they really shouldn't mess with that stuff..." He shakes his head.
---Highway----
The two heroes are met with a salvo of insults and denials in mexican Spanish while the five brothers seem to...organize themselves without being able to see each other, doing a concerted effort to run out of the cloud, all headed in the very same direction...which is away from the voices and over the now-parked cars, jumping, sprinting and somersaulting like world-class athletes...
"Some people just have less tolerance for depravity than others." Trissa shrugged. "On the other hand, I doubt that you have what it takes to give me a bad night's sleep, much less break me." She stepped right up to the glass and stared into his eyes. "What can you do that would threaten a true Amazonian warrior's soul?"
" I could show you the true darkness that lies beneath this illusion you take for reality", Morticus hisses, apparently very much enjoying having an audience. " I could peel away layer upon layer of your very soul and drag you, screaming and bleeding into the endless wastes beyond the gates of sanity where the true inheritors of all that is lie in wait."
Gloom is slightly disappointed that they are so easily escape his summoned darkness. On his wink the darkness hardens to a leash and he tries to catch one of the fleeing Mexicans. Somehow his dark emotions seem to enhance his powers...
Apex ducked low and maneuvered through the museum, coming to the entrance. He paused a moment, to look around until he spotted someone. "Hey wolfie try to get dem on da radio again, we done here right?"
He waved his massive hand as he strode towards the pretty young officer, his long stride had him beside her side in two large steps. "Five-five-five ... uhhh ... eight-nine-four-seven ... yeah, dats it." He leaned closer grinning, "Jack want you to call him, but him too puny to ask pretty girl. Hur. Hur."
He turned his back and walked towards Timber Wolf. "Dey coming to pick us up, or Apex gotta find a taxi dat'll fit him?"
"If sacred spaces are spared the ravages of war -- make all places sacred. And if the holy people are to be kept harmless from war -- make all peoples holy." -- Norrin Radd (Silver Surfer: Requiem; J. Michael Straczynski)
Hazid frowns."Thou have chosen thy destiny sen." The sands threaten to grab all the luchadores again, but this time it rises up, creating a massive wall of sandstone blocks ahead on the highway. "Quickly Gloom, capture sem"
Using Illusions in the sand, rank 7 illusion, Three senses types, Area 125 cft, independent 6.
"In the case of Trissa and Valkyrie that probably just means they're still working the interrogations. But, with Gloom and TEO it means that they are either someplace the comms don't reach or they're in trouble.
"Let's see if we can find out which."
He activates his comm, "Watcher, if possible I need locations on Gloom and The Enlightened One. We'll need transport to them asap if you can locate them."
Here comes the Dog! Strong and brave! Here comes the Dog! The day he will save!
Q:
sklearn-LinearRegression: could not convert string to float: '--'
I am trying to use a LinearRegression from sklearn and I am getting a 'Could not convert a string to float'. All columns of the dataframe are float and the output y is also float. I have looked at other posts and the suggestions are to convert to float which I have done.
<class 'pandas.core.frame.DataFrame'>
Int64Index: 789 entries, 158 to 684
Data columns (total 8 columns):
f1 789 non-null float64
f2 789 non-null float64
f3 789 non-null float64
f4 789 non-null float64
f5 789 non-null float64
f6 789 non-null float64
OFF 789 non-null uint8
ON 789 non-null uint8
dtypes: float64(6), uint8(2)
memory usage: 44.7 KB
type(y_train)
pandas.core.series.Series
type(y_train[0])
float
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,Y,random_state=0)
X_train.head()
from sklearn.linear_model import LinearRegression
linreg = LinearRegression().fit(X_train, y_train)
The error I get is a
ValueError Traceback (most recent call last)
<ipython-input-282-c019320f8214> in <module>()
6 X_train.head()
7 from sklearn.linear_model import LinearRegression
----> 8 linreg = LinearRegression().fit(X_train, y_train)
510 n_jobs_ = self.n_jobs
511 X, y = check_X_y(X, y, accept_sparse=['csr', 'csc', 'coo'],
--> 512 y_numeric=True, multi_output=True)
513
514 if sample_weight is not None and np.atleast_1d(sample_weight).ndim > 1:
527 _assert_all_finite(y)
528 if y_numeric and y.dtype.kind == 'O':
--> 529 y = y.astype(np.float64)
530
531 check_consistent_length(X, y)
ValueError: could not convert string to float: '--'
Please help.
A:
A quick solution would involve using pd.to_numeric to convert whatever strings your data might contain to numeric values. If they're incompatible with conversion, they'll be reduced to NaNs.
from sklearn.linear_model import LinearRegression
X = X.apply(pd.to_numeric, errors='coerce')
Y = Y.apply(pd.to_numeric, errors='coerce')
Furthermore, you can choose to fill those values with some default:
X.fillna(0, inplace=True)
Y.fillna(0, inplace=True)
Replace the fill value with whatever's relevant to your problem. I don't recommend dropping these rows, because you might end up dropping different rows from X and Y causing a data-label mismatch.
Finally, split and call your classifier:
X_train, X_test, y_train, y_test = train_test_split(X, Y, random_state=0)
clf = LinearRegression().fit(X_train, y_train)
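As a quick illustration of what the coercion actually does to a placeholder value like `'--'` (the sample values here are made up for the example):

```python
import numpy as np
import pandas as pd

# A column that mixes numeric strings with the '--' placeholder
s = pd.Series(["1.5", "--", "2.0"])

# Strings that can't be parsed become NaN instead of raising ValueError
coerced = pd.to_numeric(s, errors="coerce")

# Fill the NaNs with a default so the regression can fit
filled = coerced.fillna(0)
```

After this, `filled` contains `[1.5, 0.0, 2.0]` and can be passed to `LinearRegression` without triggering the conversion error.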
Eula High School
Eula High School is a public high school located near unincorporated Eula, Texas (USA) and classified as a 1A school by the UIL. It is part of the Eula Independent School District located in western Callahan County. The high school has a Clyde mailing address, since there is no post office in Eula, and is often referred to locally as Clyde-Eula. In 2015, the school was rated "Met Standard" by the Texas Education Agency.
Athletics
The Eula Pirates compete in the following sports -
Baseball
Basketball
Cross Country
Golf
Softball
Tennis
Track and Field
State Titles
Boys Basketball
2011(1A/D1)
Softball
2006(1A)
References
External links
Eula ISD
Category:Schools in Callahan County, Texas
Category:Public high schools in Texas
The ES-1008PL is an 8-port Fast Ethernet switch with 8 PoE (Power over Ethernet) ports, designed for use in home and small or medium sized network environments. It easily connects and supplies power to any PoE-enabled devices such as wireless access points, network cameras and IP phones, as well as other Ethernet-enabled devices, for example computers, printers, and network attached storage (NAS) units, on the same network. The device's compact size, along with an included rack-mount kit for installation in a 19” cabinet and a fan-less design, makes the ES-1008PL ideal for homes and small businesses looking to easily expand their network.
Features
Power over Ethernet (PoE) Auto Detection
The ES-1008PL features 8 IEEE 802.3at Power over Ethernet (PoE) ports, which supply up to 30 watts of power per port. It can convert standard 100-240V AC power into low-voltage DC, which runs over existing Ethernet cables to supply power to IEEE 802.3at compliant network devices, including wireless access points, network cameras and IP phones. The ES-1008PL also features PoE detection, to verify whether the connected device is IEEE 802.3at compliant and supply power as needed; if it is not a PoE device, only data will be sent through the Ethernet cable.
Plug & Play, No Installation And Special Cable Required
The ES-1008PL requires no configuration. Simply plug Ethernet or PoE-enabled devices into any port of the ES-1008PL switch, and data and power can be transmitted through the existing standard Cat-5 Ethernet cables, with no additional cost for special cables.
Flexible Network Deployment And Cost Saving
The metal case design, including an additional rack-mount kit for a 19” cabinet installation, allows the ES-1008PL to be flexibly placed on a desk or in a 19” cabinet and reduces installation time and costs. It is the best solution when power outlets are difficult to install or too far from the device. Moreover, the ES-1008PL supports IEEE 802.3az Energy Efficient Ethernet, which allows for less power consumption during periods of low data activity.
Fan-Less Quiet And Compact Design
The fan-less design of the ES-1008PL ensures quiet operation, and its compact size does not take up too much desk space in home or small office environments.
M.Scott Mahaskey/POLITICO Bernie Sanders bulks up his digital operation
Sen. Bernie Sanders’ campaign for president is bulking up its digital operation.
The campaign has hired Zack Exley and two other operatives to join its digital team, Sanders aides confirmed Tuesday.
Exley is the former chief revenue officer of the Wikimedia Foundation and previously served as MoveOn.org’s organizing director during the peak of its anti-Iraq war advocacy efforts. He was also an adviser for former Vermont governor Howard Dean’s presidential campaign. Exley will be a senior adviser on the digital team.
“He’ll be working on converting the enthusiasm we’ve seen around the senator’s message into real votes by organizing with grassroots volunteers around the country,” Sanders digital director Kenneth Pennington said in an email.
Two other operatives have joined the Sanders digital team, the campaign told POLITICO. Claire Sandberg, who formerly ran a digital campaign against fracking in New York, is the digital organizing director. She will be “working with Zack to secure votes and mobilize volunteers,” Pennington said. Pinky Weitzman, formerly of the American Civil Liberties Union, is joining the campaign as the Iowa digital director.
Green Bay Packers 38 (12-0) – New York Giants 35 (6-6)
After a humiliating defeat at the hands of the New Orleans Saints, the Giants finally regrouped and seemed to find the sense of urgency and intensity that they’ve been missing since the 49ers game of four weeks ago. Despite the fire and enthusiasm they displayed, the Giants were simply overmatched at the end of the game by a dominant Packers offense and very shoddy officiating. New York tied the game in its waning moments with another touchdown and 2 point conversion on a phenomenal 2 minute drive orchestrated by QB Eli Manning. Unfortunately, a wounded, young, inexperienced and confused defense was unable to hold up for a mere 58 seconds and allowed Green Bay to erupt down the field to get into easy field goal position and put the Giants away 38-35.
Frankly New York deserved a better fate on Sunday. As has been mentioned often over in The Corner Forum, the officiating was wretched. The Giants lost a touchdown they probably should have been awarded and were victimized by a Green Bay touchdown that probably should have been overturned. Several other gaffes occurred that will be mentioned later. To make matters worse, the Giants have gotten to the point where they are now slapping duct tape on any moving part in hopes of keeping it together. It’s starting to feel like we’ve got Scottie in the engine room trying everything he can to give it more power without blowing the whole thing up. Seriously, it’s beyond laughable how badly this team has been, is, and continues to be hit with injuries.
Going to The Corner Forum these days is an exercise in holding your breath. It’s been discussed how many people, including myself, generally get all their breaking news on BBI and it’s the first place they go to in order to check up on what’s going on. This year, I dread opening The Corner Forum almost every time I go to it. The first thing I expect to see is a “sticky” thread with the title “So and so is out for year with X.”
Sure enough, we got another one this week when reserve OT and blocking TE Stacy Andrews was taken to the hospital on Thursday night where it was found he had blood clots that had traveled from his legs to his lungs, a life-threatening situation. We also found out that the Giants were going to be without the services of LB Mark Herzlich and C David Baas. The Giants had very recently signed street free agents Chase Blackburn and Will Blackmon (Blackburn just this week) and they both played significant time on Sunday.
So let’s look at the situation on offense:
Mario Manningham out, Ramses Barden in.
David Baas out, Kevin Boothe in at center.
With David Diehl continuing at left tackle, Mitch Petrus at left guard.
Stacy Andrews out, Jim Cordle in at blocking tight end.
Henry Hynoski returned at fullback.
Now on defense:
Michael Boley returned.
Osi Umenyiora out.
Mark Herzlich out.
Chase Blackburn, signed off the street earlier in the week took over early in the game for Greg Jones.
Aaron Ross, Prince Amukamara and Kenny Phillips were all in and out of the game leaving Will Blackmon, another player out of football for most of the year, and rookie S Tyler Sash receiving significant playing time.
The Giants have been absolutely hammered with injuries this season, and for periods of time they’ve been able to persevere but they’ve been unable to overcome them in the long run. That’s not an excuse for the lack of intensity and resolve shown in games like against the Saints. The Giants are asking an awful lot out of career backups and street free agents while continuing to fight for a playoff spot, and it may be asking a bit too much.
Statistically, this game was quite close. Three things went against the Giants: First, they allowed too many long third down conversions, 7 of 12 overall. Second, though the Giants had six scoring drives to Green Bay's five, they allowed a defensive touchdown and twice ended up trading touchdowns for field goals. Finally, though the teams traded touchdowns off turnovers, the Giants' second turnover just before the half ended a drive that could have put points on the board.
Offense
The sleeping Giants offense roared to life on Sunday with a renewed potency in their rushing attack and hitting several big plays in the passing game, including a 67 yard touchdown on the first drive of the game. As noted, the Giants scored on six drives Sunday, though one was off a very short field. The Giants converted 3 of 4 green zone chances into touchdowns, and by many accounts probably should have had the 4th as well but TE Jake Ballard was ruled out of bounds in the end zone though it appeared he was in.
New York had their problems, but considering the upheaval along the offensive line they were relatively minor. The Giants had just two 3 and outs on the day, but also gave up the ball after 3 plays on an interception and 4 plays on a fumble. Three of the Giants’ touchdowns were on quick strikes of 2 plays, 3 plays and 5 plays.
As Antrel Rolle likes to say, even with a couple of gaffes, at the end of the day the offense got their running game untracked, were very effective with their downfield passing, and scored 35 points. That should be enough to beat any team in the league with the possible exception of Green Bay. It turned out that the offense had to play the perfect game, and they nearly pulled it off.
Quarterback
It’s unfortunate that most pundits will point to Eli Manning and the pick six and the fumble near the end of the half as the reasons the Giants lost. Nothing could be further from the truth. Manning once again hauled the team onto his back and brought them within 58 seconds of overtime against the undefeated, and for the most part this season, unchallenged Packers.
On Sunday Manning completed 23 of 40 passes for 347 yards, 3 touchdowns and the fateful interception. According to the game log, Manning was hit just 3 times. That’s inaccurate. Manning was under constant pressure and according to the TV analysis, was hit 15 times before the 4th quarter even began. Manning was sacked just once, however. Manning had 8 of his 40 passes broken up by Green Bay defenders, too.
On the year, Manning is now 4th in the league with 3,705 yards leaving him on pace to throw for 4,940 yards on the year. Manning ranks 6th in passer rating, 5th in touchdowns, 9th in completion percentage, 1st in completions of more than 40 yards (13), tied for 3rd with completions of more than 20 yards (50) and 7th in completions for a 1st down. Eli’s Total QBR ranking for this week was the worst of his season at 45.6, but he still ranks at 9th overall this year at 62.3. Again, these are the numbers for the entire NFL. If that’s not elite, then there is no such thing. By the time this season is said and done and provided he stays healthy, Manning will set career highs in every major category, and may very well beat his yardage total by 1,000 yards. Simply an amazing year for #10 and when you consider the tepid running game and inconsistent offensive line he’s been playing with, you can say it’s astounding.
Running Backs
The Giants welcomed the return of HB Ahmad Bradshaw on Sunday, and the impact was immediate. On the very first play of the game Bradshaw took a swing pass from Manning and turned up field for what should have been about a 3 or 4 yard gain, but Bradshaw made a move and took on CB Charles Woodson, gaining 7 total yards. Bradshaw was fired up, and so was the crowd. While Bradshaw didn’t pile up huge numbers, carrying 11 times for 38 yards (3.5 ypc) and catching 2 passes for 9 yards, it was clear that his presence was a huge catalyst for the offense. It was also clear that New York did not want to overwork Bradshaw as two bread and butter plays for him, the bubble screen, were given to HB D.J. Ware. Neither worked, and Ware gained just 3 yards on 1 carry. He did have an extremely important reception on the final drive for 12 yards, resulting in a 1st and goal at the 2 yard line on a 2nd and 7 play.
It seemed that HB Brandon Jacobs benefited most from Bradshaw's return, almost like he missed his little buddy and was like a kid in a candy store. Speculation is that part of Jacobs' lack of production while Bradshaw was out was due to his having to run plays designed for Bradshaw's style. There may be a bit of truth to that, as during the New England game Jacobs was effective running down hill off direct handoffs on runs in the A gaps. Jacobs has trouble when he begins to move laterally, and this week again he was slamming the line in the A gaps for good chunks of yardage. On the day, Jacobs only carried 8 times for 59 yards, a 7.4 ypc average. Jacobs did tweak a hamstring and that may have limited his carries, but the Giants were also in catch up mode for most of the final three quarters and only ran the ball 20 times. Jacobs continued his dominant blitz pickups and chips out of the backfield. Some of his chips are so violent it's not exactly a correct name for them. He doesn't chip, he hammers.
FB Henry Hynoski was solid in the running game, though he still tends to get blown up once in a while. He was instrumental in opening several holes at the second level for Jacobs.
Wide Receivers and Tight Ends
The Giants receiving corps once again had a dominant game. It seems that this write up is a repeat of the previous game every week. WR Hakeem Nicks, despite injuries to his ribs and ankle, had a terrific game finishing with 7 catches for 88 yards and 2 touchdowns. Nicks almost had a 3rd but was stopped at the 1 yard line prior to Jacobs’ touchdown run. Nicks made the play of the game for the Giants, hauling in a 51 yard pass from Manning that set up the Giants’ 3rd touchdown. New York had just gone down 28-17 midway through the 3rd quarter and it looked like the Packers might run away with the rest of the game when on the 1st play of the next drive Nicks and Manning made the hookup that put them in position to get right back into the contest.
On the other side, WR Victor Cruz continues his amazing season catching 7 balls for 119 yards. Cruz is now the 4th leading receiver in terms of receiving yardage in the entire NFL. This from a kid who did not catch a single pass in a regular season game until week 2 of this year. If he can keep up the yardage pace (he has had five 100+ yards performances this season, and had another for 99), he will break Amani Toomer’s single season Giants yardage record. Not to be lost is the fact that Cruz has become a proficient blocker down field. In case you missed it, valued Corner Forum contributor mort christenson started a thread about Cruz on Monday that’s worth a read.
WR Ramses Barden wasn’t targeted often on Sunday and only had 1 catch for 9 yards.
The tight ends accounted for 114 total yards on Sunday, as Travis Beckum caught a 67 yard touchdown pass in which he serpentined his way for the last 20 yards to the end zone giving the Giants the early lead. Beckum, the well known insurance policy, was wide open on a nice move to beat S Charles Peprah and never broke stride after the catch. It’s something all of us have been waiting to see for quite some time. If Eli ever finally develops a trust in Beckum, it’s certainly possible he can become the weapon many people envision he can be. That said, he must work to become a more consistent receiving threat.
Jake Ballard had three catches on six passes thrown in his direction. He easily could have had 5, as an apparent touchdown was not awarded following a booth review that ostensibly didn’t show enough to overturn the call on the field. Ballard also never saw a perfectly thrown pass to the inside on a skinny post that would have resulted in a huge play.
Offensive Line
Despite the fact that Kevin Boothe found out he would be moving to center and backup guard Mitch Petrus found out he'd be getting a start at left guard just hours before the game, the line played surprisingly well. The run blocking was better than it's been in several games, and other than RT Kareem McKenzie again giving up too many pressures, the pass blocking stood up relatively well. McKenzie appears to be the textbook example of staying with a player a year longer than maybe the team should have. Recall that Kareem graded out as the top RT in the league last season. That isn't going to be even close to the case this season, as he's been attacked and beaten consistently in the passing game all year long. On the final touchdown pass to Nicks, Manning nearly didn't get the pass off and took a wicked hit from OLB Clay Matthews. On the play, McKenzie completely missed the block giving Matthews a free run at Eli and if it hadn't been for the quick pass, Manning would have been sacked. Incidentally, many people were clamoring for a run on that play in order to run clock even though it was 3rd down. Moreover, the original play call was a run that Eli said he checked out of at the line of scrimmage. It's important to point this out because, as is usually the case, too much emphasis is placed on what people believe the play call from OC Kevin Gilbride is and forget that Eli has a lot of leeway to change the play at the line of scrimmage.
One other negative to point out is that C Kevin Boothe gave up on the Manning fumble at the end of the first half despite no whistle being blown. The Giants stress ball security from the word go in training camp and it's just not smart football to allow the ball to sit on the ground, whistle or no whistle, at any time. Just pick the ball up and let the refs determine whose ball it is. The Giants would have retained the ball at midfield with 26 seconds and a time out left to continue the drive had Boothe simply picked up the ball. Incidentally, after review, it was inexcusable that the refs didn't take at least 4 seconds off the clock, which stopped because the clock operator incorrectly assumed that Manning had thrown an incomplete pass.
Defense
It’s hard to lay a lot of blame at the feet of the defense for this loss, despite the fact that they gave up a ton of yardage and 31 points to the Packers. Green Bay has arguably the best offense in football, and coming into this game they had won 16 straight games, most of them due to their offensive prowess. They have the #1 QB in football, an incredible array of receivers including one of the best tight ends in the game, and they have a solid running game.
The defense did a great job of containing the running backs all day, the longest run by a back being only 8 yards. The Packers running backs gained just 57 yards on 24 carries, a paltry 2.4 ypc average. That’s ‘getting it done’, and the Giants haven’t done that very well all season.
The problem was that due to playing a tight man under defense for most of the game, the Giants lost contain on QB Aaron Rodgers 4 times for 32 total yards. Three of those scrambles kept drives alive. The first was an 11 yard scramble on 3rd and 10 from the Giants 43 yard line. That call should have been challenged, as Rodgers began his slide short of the line to gain was awarded an extra 3 yards and the first down. It wasn’t caught by the announcers, but it was clear as day, especially on the replay. It would have been 4th down had the play been challenged. The second was a 2nd and 9 scramble later in the drive that netted 13 yards and another 1st down. These plays led to Green Bay’s second offensive touchdown. Later, Rodgers hurt the defense again on the second Green Bay drive of the second half for 6 yards and a 1st down on a 3rd and 5 play.
Overall, the Giants allowed too many 3rd down conversions, 7 of 12 (58%), and also allowed Green Bay to convert 4 touchdowns in 5 trips to the green zone. The 5th trip resulted in the game-deciding field goal, so ultimately the Packers were 5 of 5.
One of the 3rd down conversions was a ref job, as rookie LB Jacquian Williams was called for an extremely questionable illegal contact penalty on a play in which the Giants sacked Rodgers. (Incidentally, three Giants penalties led to 1st downs for the Packers.) The drive stalled, but it kept the defense on the field and at the end, fatigue was an issue as the Packers ran 16 more offensive plays than the Giants and had 6 more minutes time of possession.
Another reason why it’s hard to get too upset at the Giants defense is that once again they were extremely affected by injuries. Street free agents Chase Blackburn and Will Blackmon (who said he could not remember the last time he played cornerback) saw extended time, and the Giants were without Osi Umenyiora and lost S Kenny Phillips during the game. In their place DE Dave Tollefson and S Tyler Sash saw significant action. Many people hate to use the injury ‘excuse,’ but realistically it’s asking an awful lot of a defense this banged up to hang with the best offensive team in the league.
Front 7
The defensive line, minus Umenyiora, played one of their most inspired games of the season. DT Linval Joseph nearly single handedly took the Green Bay running game out of play on Sunday. Joseph had an astounding 9 solo tackles (1 for a loss) to lead the Giants. Jacquian Williams, despite his ticky tack penalty that wiped out a sack, also had a solid game, making 7 tackles. Williams had two passes defensed and one was nearly an interception, which may have been why he aggressively attacked TE Finley and went for the interception on the final and fateful drive of the game. Had Williams simply corralled Finley and escorted him out of bounds, the 24 yard gain is limited to just 7 or 8 and the Giants wouldn't have been on their heels. No one likes to penalize aggressive play, but sometimes discretion is the better part of valor and in that case the play was to keep it from becoming a big gain.
DE Justin Tuck seems to be coming out of the season long injury funk he’s been in, as he had a dominant first half in which he got significant pressure on Rodgers, forcing him to get out on the edge. Unfortunately, there was little opposite side support and Rodgers is deadly on the run. Tuck was in on 5 tackles and registered his first sack in ages. Tuck also had another QB hit and stuffed a run. DTs Chris Canty and Rocky Bernard did yeomen’s work in the middle, clogging the lanes and helping to stifle the Packers running game. Canty was also in on 6 tackles, and had 1 QB hit.
Jason Pierre-Paul continues his monster season, though his numbers weren’t great. He knocked down 2 Rodgers’ passes at the line and hit him twice, and was chasing him down all game. Again, there just wasn’t much contain from the opposite side when JPP got his pressures and Rodgers ran away from him. The final member of the line, Dave Tollefson, also recorded a sack. In all, the Giants line registered 5 of the 6 hits on Rodgers on Sunday. It’s also of significance to note that the Giants had eight tackles for a loss on Sunday.
The Giants’ linebackers played well. Michael Boley’s tackles were down, but he was coming off a hamstring injury and he didn’t look like he was at full strength. Greg Jones started, but he was quickly replaced by Chase Blackburn who played like a man possessed all game. Blackburn had 2 passes defensed and forced the only Packer turnover, an interception, that led to a Giants touchdown. He also had 5 tackles. Blackburn certainly impressed considering he hadn’t played a snap all season.
Secondary
The Giants had their hands full going against Green Bay’s potent array of receivers, but they got the help from their defensive line they’d sorely been missing the past few weeks. New York didn’t play a lot of zone, and infrequently used the ‘three man rush’ widely despised over in The Corner Forum. Interestingly, that formation worked early but they were burned on it down in the green zone after Rodgers had an eternity to find Donald Driver take the scenic route to the inside pylon for a touchdown. New York had 6 passes defensed on Sunday, but only 1 came from a defensive back. The secondary didn’t break up a single play other than the one in which Kenny Phillips was hurt. They were also helped out by a bevy of Green Bay drops, but frankly at least half the drops were due to the heavy pressure on Rodgers that led to some slightly off target throws. A few, however, were flat out drops.
Corey Webster had a decent game, and as Troy Aikman opined during the broadcast, there is no way he can stay with a receiver, in this case Donald Driver, when the QB has 8 seconds to find him. Just like a QB, a DB has an internal clock going off in his head after a certain amount of time elapses and it’s nearly impossible to stay locked on to a receiver that long. That said, Coughlin seemed to indicate on Monday that it was Webster who blew the coverage on the Packers’ touchdown where the receiver was left wide open.
On this play, the Packers were lined up with two receivers split wide and a TE to Rodgers' right. The back was in offset on the strong side, between the RT and TE. Donald Driver was in the slot. The Giants countered this look with Amukamara covering the split receiver on the weak side, Webster covering the flanker on the strong side, and Ross covering down on Driver in the slot. The rest of the Giants defense was in a cover 2 shell with 4 linemen and 3 linebackers in the box. Before the snap, Driver went in motion to the strong side and set in the slot, and Aaron Ross followed him. At the snap, Webster took Nelson, who had run a quick in, and was on him like a blanket. Rolle began to move to the inside where Finley split between Ross and Grant. Ross was standing in the flat as if he were covering a short zone. Driver ran right by him and posted to the end zone. On the replay, it's clear as day that Rodgers' first read was Finley, and if Rolle hadn't covered down on him it would have been an easy touchdown despite Aikman's assertion that Grant "didn't need the help." Rodgers quickly came off Finley and found Driver all alone. Aikman broke down the play further and suggested that the Giants were playing man coverage with safety help inside, and Ross was probably at fault. But Coughlin said the outside corner (who was Webster) was supposed to "fall off," so he was probably supposed to be in a deeper zone.
Prince Amukamara was burned badly on the touchdown to Jennings that many believe should not have been. Jennings got Prince moving to the outside and then just left him in the dust with a skinny post to the inside. To Prince’s credit, he recovered to knock the ball out of Jennings’ hands and by all angles it appeared that it should not have been a touchdown.
How do you blame Blackmon, a guy who hasn't played meaningful football in nearly a year and can't remember the last time he played cornerback, for allowing Jordy Nelson to catch a 27 yard pass on that last drive? Yes, he lost Nelson, and it's puzzling why Ross wasn't locked up on the outside instead of him. But the fact is, Aaron Rodgers threw a great pass when he was a microsecond away from being swallowed up by three Giants defenders. It was just one of those mindboggling throws that you simply have to tip your hat to. It's the fate the Giants have been dealt this year. It's hard to keep count of how many defensive backs are missing time or out for the year.
Special Teams
The special teams play was solid, as the Giants and Packers basically matched kick for kick and field position for field position all day. Each team’s starting field position average was their own 27 yard line. New York averaged 25 yards per kick return while allowing an average of 25.3. The Giants punted 4 times, allowing just 2 returns for a total of 6 yards. In the punt return game, the Giants did nothing with 3 fair catches and 2 other Green Bay punts downed.
K Lawrence Tynes hit two field goals, one for 38 yards and the other for 50. The Packers missed one field goal.
Coaching
It was talked about all week. Giants teams simply do not quit on HC Tom Coughlin despite adversity and that was evident again on Sunday as the Giants played with the most emotion and intensity that they’ve shown in weeks. It’s hard to question anything the Giants coaches attempted, but frankly these odd challenges are starting to become worrisome. Those time outs and not having any challenges late in the game could come back to haunt the Giants when they need them most. The Ballard challenge seemed futile from the get-go, but the still pictures seem to prove he was in bounds and it should have been a touchdown. The challenge on the catch along the sidelines, however, didn’t have a chance to get overturned and it was obvious from the live shot, let alone the replay.
As for DC Perry Fewell, there has been some discussion that he’s coaching scared. That could be, but it would probably be better to say he’s playing cautious and it’s probably due to not trusting the newcomers and rookies. That has got to stop. It would be prudent to just turn these guys loose and say “get to the ball carrier.” It’s proven over and over in football that you cannot play tentative, you have to dictate the pace, you have to attack, and you have to play fast. You cannot do that waiting to ‘read’ the play and then react. Frankly, it’s time to go for broke. Seriously, what does he have to lose?
Final Thoughts
I didn’t think the Giants had a prayer in this game. I believed that New York could move the ball well and score against Green Bay, but I didn’t think they’d score 35. The defense, though they performed better than my expectations, gave up too many 3rd downs and could not stop the Packers in the green zone. The swapping of field goals for touchdowns ultimately decided the game.
That said, it was impressive to see this team fight. Now I have hope that they will go to Dallas and kick some Cowboy ass.
Part of the USA Today Sports Media Group
BigBlueInteractiveSM provides news, analysis, and discussion on the New York Football Giants. The site is owned and operated by Big Blue Interactive, LLC. If you
have any questions or comments about this website, please see our contact information page.
Q:
How to get header values in .csv file generated in non-GUI mode of JMeter
I am using Apache JMeter version 4.0.
I have created a JMeter script in GUI mode. Using the below steps I have executed the JMeter script. It generates reports in a .csv file.
Steps to execute the Script
1.Open command prompt
2.Move to the JMeter bin folder.
3.Execute the below command
C:\apache-jmeter-4.0\apache-jmeter-4.0\bin>jmeter -n -t C:\apache-jmeter-4.0\apache-jmeter-4.0\bin\examples\Post_call_24_FirstStep_10.jmx -l C:\apache-jmeter-4.0\apache-jmeter-4.0\bin\examples\CSVDATATest.csv
jmeter.properties file values
#jmeter.save.saveservice.print_field_names=true
It generates reports in a .csv file
A:
JMeter will only print the headers in the .jtl results file if the file:
Does not exist
Or it is empty
If you have a single line there - JMeter will not add any header, it will just append new results to the existing file.
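In other words, a stale non-empty results file suppresses the header row. A quick pre-run cleanup guarantees the file is recreated from scratch — a POSIX-shell sketch (the filename matches the one in the question; adjust the path to your setup):

```shell
# Simulate a leftover, non-empty results file from a previous run
RESULTS="CSVDATATest.csv"
printf 'old results without headers\n' > "$RESULTS"

# Delete it only if it exists and is non-empty, so the next JMeter run
# starts with a fresh file and writes the header row
[ -s "$RESULTS" ] && rm -f "$RESULTS"

# Confirm the stale file is gone
[ -e "$RESULTS" ] || echo "results file cleared"
```

On Windows the equivalent is simply `del CSVDATATest.csv` before invoking `jmeter`.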
So I would suggest taking the following steps:
Add the -f command-line argument to your command; this way JMeter will delete the previous results and create a brand new file containing the results of the current run with (hopefully) headers generated. The full command line just in case:
C:\apache-jmeter-4.0\apache-jmeter-4.0\bin>jmeter -f -n -t C:\apache-jmeter-4.0\apache-jmeter-4.0\bin\examples\Post_call_24_FirstStep_10.jmx -l C:\apache-jmeter-4.0\apache-jmeter-4.0\bin\examples\CSVDATATest.csv
If there still are no headers, add one more command-line argument so you will be totally sure that the property is set: -Jjmeter.save.saveservice.print_field_names=true. Note that the -t option must be immediately followed by the test plan path, so the -J argument goes before it. Full command line just in case:
C:\apache-jmeter-4.0\apache-jmeter-4.0\bin>jmeter -f -n -Jjmeter.save.saveservice.print_field_names=true -t C:\apache-jmeter-4.0\apache-jmeter-4.0\bin\examples\Post_call_24_FirstStep_10.jmx -l C:\apache-jmeter-4.0\apache-jmeter-4.0\bin\examples\CSVDATATest.csv
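Instead of passing the property on every run, it can also be enabled persistently in user.properties, which JMeter reads from its bin directory and which overrides jmeter.properties. A sketch (a local file stands in for the real `<jmeter>/bin/user.properties` here):

```shell
# Stand-in for <jmeter>/bin/user.properties in this illustration
PROPS="user.properties"

# Append the setting; JMeter picks it up on the next start
printf 'jmeter.save.saveservice.print_field_names=true\n' >> "$PROPS"

# Verify the property is present
grep -q '^jmeter.save.saveservice.print_field_names=true$' "$PROPS" && echo "property enabled"
```

With this in place, neither the -J argument nor editing jmeter.properties is needed, and the setting survives JMeter upgrades of the main properties file.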
More information:
Full list of command-line options
Configuring JMeter
Results File Configuration
Apache JMeter Properties Customization Guide
Overriding Properties Via The Command Line
Introduction {#s1}
============
Poised at the interface of immunity and coagulation, platelets express a plethora of surface molecules and receptors and carry granules packed with hundreds of biologically active products. Platelets arise from megakaryocytes, which in turn differentiate from pluripotent hematopoietic cells restricted to the bone-proximal osteoblastic niche in the bone marrow ([@B1], [@B2]). Proplatelets are released from this specialized site into the circulation, continue to mature, ultimately releasing mature platelets ([@B3]).
Platelets are undoubtedly critical for hemostasis, in a large part by supporting blood coagulation. Upon activation, platelets expose negatively-charged phospholipids on the outer leaflet of their plasma membrane, providing an ideal surface for the assembly of coagulation factors complexes, such as the VIIIa-IXa complex and the Xa-Va complex. Moreover, factor XI and thrombin are brought into close proximity through interactions with the glycoprotein (GP) Ib/V/IX complex on the surface of platelets, facilitating factor XI cleavage by thrombin. This not only sustains the coagulation cascade, but also overcomes coagulation arrest in the presence of TFPI (inhibitor of the extrinsic pathway) ([@B4]).
Platelets are equipped with numerous immune receptors. For instance, signaling through TLRs 2/6 and 1/2 triggers platelet activation, marked by increased expression of surface CD62P (P-selectin), degranulation and aggregation ([@B5]--[@B7]). TLR4 engagement was shown to induce platelet aggregation and interaction with leukocytes as well as affect CD62P expression in a ligand-dependent manner ([@B8]--[@B10]). Moreover, human platelets have been shown to secrete antimicrobial peptides targeting both bacteria and fungi in response to thrombin, a key enzyme in the coagulation cascade ([@B11]). Platelets have also been increasingly recognized for their role in cellular recruitment. In that sense, platelets have been shown to serve as a "landing pad" in endothelial beds devoid of adhesion molecules, such as the brain ([@B12], [@B13]). In a model of hepatitis, CD8+ T cells were also shown to preferentially dock onto platelets adherent within liver sinusoids rather than adhering to endothelial cells themselves, suggesting a role for platelets in adaptive immunity ([@B14]).
The active role of platelets in both coagulation and immunity hints at an evolutionary link to the central immune cells of invertebrates, hemocytes. Hemocytes not only provide host defense through secretion of microbicidal peptides and phagocytosis, but also through coagulation of the hemolymph. In these invertebrate organisms, clot formation is a potent host defense mechanism as it isolates and contains the infectious agent ([@B15]--[@B17]). This ability to sequester and contain pathogens undeniably resembles the role of neutrophil extracellular traps (NETs), an immune effector mechanism of higher vertebrates. The complex interplay between platelets and neutrophils, which is most likely a long-evolving relationship, will be discussed in this review, with a focus on NET-driven coagulation.
Platelet Dependent Neutrophil Recruitment and Activation {#s2}
========================================================
The interactions between platelets and neutrophils are orchestrated by both their surface and secreted molecules, with the former allowing for physical interactions between these cells. Activated platelets express CD62P, which binds to P-selectin glycoprotein ligand-1 (PSGL-1) on the surface of neutrophils ([Figure 1A](#F1){ref-type="fig"}) ([@B21], [@B22]). Alternatively, either GPIb or the integrin αIIbβ3 on platelets interacts with the integrin αMβ2 on leukocytes either directly, or through fibrinogen as a bridging molecule ([@B23]--[@B25]). Secreted molecules, such as cathepsin G produced by activated neutrophils, can disrupt these interactions through cleavage of GPIb and PSGL-1 ([@B26]). Moreover, there is increasing evidence of platelet-derived products modulating neutrophil recruitment, activation and function. For instance, CD40L secreted by platelets has been shown to upregulate integrin expression on neutrophils ([@B27]). Serotonin and CXCL4 have also been implicated in platelet-dependent neutrophil recruitment in models of abdominal inflammation and acute pancreatitis ([@B28], [@B29]).
![Platelets-neutrophil interactions and NETs. **(A)** Key adhesion molecules involved in platelet-neutrophil interactions. These interactions not only provide mechanisms of cell attachment but may also trigger intracellular signaling, promoting cell activation, resulting in the upregulation of additional adhesion and effector molecules. Effector molecules from both cells, such as cathepsin G from neutrophils, in turn modulate neutrophil-platelet physical interactions through cleavage of PSGL-1 and GPIb. **(B)** Effect of NET inhibition (PAD4^−/−^ mice) or disruption (DNase treatment) in animal models of **(Bi)** endocarditis ([@B18]), **(Bii)** bacterial sepsis ([@B19]), and **(Biii)** bacterial pneumonia ([@B20]). Overall, targeting NETs is associated with reduced inflammation and organ damage; however, this effect has been shown to favor bacterial dissemination.](fcvm-06-00085-g0001){#F1}
Platelets mediate leukocyte recruitment via two main mechanisms: (a) by serving as a docking site for immune cells along the endothelium surrounding the inflammatory focus and (b) through secretion of chemoattractants. The extent to which platelets promote cellular recruitment appears to be tissue- and model-dependent. For instance, neutrophil infiltration into the peritoneal cavity, skin and brain in response to LPS was shown to be platelet-dependent whereas in the lung, a compensatory mechanism characterized by the upregulation of CXCL1 and CCL5 was able to overcome platelet depletion ([@B30]). In a model of *Pseudomonas aeruginosa* pulmonary infection, however, platelet depletion was shown to reduce neutrophil infiltration in the lung ([@B31]). Similarly, platelet-driven neutrophil recruitment to the colon and kidney has been demonstrated in models of dextran sodium sulfate (DSS)-induced acute colitis and cecal ligation and puncture (CLP) ([@B32], [@B33]). In these studies, platelet depletion was shown to improve clinical and histopathological scores, whereas in the aforementioned model of *P. aeruginosa* pulmonary infection, platelet depletion led to increased bacterial dissemination and mortality ([@B31]--[@B33]). These differences further emphasize the complexity of the interplay between platelets and other immune cells, such as neutrophils.
Indeed, platelets are not limited to providing a port-of-entry to neutrophils into sites of tissue insult. Platelets have been shown to directly stimulate the production of NETs through the process of NETosis ([@B18], [@B34]--[@B37]). In turn, NETs amplify platelet activation, aggregation and thrombin activation, and all three act in synergy to promote intravascular coagulation in sepsis ([@B19], [@B38], [@B39]). Evidence supporting the deleterious effects of NET-induced coagulation in infectious diseases will be discussed in the next section.
Platelet-Driven NETosis {#s3}
=======================
Unsurprisingly, platelet and neutrophil interactions are greatly increased during inflammatory responses. These interactions are, for the most part, initiated by soluble mediators, which directly activate these cells ([Table 1](#T1){ref-type="table"}). Co-incubation of healthy platelets and neutrophils with plasma from septic patients has been shown to promote platelet adhesion to neutrophils in a TLR4-dependent manner, a result similar to what is observed following co-incubation of platelets, neutrophils and LPS ([@B34], [@B36]). Moreover, LPS-induced intravascular NETosis and trapping of *Escherichia coli* in NETs has been shown to be augmented in the presence of platelets ([@B34]). Platelet-driven NETosis has been observed in the presence of all classic platelet agonists (i.e., thrombin, ADP, collagen, arachidonic acid) as well as several TLR ligands; however, in these models, NET formation does not occur in the absence of platelet activation ([@B35]--[@B37]). Hence, there is consensus that platelets must first be activated in order to induce NETosis *in vitro*. CD62P is also required for platelet-induced NETosis, as CD62P^−/−^ platelets have been shown to fail to promote the release of NETs, whereas overexpression of platelet CD62P enhanced phorbol 12-myristate 13-acetate (PMA)- and ionomycin-induced NETosis ([@B35]). The role of other surface molecules involved in platelet-neutrophil interactions, GPIb/IIa, CD11b (integrin α-chain M), and CD18 (integrin β-chain 2), remains debatable, as some studies have shown these molecules to be dispensable ([@B37]) while other models have indicated a clear need for integrin-mediated platelet adhesion in the induction of NETosis ([@B69]). Although there is strong evidence that platelet-neutrophil adhesion plays a central role in platelet-induced NET formation, physical interactions between platelets and neutrophils may not be absolutely required, as activated platelets are known to shed CD62P.
Indeed, neutrophils, in the presence of *Streptococcus mutans* and soluble CD62P (sCD62P), have been shown to produce NETs, thus indicating that direct interactions are dispensable in some situations, at least *in vitro* ([@B18]).
######
Platelet molecules that modulate neutrophil activation.
---------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Factors stored in granules **Adhesive glycoproteins:** P-selectin\*([@B27], [@B40]), Fibrinogen ([@B41]), vWF\*([@B36], [@B42], [@B43]), Fibronectin ([@B44]), Thrombospondin ([@B44])
**Coagulation factors:** Protein S ([@B45]), Factor XI ([@B46])
**Mitogenic factors:** PDGF ([@B47]), TGF-β ([@B48]), EGF ([@B49])
**Angiogenic / Vasoactive factors:** VEGF ([@B50]), PF4 inhibitor ([@B51]), Serotonin ([@B52])
**Chemokines:** CXCL7 ([@B53]), CXCL4\*(PF4) ([@B54], [@B55]), CXCL1 (GROα) ([@B56], [@B57]), CXCL5\*([@B58]), CCL5\*(RANTES) ([@B59], [@B60]), CCL3 (MIP1α) ([@B61])
Unknown location CCL7(MCP3) ([@B56]), IL1β ([@B62]), HMGB1\*([@B63]), Defensins\*([@B11])
Plasma Membrane Thromboxane A2\* ([@B36]), PAF ([@B64]), CD40L ([@B9]), TREM-1 ligand ([@B65]), αIIbβ3 Integrin\*([@B66], [@B67]), GPIb\*([@B36]), ICAM2 ([@B68]), P-selectin\*([@B35])
---------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*vWF, von Willebrand factor; PDGF, platelet derived growth factor; TGF-β, transforming growth factor β; VEGF, vascular endothelial growth factor; PF4, Platelet Factor 4; IL1β, interleukin 1β; HMGB1, high mobility group box protein 1; PAF, platelet activating factor; TREM-1, triggering receptor expressed on myeloid cells 1; GPIb, glycoprotein Ib; ICAM2, intercellular adhesion molecule 2. Factors known to be associated with NET production by neutrophils are denoted with an\**.
In addition to CD62P/PSGL-1 signaling, other mediators have been shown to be involved in triggering NET release. For instance, antibody-mediated blockade of platelet-derived high-mobility group box 1 protein (HMGB1) has been shown to inhibit NET formation *in vitro* ([@B37]). In fact, NET release was inhibited in the presence of anti-HMGB1 to a greater extent than in the presence of anti-CD62P. Moreover, HMGB1 was shown to be required for activation of autophagy pathways, which are required for NETosis, in a manner dependent on RAGE, a key receptor for HMGB1 ([@B37]). A role for thromboxane A2 (TXA2) has also been demonstrated in platelet-driven NET release. In a study by Caudrillier et al. ([@B66]), activated, but not resting, platelets were shown to induce NET formation *in vitro*. This process was shown to be dependent on TXA2 receptor-mediated signaling, which activates the MAPK pathway ([@B66]). Moreover, platelet-derived β-defensin 1 was shown to induce NET formation in a ROS-dependent manner ([@B70]). Human platelets store β-defensin 1, an antimicrobial peptide primarily expressed by epithelial cells, in extragranular cytoplasmic compartments. Thus, β-defensin 1 is not released during classic platelet degranulation stimulated by agonists such as thrombin or platelet activating factor (PAF). Instead, β-defensin 1 was shown to be released when platelets were stimulated in the presence of *Staphylococcus aureus*-derived α-toxin but not LPS ([@B15]). This α-toxin-platelet-β-defensin 1 axis could represent a novel mechanism by which platelets directly induce NETs in response to Gram-positive infections. The generation of NETs, however, was only demonstrated in the presence of purified β-defensin 1 ([@B70]). Co-incubation of α-toxin, platelets and neutrophils could provide more conclusive evidence of this putative interplay.
Effects of NETs on Platelets and Coagulation {#s4}
============================================
The downstream effects of platelet-induced NET release have also been studied. Activated, but not resting, platelets and neutrophils were shown to result in NET release and increased monolayer permeability in LPS-stimulated endothelial cells *in vitro* ([@B66]). Importantly, NETs also affect platelet function, and direct evidence demonstrates that the platelet-NET axis is by no means a one-way road. In a report by Elaskalani et al. ([@B38]), cell-free NETs (collected from PMA-stimulated neutrophils) were incubated with human platelets, where the presence of NETs alone (with no other platelet agonists added) was shown to promote platelet aggregation, secretion of ATP and ADP, and increased expression of CD62P and phosphatidylserine (PS) on the surface of the platelets ([@B38]). An increase in protein phosphorylation at tyrosine residues pointed to NET-mediated activation of intracellular signaling in platelets. Accordingly, Fuchs et al. ([@B39]) have demonstrated a role for DNA as well as histones H3 and H4 in NET-induced platelet aggregation *in vitro* ([@B39]). These results strongly hint at a positive feedback loop between platelets and NETs. NET-dependent platelet aggregation, however, was shown to be unaffected by DNase or heparin, suggesting mechanisms independent of the DNA scaffold and thrombin. Inhibition of cathepsin G, a NET component, reduced surface expression of CD62P and PS exposure in platelets, whereas blockade of GPIIb/IIIa significantly inhibited platelet aggregation without affecting CD62P expression ([@B38]). These observations suggest multiple pathways are likely involved in NET-induced platelet activation and aggregation.
Furthermore, NETs have been implicated in directly inducing thrombin generation through both platelet-dependent and independent mechanisms. NETs released from PMA-stimulated neutrophils have been shown to increase thrombin generation in platelet-poor plasma. This activation required coagulation factors XII and XI, pointing to the involvement of the intrinsic coagulation pathway ([@B71]). Importantly, this mechanism was DNA-dependent, as thrombin generation was abrogated when DNase was added to the system ([@B71], [@B72]), although the exact role of the DNA scaffold in this context is unknown. Given that coagulation factors optimally assemble on negatively-charged surfaces (such as the PS-rich plasma membrane of activated platelets and microparticles), and that DNA is a negatively-charged molecule, perhaps the DNA backbone of NETs serves as a somewhat ideal surface for the formation of coagulation factor complexes. Importantly, some studies have demonstrated a failure of purified NETs to induce thrombin generation. Whereas the purified DNA backbone from NETs was able to induce thrombin generation in platelet-poor plasma, intact NETs failed to generate active thrombin, pointing to clear differences between *in vivo* and *in vitro* assays and suggesting qualitative differences in NETs themselves exist ([@B73]). In the presence of platelets, NETs were also shown to contribute to thrombin generation. The effect was dependent on platelet expression of TLR-2 and TLR-4, suggesting that NET components may act as receptor agonists, inducing platelet activation. Interestingly, DNase potentiated platelet- and NET-driven thrombin generation, likely because dismantling of NETs led to enhanced release of NET components, making them more readily available to act as platelet agonists ([@B71]).
In addition to NET production, PMA-stimulated neutrophils have also been shown to release microparticles, which attach themselves to NETs via PS residues. Blocking this interaction reduced NET-dependent thrombin generation, pointing to a role for neutrophil-derived microparticles in addition to DNA and other NET components ([@B72]). While these reports were largely done *in vitro*, the direct effect of NETs on coagulation *in vivo* has also been investigated ([@B19], [@B72]). Using a model of sepsis following CLP, increased cell-free DNA in plasma was shown to be largely derived from neutrophils. Plasma levels of DNA-histone complexes were significantly inhibited in neutrophil-depleted animals, strongly suggesting that neutrophils undergo NETosis in the vasculature during sepsis. Thrombin-antithrombin (TAT) complex levels, a marker of thrombin generation, were increased 24 h after CLP. Moreover, thrombin generation *ex vivo* was decreased in these animals due to consumption of coagulation factors, a clinical manifestation of disseminated intravascular coagulation (DIC) in septic patients. In animals treated with DNase prior to CLP, thrombin generation *ex vivo* was restored, placing NETs as a critical factor for the depletion of coagulation factors during sepsis ([@B72]). Additionally, blockade of NETs has been shown to reduce thrombin activity in the liver and lung microvasculature, and inhibition of thrombin prevents NET-induced liver damage in an *in vivo* model of *E. coli*-induced sepsis ([@B19]), providing a direct link between NETs and coagulation. Interestingly, NETs have been shown to contribute to the occlusion of vessels in the lung microvasculature independently of thrombin generation in animals deficient for DNase1 and 3 during sepsis ([@B74]). These data indicate that NETs engage multiple mechanisms leading to microvascular obstruction, impairing organ perfusion and driving tissue damage.
Taken together, these studies have contributed greatly to mapping of the molecular mechanisms underlying platelet-NET interplay. While these mechanisms remain incompletely known, crucial molecules, such as CD62P and HMGB1 in platelets, PSGL1 in neutrophils and cathepsin G, histones and DNA in NETs, have been identified as potential targets for uncoupling immunity and coagulation. Targeting of these interactions, however, may have profound impact on host defense, underscoring the need for studying platelet-neutrophil crosstalk in *in vivo* models of infection and inflammatory disease.
Targeting Platelet-NET Interactions in Infection-Induced Vascular Dysfunction {#s5}
=============================================================================
In support of experimental evidence of NET-induced coagulation, a correlation between NETs and thrombosis has been previously demonstrated in clinical studies ([@B75]--[@B77]). Various components of NETs (cell-free DNA, citrullinated H3 and nucleosomes) were shown to be significantly increased following acute ischemic stroke. Levels of citrullinated H3 were also associated with mortality at the 1-year follow-up assessment of the study, which included over 200 patients with acute ischemic stroke ([@B75]). Of note, H3 citrullination weakens histone binding to negatively charged DNA, which in turn favors chromatin decondensation, a critical step for NETosis ([@B78]). Furthermore, a descriptive study of the composition of thrombi retrieved from ischemic stroke patients has revealed the presence of activated neutrophils (CD66b+ and neutrophil elastase+ granulocytes) as well as NETs (extracellular DNA with citrullinated H3) ([@B76]). In the context of infectious diseases, a study by Yang et al. ([@B77]) compared the ability of neutrophils from septic patients to undergo NETosis and its implications for thrombin and fibrin generation ([@B77]). Neutrophils from septic patients were able to promote thrombin and fibrin generation in the presence of control plasma in a DNA-dependent manner. Thus, NET release alone was associated with risk of thromboembolism. However, it is still unclear if NETs are causative of thrombosis or if NETosis is a consequence of thrombus formation.
Clinical evidence has supported the role of NETs in driving coagulation and ultimately, vascular dysfunction in both primarily cardiovascular disorders (i.e., ischemic stroke) as well as conditions initiated by infections, such as sepsis. These reports, however, did not delve into the implications of disrupting NET-induced coagulation for disease outcome. Perhaps the main concern when considering this strategy is the impairment of the innate immune response. To address this issue, animal models of systemic or highly invasive infections have been very insightful ([Figure 1B](#F1){ref-type="fig"}). Infective endocarditis is a condition involving thrombi formation following an immune response to bacterial colonization in the heart. Vegetations, a pathological feature of endocarditis, are composed of bacteria embedded in a mass of platelets, fibrin, and immune cells, such as neutrophils. In a rat model of *Streptococcus mutans*-induced endocarditis, NETs were identified in vegetations on damaged heart valves ([@B18]). Accordingly, pre-treatment with DNase I, which dismantles the NET web-like structure, led to a decrease in vegetation weight and bacterial load. However, the effect was also accompanied by increased bacterial dissemination. These results suggest a protective role of thrombus and NET formation on heart valves, at least in restricting pathogen invasion and dissemination throughout the body. Indeed, *in vitro*, in the presence of neutrophils, platelets were shown to form aggregates around bacteria. The effect was abrogated in the presence of DNase I, supporting the involvement of NETs in pathogen trapping ([@B18]).
Furthermore, in models of bacterial sepsis, NET formation, platelet aggregation and thrombin activity were associated with impaired perfusion of the liver microvasculature and organ damage. In PAD4^−/−^ mice (unable to release NETs from neutrophils), or following pre-treatment with DNase I, thrombin activity was markedly reduced, suggesting that NETs directly contributed to thrombin activation, likely through platelet-dependent mechanisms. Moreover, organ perfusion and function were improved in the absence of NETs, suggesting that direct targeting of NETs may be beneficial in the context of disseminated infections characterized by overt inflammation, such as sepsis ([@B19]). Importantly, this beneficial effect of NET prevention/dismantlement is very much context dependent. In a study by Lefrançais et al. ([@B20]), NETs, as expected, were associated with increased inflammation, lung damage and early mortality in a model of bacteria-induced lung injury ([@B20]). However, NET deficiency, or removal of NETs with DNase I, was associated with increased bacterial loads, demonstrating a clear role for NETs in restricting pathogen dissemination. Critically, the survival rate of animals in this model was only improved if NETs were targeted at earlier time points post-infection. At later time points (\>40 h after intratracheal administration of *S. aureus*), blockade of NETs was no longer protective. These results suggest that although NETs play a role in immunopathology early in infection, their role in trapping and sequestering microorganisms is also critical to limit dissemination of infection, and as such, complete abrogation of NET production may be just as deleterious to the host as NET-induced pathology.
Summary {#s6}
=======
Platelet-neutrophil interactions are undoubtedly a two-way relationship due, in large part, to the role of NETs in modulating both platelet and neutrophil function. Moreover, clinical and experimental studies have supported that NET-mediated coagulation and immunity are critical to disease outcome. Perhaps not surprisingly, NETs seem to play a dual role in models of infectious diseases: they orchestrate both immunopathology and infection clearance. Interestingly, the direct participation of NETs in amplifying coagulation through platelet activation, thrombin generation and microparticle release seems to ultimately underlie NET-induced immunopathology. Importantly, while blocking or dismantling NETs ameliorates coagulation dysfunction, it may also impair pathogen clearance. Although separating immunity and coagulopathy is the ultimate goal, the question remains: is this simply a matter of fine-tuning platelet-neutrophil interactions, or is targeting the procoagulant components of NETs the key to this therapeutic avenue? In the end, a more thorough understanding of the molecular mechanisms underlying NET-driven coagulation will be needed if we are to uncouple immunity and coagulation in the setting of infectious disease.
Author Contributions {#s7}
====================
AZ and CJ contributed to manuscript generation and revision and read and approved the submitted version.
Conflict of Interest Statement
------------------------------
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
[^1]: Edited by: Marie Lordkipanidzé, Université de Montréal, Canada
[^2]: Reviewed by: Nigel S. Key, University of North Carolina at Chapel Hill, United States; Daniel Duerschmied, Department of Cardiology, University of Freiburg, Germany
[^3]: This article was submitted to Atherosclerosis and Vascular Medicine, a section of the journal Frontiers in Cardiovascular Medicine